This post introduces the upcoming Drools and jBPM persistence API. The motivation for creating a persistence API that is not bound to JPA (as persistence in Drools and jBPM was until the 7.0.0 release) is to allow clean integration of persistence mechanisms other than JPA. While JPA is a great API, it is tightly bound to a traditional RDBMS model, with the drawbacks inherited from there: it is hard to scale and difficult to get good performance from on ever-growing systems. The new API opens the door to integrating various general NoSQL databases, as well as to creating tightly tailor-made persistence mechanisms, to achieve optimal performance and scalability.
At the time of this writing, several implementations have been made: the default JPA mechanism, two generic NoSQL implementations backed by Infinispan and MapDB (which will be available as contributions), and a single tailor-made NoSQL implementation discussed briefly in this post.
The changes made to the Drools and jBPM persistence mechanisms, their new features, and the way they allow building clean new persistence implementations for KIE components form the basis of a new, soon-to-be-added experimental MapDB integration module. The existing Infinispan adaptation has been changed to accommodate the new structure.
Because of this refactor, we can now have other persistence implementations for KIE without depending on JPA, unless our specific persistence implementation is JPA based. This has implied, however, a set of changes:
Creation of drools-persistence-api and jbpm-persistence-api
In version 6, most of the persistence components and interfaces were present only in the JPA projects, from which they had to be reused by other persistence implementations. We refactored these projects so that the interfaces can be reused without pulling in the JPA dependencies each time. Here’s the new set of dependencies:
The first thing to mention about the classes in this refactor is that the persistence model used by KIE components for KieSessions, WorkItems, ProcessInstances and CorrelationKeys is no longer a JPA class, but an interface. These interfaces are:
PersistentSession: For the JPA implementation, this interface is implemented by SessionInfo. For the upcoming MapDB implementation, MapDBSession is used.
PersistentWorkItem: For the JPA implementation, this interface is implemented by WorkItemInfo; for MapDB, by MapDBWorkItem.
PersistentProcessInstance: For the JPA implementation, this interface is implemented by ProcessInstanceInfo; for MapDB, by MapDBProcessInstance.
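To make the idea concrete, here is a toy sketch of what a non-JPA implementation of one of these interfaces looks like. The trimmed-down interface and the class name below are invented for illustration; the real interfaces in drools-persistence-api have more methods.

```java
// A trimmed-down stand-in for the real PersistentSession interface in
// drools-persistence-api (illustration only; the actual interface differs).
interface PersistentSession {
    Long getId();
    byte[] getData();          // marshalled session state
    void setData(byte[] data);
}

// A hypothetical non-JPA implementation: no annotations, no entity manager,
// just a plain object that a key-value store (e.g. MapDB) could serialize.
class MapBackedSession implements PersistentSession {
    private final Long id;
    private byte[] data;

    MapBackedSession(Long id) { this.id = id; }

    public Long getId() { return id; }
    public byte[] getData() { return data; }
    public void setData(byte[] data) { this.data = data; }
}
```

The point is that the runtime only depends on the interface, so the backing store is free to keep the state however it likes.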
The important part is that, if you were using the JPA implementation, you can continue doing so with the same classes as before: all components are prepared to work with these interfaces. Which brings us to our next point.
PersistenceContext, ProcessPersistenceContext and TaskPersistenceContext refactors
The persistence context interfaces in version 6 depended on the JPA implementations of the model. In order to work with other persistence mechanisms, they had to be refactored to work with the runtime model (ProcessInstance, KieSession, and WorkItem, respectively), build the implementations locally, and be able to return the right element when requested by other components (ProcessInstanceManager, SignalManager, etc.).
Also, for components like TaskPersistenceContext, the task service code used multiple dynamic HQL queries which wouldn’t be implementable in another persistence model. To avoid this, they were changed to use a Criteria-like mechanism. This way, the different filtering objects can be interpreted by each persistence mechanism in its own manner to create the queries it requires.
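The idea of replacing hard-coded HQL with backend-agnostic filter objects can be sketched like this. All names below are invented for illustration; the actual query API in the persistence modules differs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A hypothetical filter object: it describes *what* to match,
// and each persistence backend decides *how* to execute it.
class TaskFilter {
    final List<Predicate<String>> statusChecks = new ArrayList<>();

    TaskFilter statusEquals(String status) {
        statusChecks.add(status::equals);
        return this;
    }

    // An in-memory backend just applies the predicates; a JPA backend
    // would instead translate the same filter into a Criteria query.
    List<String> apply(List<String> statuses) {
        List<String> out = new ArrayList<>();
        for (String s : statuses)
            if (statusChecks.stream().allMatch(p -> p.test(s))) out.add(s);
        return out;
    }
}
```

The filter carries no SQL, HQL, or store-specific syntax, which is what makes it reusable across persistence mechanisms.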
Task model refactor
The way the current task model relates tasks to content, comment, attachment and deadline objects was also dependent on the way JPA stores that information (or, more precisely, on the way ORMs relate those types). So a refactor of the task persistence context interface was introduced to handle the relation between components for us, if desired. Most of the methods are still there, and the different tables can still be used, but if we just want to use a Task to bind everything together as one object (the way a NoSQL implementation would do it), we now can. The JPA implementation still relates objects by ID. Other persistence mechanisms like MapDB just add the sub-object to the task object, which they can fetch from internal indexes.
Another thing that was changed for the task model is that, before, we had different interfaces to represent a Task (Task, InternalTask, TaskSummary, etc) that were incompatible with each other. For JPA, this was ok, because they would represent different views of the same data.
But in general, the motivation behind this mix of interfaces is to allow optimizations toward table-based stores, which is by no means a bad thing. For non-table-based stores, however, these optimizations might not make sense. Making these interfaces compatible allows implementations where the runtime objects retrieved from the store implement several of the interfaces at once without breaking any runtime behavior. Making the interfaces compatible could be viewed as a first step; a further refinement would be to have these interfaces extend each other, to underline the model and make the implementations simpler.
(For other types of implementation like MapDB, where it will always be cheaper to get the Task object directly than to create a different object, we needed to be able to return a Task and have it work as a TaskSummary when that interface is requested. All interfaces now share the same method names to allow for this.)
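The "matching method names" idea can be shown with a toy example. Both interfaces below are simplified stand-ins for the real Task and TaskSummary, which have many more methods.

```java
// Simplified stand-ins for the real interfaces: because the method
// names and signatures match, one stored object can serve as both views.
interface TaskView    { long getId(); String getName(); }
interface SummaryView { long getId(); String getName(); }

class StoredTask implements TaskView, SummaryView {
    private final long id;
    private final String name;

    StoredTask(long id, String name) { this.id = id; this.name = name; }

    public long getId() { return id; }
    public String getName() { return name; }
}
```

A NoSQL store can hand the same object to callers expecting either view, with no copying or projection step.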
Extensible TimerJobFactoryManager / TimerService
In version 6, the only possible implementations of a TimerJobFactoryManager were bound at construction time to the values of the TimerJobFactoryType enum. A refactor was done to extend the existing types and allow other kinds of timer job factories to be added dynamically.
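The difference between a fixed enum and dynamic registration can be sketched as a simple registry. The names below are hypothetical; the real extension point lives in the Drools timer SPI.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Before: factory choices were limited to the constants of an enum.
// After: new factory types can be registered at runtime under a key.
class TimerJobFactoryRegistry {
    private final Map<String, Supplier<Object>> factories = new HashMap<>();

    void register(String type, Supplier<Object> factory) {
        factories.put(type, factory);
    }

    Object create(String type) {
        Supplier<Object> f = factories.get(type);
        if (f == null) throw new IllegalArgumentException("unknown type: " + type);
        return f.get();
    }
}
```

A new persistence module can then plug in its own timer job factory without touching the core enum.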
Creating your own persistence. The MapDB case
All these interfaces can be implemented anew to create a completely different persistence model, if desired. For MapDB, this is exactly what was done. In the case of the MapDB implementation that is still under review, there are three new modules:
These are meant to implement the whole task model using MapDB implementation classes. Anyone wishing to create another type of implementation for the KIE components can follow these steps to get an implementation going:
Create modules for mixing the persistence API projects with a persistence implementation mechanism dependencies
Create a model implementation based on the given interfaces with all necessary configurations and annotations
Create your own (Process|Task)PersistenceContext(Manager) classes, to implement how to store persistent objects
Create your own managers (WorkItemManager, ProcessInstanceManager, SignalManager) and factories with all the necessary extra steps to persist your model.
Create your own KieStoreServices implementation that creates a session with the required configuration, and add it to the classpath
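The heart of the steps above, the persistence context, can be sketched with a minimal in-memory stand-in. This is a toy; a real implementation would implement the (Process|Task)PersistenceContext interfaces from the API modules.

```java
import java.util.HashMap;
import java.util.Map;

// A toy persistence context: it stores "persistent" objects in a map,
// the way a MapDB- or cache-backed implementation would store them in
// its own structures instead of going through a JPA EntityManager.
class InMemoryPersistenceContext {
    private final Map<Long, Object> store = new HashMap<>();
    private long nextId = 1;

    long persist(Object o) {
        long id = nextId++;
        store.put(id, o);
        return id;
    }

    Object find(long id) { return store.get(id); }

    void remove(long id) { store.remove(id); }
}
```

Your managers and factories would then delegate to a context like this one, and your KieStoreServices implementation would wire it into the session.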
You’re not alone: The MultiSupport case
MultiSupport is a Denmark-based company that has used this refactor to create its own persistence implementation. They provide an archiving product focused on an O(1) archive retrieval system, and had a strong interest in getting their internal processes to work using the same persistence mechanism they use for their archives.
We worked on an implementation that allowed an improvement in response time for large databases. Given their internal mechanism for lookup and retrieval of data, they were able to create an implementation that runs millions of active tasks with virtually no degradation in response time.
In MultiSupport we have used the persistence API to create a tailored store, based on our in-house storage engine. Our motivation has been to provide unlimited scalability, extended search capabilities, simple distribution, and a level of performance we struggled to achieve with the JPA implementation. We think this can be used as a showcase of just how far you can go with the new persistence API. With the current JPA implementation and a dedicated SQL server, we achieved an initial performance of fewer than 10 ‘start process’ operations per second; with the upcoming release, on a single application server, we achieve more than tenfold that.
Hello everyone. This post is just to let you know that jBPM6 Developer Guide is about to get published, and you can pre-order it from here and get from a 20% to a 37% discount on your order! With this book, you can learn how to:
Model and implement different business processes using the BPMN2 standard notation
Understand how and when to use the different tools provided by the JBoss Business Process Management (BPM) platform
Learn how to model complex business scenarios and environments through a step-by-step approach
Here you can find a list of what you will find in each chapter:
Chapter 1, Why Do We Need Business Process Management?, introduces the BPM discipline. This chapter provides the basis for the rest of the book, explaining why and how the jBPM6 project has been designed and the path its evolution will follow.

Chapter 2, BPM Systems Structure, goes in depth into the main pieces and components of a Business Process Management System (BPMS). This chapter introduces the concept of a BPMS as the natural follow-up to an understanding of the BPM discipline. The reader will find a deep, technical explanation of how a BPM system core can be built from scratch and how it interacts with the rest of the components in the BPMS infrastructure. This chapter also describes the intimate relationship between the Drools and jBPM projects, one of the key advantages of jBPM6 in comparison with other BPMSs, as well as existing methodologies for connecting a BPMS with other systems.

Chapter 3, Using BPMN 2.0 to Model Business Scenarios, covers the main constructs used to model business processes, guiding the reader through an example that illustrates the most useful modeling patterns. The BPMN 2.0 specification has become the de facto standard for modeling executable business processes since its release in early 2011, and it is recommended for any BPM implementation, even outside the scope of jBPM6.

Chapter 4, Understanding the Knowledge Is Everything Workbench, takes a look at the tooling provided by the jBPM6 project, which enables the reader both to define new processes and to configure a runtime to execute them. The overall architecture of the tooling is covered as well.

Chapter 5, Creating a Process Project in the KIE Workbench, dives into the steps required to create a process definition with the existing tooling, as well as to test and run it. The BPMN 2.0 specification is put into practice as the reader creates an executable process and a compiled project where the runtime specifications are defined.

Chapter 6, Human Interactions, covers in depth the Human Task component inside jBPM6. A big feature of a BPMS is the capability to coordinate human and system interactions. The chapter also describes how the existing tooling builds a user interface using the concepts of task lists and task forms, exposing end users involved in the execution of multiple process definitions’ tasks to a common interface.

Chapter 7, Defining Your Environment with the Runtime Manager, covers the different strategies provided to configure an environment to run our processes. The reader will see the configurations for connecting external systems, human task components, and persistence strategies, the relation a specific process execution has with an environment, and methods for defining a custom runtime configuration.

Chapter 8, Implementing Persistence and Transactions, covers the mechanisms shared between the Drools and jBPM projects for storing information and defining transaction boundaries. When we want to support processes that coordinate systems and people over long periods of time, we need to understand how the process information can be persisted.

Chapter 9, Integration with Other Knowledge Definitions, gives a brief introduction to the Drools Rule Engine, which is used to mix business processes with business rules to define advanced and complex scenarios. It also covers Drools Fusion, an added feature of the Drools Rule Engine that provides temporal reasoning, allowing business processes to be monitored, improved, and covered by business scenarios that require temporal inferences.

Chapter 10, KIE Workbench Integration with External Systems, describes the ways in which the provided tooling can be extended with extra features, along with a description of all the different extension points provided by the API and exposed by the tooling. A set of good practices is described to give the reader a comprehensive way to deal with the different scenarios a BPMS will likely face.

Appendix A, The UberFire Framework, goes into detail about the base utility framework used by the KIE Workbench to define its user interface. The reader will learn the structure and use of the framework, along with a demonstration that enables the extension of any component in the workbench distribution you choose.

Hope you like it! Cheers,
In this opportunity we’ll go over one of the exercises you will be able to see in the Drools and jBPM Public Training. It involves rule execution for a particular case:
We’re going to represent the scenario of a cat trapped on a limb, and all the things needed to provide solutions to the situation. For that, we will need:
A Pet: with a name, a type and a position
A Person: will have a pet assigned to him, and can call the pet down
A Firefighter: Will be able to get the cat down from the tree as a last resort
Once we have a representation, we need to start defining rules to determine different types of situations and act accordingly:

Rule “Call Cat when it is in a tree”: When my Cat is on a limb in a tree, Then I will call my Cat
Rule “Call the Fire Department”: When my Cat is on a limb and it doesn’t come down when I call, Then call the Fire Department
Rule “Firefighter gets the cat down”: When the Firefighter can reach the Cat, Then the Firefighter follows steps to retrieve the Cat

Each of these rules will have a specific DRL representation, based on the model we defined:

rule "Call Cat when it is in a tree"
when
    $p: Person($pet: pet, petCallCount == 0)
    $cat: Pet(this == $pet, position == "on a limb", type == PetType.CAT)
then
    //$cat.getName() + " come down!"
    $p.setPetCallCount($p.getPetCallCount() + 1);
    update($p);
end

rule "Call the fire department"
when
    $p: Person($pet: pet, petCallCount > 0)
    $cat: Pet(this == $pet, position == "on a limb", type == PetType.CAT)
then
    Firefighter firefighter = new Firefighter("Fred");
    insert(firefighter);
end

rule "Firefighter gets the cat down"
when
    $f: Firefighter()
    $p: Person($pet: pet, petCallCount > 0)
    $cat: Pet(this == $pet, position == "on a limb", type == PetType.CAT)
then
    $cat.setPosition("on the street");
    update($cat);
    retract($f);
end
And then, some Java code to end up firing said rules:
KieServices kservices = KieServices.Factory.get();
KieSession ksession = kservices.getKieClasspathContainer().newKieSession();

Person person = new Person("John!");
Pet pet = new Pet("mittens", "on a limb", Pet.PetType.CAT);
person.setPet(pet);

ksession.insert(person);
ksession.fireAllRules();
When we have these components defined, we will take advantage of the course to start modifying it to see how rules interact with each other, by:
Creating rules that insert dogs
Creating rules that make dogs chase cats that are on the same place as they are
If you’re going to the Red Hat Summit in April, take advantage of this opportunity: Plugtree is organizing a public training on Drools and jBPM the week after the Red Hat Summit, in the San Francisco area, April 21st to 25th, in four different modalities:
Drools: April 21st to 23rd
jBPM: April 21st, 24th and 25th
Full (Drools + jBPM): April 21st to the 25th
This workshop introduces Business Process and Rules Management, preparing you to be immediately effective in using both Drools and jBPM to improve your applications. In the training, we will cover:
All the different syntax for defining rules
Drools runtime configuration tricks
Writing BPMN2 files and projects from scratch, to the point of having runnable modules.
jBPM configuration to gain full control of your process-based applications.
Kie Workbench user guides, including tips for integration with other systems.
Integration tips for architectural design of rule-based and process-based applications.
If you’re interested in this training, you can download the full agenda, or click here to register. You can contact us at email@example.com if you have any questions. Hope to see you there! We offer options for Drools only (days 1 to 3), jBPM only (days 1, 4 and 5) and the full training (days 1 to 5). Anyone can attend these trainings, regardless of their attendance at the Red Hat Summit.
Greetings everyone! In this post I’ll be showing one of the exercises we will be playing with in the next Drools & jBPM Training in London, October 21-25. There’s still time to register so go ahead!
This exercise shows a process interaction around something we developers and analysts know pretty well: managing requirements in a sprint. It’s something we do every day, so we don’t have to waste much time explaining the domain and can get to the process definitions, and how to do each task, really fast.
It’s also a very good example to run through all the things related to a process execution:
Human tasks: Writing code, performing QA analysis, reporting bugs and fixing them.
Automated tasks: Jenkins interactions in a continuous integration environment, automatic deploys and tests, and email notifications all have a use in this small case.
Process interactions: Each requirement in a sprint is a process by itself, and the sprint runs as a process too.
Rules execution tasks: We can use them to validate requirements, define initial priorities, and probably a lot more.
The process looks something like this for the requirements: you can see that we have the whole requirement life cycle defined in this process definition; when developers work on it, when it has to be tested, what to do if bugs are found… and finally, the process instance is completed when no more bugs are found in the requirement implementation.
This one is for the sprint: it’s a bit more cryptic, but the objective is simple. It starts by distributing all requirement priorities using rules, then starts each requirement in its own process using a script task. The process then finishes when a signal is sent to the instance that either all requirements are completed or the sprint was manually closed.
We will show you a small test case where you can simulate all the steps of these processes, learn how to make them intercommunicate using different methods, how to run them using custom handlers, and also asynchronous executors, and we will have a lot of fun learning how to add new features to the processes and the runtime. You can download the example from here to play with it in the meantime!
I’m preparing a workshop that introduces Business Process Management and prepares you to be immediately effective in using both Drools and jBPM to improve your applications. You will learn how to utilize the different stages of BPM where development is involved, for both versions 5 and 6 of the Drools and jBPM platform. We’ll discuss several best practices that allow for effective BPM, and where the jBPM components are most suitably placed within those practices. We’ll also cover the best way to approach the software-writing work involved in running effective business processes and rules, and see how this allows the best fit from an end-user perspective.

Where? London, England, Number 1 Poultry, EC2R 8JR

When? October 21-25, filled with Q&A sessions and workshops, from 10:00 to 18:00, with the last two hours every day reserved for specialized questions and workshopping.

What will it cover? A full theoretical and technical overview of Drools and jBPM. You can download the full agenda from here.

We offer different options depending on your interest:
Introduction: October 21. Full theoretical introduction to Drools and jBPM components. USD 500.00
Drools: October 21-23. Introduction + full technical coverage of Drools. USD 1350.00
jBPM: October 21, 24 and 25. Introduction + full technical coverage of jBPM. USD 1350.00
Full: October 21 to 25. USD 1728.00 (USD 1929.00 after 9/21/13). Register now and get the early bird pricing!
This is a topic I’ve wanted to discuss for a long time. This post shows you how to use a new component I’ve made called jbpm-rollback-api, a configurable module that allows you to roll back persistent jBPM process instances to a previous step. It makes this possible by just adding an environment variable, a process event listener, and an extra class to the jBPM persistence unit. I’ll discuss it in as much detail as possible in Plugtree’s next Public Training in London; I invite you to register.
Why? When running process instances, especially during the first runs of a new BPM-based project, you might reach a point where you wish you had done something different along the steps of your business process (maybe specifying a different value for a variable, or you end up on a path you didn’t want in the first place). If you can’t change that aspect of the process instance, you need to drop it altogether. This isn’t an issue when running from a JUnit test case, but if you hit this in a running system, you might not want to drop the process instance and start again, especially when it involves other people’s work. The possibility of rolling back the tasks of a process allows you to return to a previous state of the process instance without having to start over.
How? The whole idea spins around the way the process instances are persisted today in the database. Here’s a nice explanation of the database persistence if you wish to go into detail about it. In short, there is a small blob of data marshalled into each ProcessInstance row in the database. Since it is overwritten every time the process instance changes, the rollback module takes a copy of that blob and stores it aside to have it available after the process changes. It can’t just copy it every time it pleases when it is inside a database transacted operation, so it does it whenever the session reaches a safe state (that is, after the transaction is finished and the session method is ready to return). And it can’t just do it for all live process instances, that would be too expensive performance-wise. So it does it only for the process that changed during the last transaction.
Overall the configuration looks like this:
The way it works is by four simple components:
ProcessSnapshotLogger: This class serves two purposes:
A process event listener to monitor for any process instance changes within a persistent session. If a process instance changes, we mark it as a candidate to persist a snapshot after the knowledge session transaction is done. Also, if a process instance is completed, we mark the process snapshots of that instance for deletion, to keep the runtime database at a steady size.
It also works as an interceptor to wait for all safe states in a persistent session, to persist any changed process instances. The interceptor is added as a step every time a command of the command based knowledge session finishes executing.
ProcessSnapshot: A database entity designed to store the full blob representation of a process instance every time it changes, to be able to reload it on demand afterwards.
ProcessSnapshotAcceptor: Taking snapshots of process instances can affect performance, so this class provides a very simple interface to configure which process instances to monitor for rollback, omitting all of them by default. A few implementations are available that let you select all instances of a given process definition ID, or all instances; you can also create your own by implementing the method boolean accept(String processId, long processInstanceId).
ProcessRollback: A utility class to query for old snapshots of a process instance and paste them on top of the preexisting process instance. You can use the goBack(KieSession ksession, long processInstanceId) static method to go back one step, or the overloaded static method goBack(KieSession ksession, long processInstanceId, int numberOfSteps) to go back as many steps as you like.
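Putting these pieces together, here is a toy sketch of the snapshot-and-rollback idea. All class and method names below are simplified stand-ins, not the actual jbpm-rollback-api types.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy acceptor: decides which process instances are worth snapshotting.
interface SnapshotAcceptor {
    boolean accept(String processId, long processInstanceId);
}

// Toy snapshot store: keeps a stack of state blobs per instance,
// so "go back one step" is just a pop.
class SnapshotStore {
    private final Map<Long, Deque<byte[]>> snapshots = new HashMap<>();
    private final SnapshotAcceptor acceptor;

    SnapshotStore(SnapshotAcceptor acceptor) { this.acceptor = acceptor; }

    void record(String processId, long instanceId, byte[] state) {
        if (!acceptor.accept(processId, instanceId)) return;   // skip unmonitored instances
        snapshots.computeIfAbsent(instanceId, k -> new ArrayDeque<>())
                 .push(state.clone());
    }

    byte[] goBack(long instanceId) {
        Deque<byte[]> stack = snapshots.get(instanceId);
        return (stack == null || stack.isEmpty()) ? null : stack.pop();
    }
}
```

In the real module, the blob recorded here is the marshalled ProcessInstance row, and the restore step re-marshals it into the session.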
Internally, the rollback recreates the old process instance and reactivates any nodes that were alive at the moment of the snapshot. At the moment (and in this form) it doesn’t send any signals to external systems that steps taken in the process need to be rolled back. However, this would be a natural next step for this module: by creating a RollbackableWorkItemHandler interface with a rollbackWorkItem(WorkItem item, WorkItemManager manager) method that people could implement, the rollback could go one step at a time and notify any external systems about a rollback being performed.

This is one of the many subjects we would love to discuss and teach: we at Plugtree are organizing a Public Training in London on October 21st to 25th at Number 1 Poultry. We’ll cover Drools 5 and 6, and jBPM 5 and 6, in as much detail as possible. We’ll introduce both the BPM and AI theory, as well as the technical specifics of the different components, so you can take the most advantage of Drools and jBPM. Here’s an overall agenda:
Day 1: Introduction to all components, for technical and non-technical folks alike
Days 2 and 3: Focus on Drools components, both versions 5 and 6, from theory to practice in as much detail as possible. We’ll also cover rule writing in most formats and best practices
Days 4 and 5: Focus on jBPM components, both versions 5 and 6, from theory to practice in as much detail as possible. We’ll also cover BPMN2 writing in as much detail as possible.
You can download the full detailed agenda. We offer different packages for group registrations and different interests. If you wish to hear more about special packages, feel free to contact us. You can click here to register. Take advantage of the early bird pricing!
If you wish to download and play with the code discussed here, you can download it from here
Hello and welcome to a post in which I intend to show you how to create your own implementation of drools and jBPM persistence. I’ve worked on an infinispan based persistence scheme for drools objects and I learnt a lot in the process. It’s my intention to give you a few pointers if you wish to do something of the sort.
If you’re reading this, you probably already have a “why” for redefining the persistence scheme that drools uses, but it’s good to go over some good reasons to do something like this. The most important one is that the JPA persistence scheme designed for drools might not meet your needs. Some of the most common reasons I’ve found are these:
The given model is not enough for my design: The objects created to persist the drools components (sessions, process instances, work items and so on) are kept as small as possible to allow the best performance on the database, and most of the operational data is stored in byte arrays mapped to blob objects. This scheme is enough for the drools and jBPM runtime to function, but it might not be enough for your domain. You might want to keep the runtime information in a scheme that is easier to query from outside tools, and to do that you would need to enrich the data model, or even create one of your own.
The persistence I’m using is not compatible with JPA: There are a lot of persistence implementations out there that no longer use databases as we once knew them (distributed caches, key-value storages, NoSQL databases), and the model usually needs extra mappings and special treatment when persisting in such storages. For that, sometimes JPA is not our cup of tea.
I need to load special entities from different sources every time a drools component is loaded: When we have complex objects and/or external databases, sometimes we want new models to relate in a special way to the objects we have. Maybe we want to make sure our sessions are bound to our model in a special way because it makes sense to our business model. To do so we would have to alter the model.
In order to make our own persistence scheme for our sessions, we need to understand clearly how the JPA scheme is built, to use it as a template to build our own. This class diagram shows how the JPA persistence scheme for the knowledge session is implemented:
Looks complicated, right? Don’t worry. We’ll go step by step to understand how it works.
First of all, you can see that we have two implementations of the StatefulKnowledgeSession (or KieSession, if you’re using Drools 6). The one that does all the “drools magic” is StatefulKnowledgeSessionImpl, and the one we will be using is CommandBasedStatefulKnowledgeSession. It has nothing to do with persistence, but it helps a lot with it by wrapping every method call in a command object and delegating its execution to a command service. So, for example, if you call the fireAllRules method on this type of session, it will create a FireAllRulesCommand object and give it to another class to execute.
This command-based implementation allows us to do exactly what we need to implement persistence in a Drools environment: it lets us run actions before and after every method call made to the session. That’s where the SingleSessionCommandService class comes in handy: this command service contains a StatefulKnowledgeSessionImpl and a PersistenceContextManager. Every time a command has to be executed, this class creates or loads a SessionInfo object and tells the persistence context to save it with all the state of the StatefulKnowledgeSessionImpl.
That’s the most complicated part: the one that implements the session persistence. Persistence of pretty much everything else is done easily through a set of given interfaces that provide methods describing how to load everything else related to a session (process instances, work items and signals). As long as you create a proper manager and its factory, you can delegate to them to store anything anywhere (or do anything you want, for that matter).
So, after seeing all the components, it’s a good time to start thinking of how to create our own implementation. For this example, we’ve created an Infinispan based persistence scheme and we will show you all the steps we took to do it.
Step 1: (re)define the model
Most of the time, when we want to persist Drools objects our own way, we might want to do it with a twist of our own. Even if we don’t wish to change the model, we might need to add special annotations for it to work with our storage framework. Another reason might be that we want to store all facts in a special way to cross-query them with some other legacy system. You can literally do this redefinition any way you want, as long as you understand that whatever model you create, the persistence scheme will serialize and deserialize it every time you call a method on the knowledge session, so always try to keep it simple.
Here’s the model we created for this case:
Nothing too fancy, just a flattened model for all things drools related. We weren’t too imaginative with this model, because we just wanted to show you that you can change it if you want to.
One thing to notice in this model is that we are still saving all the internal data of these objects pretty much the same way as it is stored for the JPA persistence. The only difference is that JPA stores it in a Blob, and we store it in a Base64 encoded string. If you wish to change the way that byte array is generated and read, you have to create your own implementations of these interfaces:
org.kie.api.marshalling.Marshaller for knowledge sessions
org.jbpm.marshalling.impl.ProcessInstanceMarshaller for process instances
But providing an example of that would take way too much time and perhaps even a whole book to explain, so we’ll skip it 🙂
Step 2: Implementing the PersistenceContext
For some cases, redefining the PersistenceContext and the PersistenceContextManager is enough to implement all your persistence requirements. The PersistenceContext is the object in charge of persisting work items and session objects, exposing methods to persist them, query them by ID, and remove them from a particular storage implementation. The PersistenceContextManager is in charge of creating the PersistenceContext, either once for the whole application or on a per-command basis. The command service will use it to persist the session and its objects when needed.
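A minimal sketch of those contracts, with an in-memory map standing in for the real storage (the names mirror the Drools interfaces, but the signatures here are illustrative, not the exact API):

```java
import java.util.HashMap;
import java.util.Map;

// Marshalled snapshot of a session, as stored by the persistence context.
class SessionInfo {
    long id;
    byte[] data;
}

// Illustrative subset of the PersistenceContext contract.
interface PersistenceContext {
    SessionInfo persist(SessionInfo entity);
    SessionInfo findSessionInfo(long id);
    void remove(SessionInfo entity);
}

// A toy implementation backed by a map; a real one talks to the actual storage.
class InMemoryPersistenceContext implements PersistenceContext {
    private final Map<Long, SessionInfo> store = new HashMap<Long, SessionInfo>();
    private long nextId = 1;

    public SessionInfo persist(SessionInfo entity) {
        if (entity.id == 0) {
            entity.id = nextId++; // assign an id on first save
        }
        store.put(entity.id, entity);
        return entity;
    }

    public SessionInfo findSessionInfo(long id) { return store.get(id); }

    public void remove(SessionInfo entity) { store.remove(entity.id); }
}
```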
In our case we implemented a PersistenceContext and a PersistenceContextManager using an Infinispan cache as storage. The different PersistenceContextManager instances have access to all configuration objects through the Environment object. We’ve used the keys already defined in Environment to store our Infinispan-related objects:
EnvironmentName.ENTITY_MANAGER_FACTORY is used to store an Infinispan based CacheManager
EnvironmentName.APP_SCOPED_ENTITY_MANAGER and EnvironmentName.CMD_SCOPED_ENTITY_MANAGER will point to an Infinispan Cache object.
You can see that code here:
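Roughly, the wiring looks like this (a sketch against the kie-api and Infinispan APIs; the generics and the choice of a single default cache are our own for illustration):

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;

// Reuse the existing JPA-oriented keys to carry the Infinispan objects,
// so the command service and context manager can look them up later.
DefaultCacheManager cacheManager = new DefaultCacheManager();
Cache<String, Object> cache = cacheManager.getCache();

Environment env = KieServices.Factory.get().newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, cacheManager);
env.set(EnvironmentName.APP_SCOPED_ENTITY_MANAGER, cache);
env.set(EnvironmentName.CMD_SCOPED_ENTITY_MANAGER, cache);
```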
At this point we have covered some very important steps in redefining our Drools persistence. Now we need to know how to configure our knowledge sessions to work with these components.
Step 3: Creating managers for our work items, process instances and signals
Now that we have our persistence contexts, we need to teach the session how to use them properly. The knowledge session has a few configurable managers that allow you to modify the default behaviour. These managers are:
org.kie.api.runtime.process.WorkItemManager: It manages when a work item is executed, connects it with the proper handler, and notifies the process instance when the work item is completed.
org.jbpm.process.instance.event.SignalManager: It manages when a signal is sent to or from a process. Since process instances might be passivated, it needs to load them back into memory before delivering the signal.
org.jbpm.process.instance.ProcessInstanceManager: It manages the actions to be taken when a process instance is created, started, modified or completed.
The JPA implementations of these interfaces already work with a persistence context manager, so most of the time you won’t need to extend them. However, with Infinispan, we have to make sure the process instance is persisted more often than with JPA, so we had to implement them differently.
Once you have these instances, you will need to create a factory for each type of manager. The interface names are the same, except with the suffix “Factory”. Each receives a knowledge session as a parameter, from which you can get the Environment object and all other configurations.
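The factory pattern involved can be sketched like this (stand-in types only; the real interfaces live under the org.jbpm packages listed above, and they receive the knowledge session rather than a plain Object):

```java
// Stand-in for one of the manager contracts (here: signals).
interface SignalManager {
    void signalEvent(String type, Object event);
}

// Its factory counterpart: same name plus the "Factory" suffix.
interface SignalManagerFactory {
    // The real factories receive the knowledge session; Object keeps the
    // sketch self-contained.
    SignalManager createSignalManager(Object session);
}

// A hypothetical Infinispan-backed factory.
class InfinispanSignalManagerFactory implements SignalManagerFactory {
    public SignalManager createSignalManager(Object session) {
        // A real implementation would read the Environment from the session
        // and grab the Infinispan cache from it before building the manager.
        return new SignalManager() {
            public void signalEvent(String type, Object event) {
                // persist the signal / reload the target process instance,
                // then deliver the event
            }
        };
    }
}
```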
Step 4: Configuring the knowledge session
Now that we have our different managers created, we need to tell our knowledge sessions to use them. To do so, you create a CommandBasedStatefulKnowledgeSession instance backed by a SingleSessionCommandService instance. The SingleSessionCommandService, as its name suggests, is a class that executes commands against one session at a time. Its constructor receives all the parameters needed to create a proper session and execute commands against it in a way that makes it persistent. Those parameters are:
KieBase: the knowledge base with the knowledge definitions for our session runtime.
KieSessionConfiguration: Where we configure the manager factories to create and dispose of work items, process instances and signals.
Environment: A bag of variables for any other purpose, where we will configure our persistence context manager objects.
sessionId (optional): If present, this parameter looks for an already existing session in the storage. Otherwise, it creates a new one.
Also, in our example we’re using Infinispan, which is not a reference-based storage but a value-based storage. This means that once you tell Infinispan to store a value, it stores a copy of it, not the actual object. Parts of Drools persistence are written with reference-based storages in mind, where you can tell the framework to persist an object, change its attributes, and see those changes stored in the database after committing the transaction. With Infinispan this wouldn’t happen, so you have to update the cache values yourself after the command execution is finished. Luckily for us, the SingleSessionCommandService allows us to do this by implementing an Interceptor.
Interceptors are basically your own command service wrapping the default one: they let you add behaviour before or after the execution of each command. Here’s a couple of diagrams to explain how it works:
As you can see, the SingleSessionCommandService allows for a command service instance to actually invoke the command’s execute method. And thanks to the interceptor extension of the command service, we can add as many as we want in chain, allowing us to have something like the next sequence diagram executing every time a command needs execution:
In our case, we created a couple of these interceptors and added them to the SingleSessionCommandService. One makes sure any changes done to a session object are stored after finishing the command. The other one allows us to do the same with process instance objects.
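The shape of such an interceptor can be sketched with stand-in types (the real ones wrap Drools’ command service and re-put the session and process instance objects into the Infinispan cache):

```java
// Stand-ins for the command machinery being wrapped.
interface Command<T> { T execute(); }
interface CommandService { <T> T execute(Command<T> cmd); }

// An interceptor is itself a command service: it delegates to the next one
// in the chain and adds behaviour around the call.
class ForceUpdateInterceptor implements CommandService {
    private final CommandService next;
    int updates = 0; // counts post-command store refreshes, for the sketch

    ForceUpdateInterceptor(CommandService next) { this.next = next; }

    public <T> T execute(Command<T> cmd) {
        T result = next.execute(cmd);
        // The command may have mutated the session object; a value-based
        // store like Infinispan won't see that, so re-put it here,
        // e.g. cache.put(sessionKey, sessionInfo).
        updates++;
        return result;
    }
}
```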
Overall, this is how we need to create our knowledge sessions at this point to actually use Infinispan as a persistence scheme:
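In outline, the setup looks roughly like this. The com.example.Infinispan* factory class names are hypothetical placeholders for our own implementations, kbase is an existing KieBase, and the exact packages of the Drools persistence classes vary between versions:

```java
KieServices ks = KieServices.Factory.get();

// Tell the session which manager factories to use via the well-known
// configuration properties.
KieSessionConfiguration conf = ks.newKieSessionConfiguration();
conf.setProperty("drools.processInstanceManagerFactory",
        "com.example.InfinispanProcessInstanceManagerFactory");
conf.setProperty("drools.signalManagerFactory",
        "com.example.InfinispanSignalManagerFactory");
conf.setProperty("drools.workItemManagerFactory",
        "com.example.InfinispanWorkItemManagerFactory");

// Environment carrying the Infinispan objects under the EnvironmentName keys.
Environment env = ks.newEnvironment();
// env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, cacheManager); etc.

// Wire the persistent command service, add our interceptors to its chain,
// and wrap it all in a command-based session.
SingleSessionCommandService commandService =
        new SingleSessionCommandService(kbase, conf, env);
CommandBasedStatefulKnowledgeSession ksession =
        new CommandBasedStatefulKnowledgeSession(commandService);
```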
Complicated, right? Don’t worry. There’s yet another couple of classes to make it easier to configure.
Step 5: Creating our own initiation service
Yes, we could write that ton of code every time we want to create our own customized persistent knowledge sessions. It’s a free world (for the most part). But you can also wrap this implementation in a single class with two exposed methods:
One to create a new session
One to load a previously existing session
Both methods create all the configuration internally, merging in whatever you wish to change. Drools provides an interface to serve as a contract for this, called org.kie.api.persistence.jpa.KieStoreServices.
We created our own implementation of this interface and also a static class to access it, called InfinispanKnowledgeService. This allows us to create the session like this:
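Usage then mirrors the existing JPAKnowledgeService; a sketch (exact method names and signatures may differ slightly between versions, and kbase, conf and env are assumed to be configured as above):

```java
// Create a brand new persistent session backed by Infinispan...
StatefulKnowledgeSession ksession =
        InfinispanKnowledgeService.newStatefulKnowledgeSession(kbase, conf, env);
long sessionId = ksession.getIdentifier();

// ...or, later, reload the very same session from the cache by its id.
StatefulKnowledgeSession loaded =
        InfinispanKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, conf, env);
```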
Drools persistence can seem complicated to understand and get working, let alone to implement in your own way. However, I hope this post demystifies things a bit for those who need to implement Drools persistence in a special way, or were wondering whether it is possible to do so in any way other than JPA.
Also, if you wish to see the modifications done to make it work, see these three pull requests:
Greetings from Argentina. This post will try to cover a general view of where the form builder is right now and where it is going to be in the near future. You can get a current status view from the video below:
In this video you can see a great deal of how the form builder works today. It doesn’t cover the jBPM console integration part or the automatic form generation option, but it allows you to see how users will experience working with the form builder. To download the project, you can find it in the following locations
Feel free to download, comment or join. That pretty much covers where the form builder is right now. As to where it is heading, here’s an initial roadmap:
jBPM Form Builder Roadmap
Adding HTML5 templates: Current items work with pure GWT and HTML4 for Freemarker. The idea is to build a new renderer that still exports to Freemarker, but uses HTML5 instead of HTML4 to create the forms. The idea is also to add as many menu options as needed to properly cover HTML5 capabilities, such as audio and video tags, menus, fieldsets and so on.
Adding new validations: Current validations cover basic concepts, like number or email validation. The idea is to expand and improve validation definitions, in order to allow both client-side and server-side validation at any level of complexity. Planned additions include regular expression validation, multi-field validations (applied directly to the form), and rule-based validation (defining a ruleflow-group specifically to validate the correctness of input data).
Cleaning up effects and form items library: Current implementations of form items (the UI components you drop on your form) and form effects (the actions available when you right click on a form item) are done in a way that could be made configurable with a proper refactor. The whole idea is to make it more easily extensible, minimizing the amount of code to be added to create a right click action for UI components, or a UI component itself.
Adding tests for effects and form items library: Along with the previous item, some refactoring will be made to allow a better separation of display logic from action logic, in order to make the code more testable. In addition, proper tests will be implemented for the form items and effects library.
Adding server validation to the generated form’s API: Extension of the form utility API used from jBPM console, to handle validations on server side.
Adding server interpretation of complex objects to the generated form’s API: Currently, all form submit responses are treated like a map of simple data. The idea is to create complex object associations to particular paths in the form definition, in order to create the proper objects on submit time. This will benefit both user task definitions and rule-based form validations.
Adding template management for complex objects for the generated form’s inputs: The previous item covers submit to server rendering of request data, from simple data types to complex data types. This item covers the other way around, for when a user task is given a complex type and needs to decompose it to make it available for form input data. It will allow the user to define paths within an object when defining form inputs.
Improvement of the properties editing panel to add better coverage of properties for form items: This goes hand in hand with item 4. Once proper management of form item properties is done, there will be a need for a better way of editing such properties.
Improvement of tree and editing panel visualization: Bug fixing and visual highlighting in the current form are the two main things to be tackled by this bullet.
Allow switching layouts once they’re filled without losing content: Currently, once you define a layout and start adding content to it, the only way to change layouts is to create a new one and move all the content manually. This item is thought to be able to do that automatically.
Adding script helpers to allow onEvent simple validations on client side: Along with validation library expansion and server validation API, this item is thought to allow some validations to happen on the client side, to be handled on particular events (i.e. like on the change of value of an input field)
Pagination items: Create UI components that allow splitting very large forms across several pages, all part of the same form.
Definition of a standard page layout for a given user or role: Most companies have a template structure for most of their forms (whether it’s a logo in a particular place, a standard stylesheet, etc.). The idea is to allow designers to define such a page layout and enforce its use for particular people or groups within the company.
Definition of standard UI visualization strategies for particular types of data: This is to aid the automatic form generation. The idea is to allow users to define, for example, the standard way to create visual content for Strings, Integers, Booleans and so on. It should cover complex data types as well.
New translators and renderers for JSF, XUL, Android, iPhone and Swing: Among other technologies, this would be a nice subset to cover. The order of the technologies, and the omission of any, doesn’t express any priority whatsoever.
Adding effects to allow loading contents from an ajax script or from an array variable: This way, the content of a form could be loaded from an external source on the client side.
Importing inputs from other external sources: Right now the only way to import inputs on the IO Data tab is to have them defined inside a BPMN2 process. The idea is to be able to take them from a server invocation, a user file, or any other source. This will also allow defining forms for platforms other than the BPM engine.
Greetings! Among the things developed over the last two weeks for the jBPM Form Builder, here are the ones worth mentioning:
Script helper refactor: Some of the classes had issues when being stored on the server side, due to dependencies on GWT client-side classes. A small refactor was made to make them GWT independent, and to have them use the GWT API from a particular view object to render on screen.
User roles implemented: JAAS implementations for JBoss and Jetty are available now, as stated in my last post. The jBPM Installer inside my fork has the necessary implementations for JBoss, and the Jetty implementations are available to start up the project from Debug mode in the Eclipse Plugin. Profiles are created as described previously: web designer and functional analyst. The web designer has all functions available (can define forms, custom menu items and use any item available), while the functional analyst can only define forms using the menu items authorized by the web designer.
The whole idea behind these components is to let web designers administer component standardization from inside the form builder. And here are some of the next items on the to-do list:
HTML5 templates: Current form generation templates for Freemarker work with HTML4. There will be a new set of them using HTML5, which will probably lead to new menu items to fully cover HTML5 components.
More script helpers: Current script helpers allow an easy implementation of an ajax service call, a combobox population ajax call, and toggling the visualization of a particular component (selected by id). There will be more script helpers, focused on creating new visual components at runtime and live validation of fields. And that’s where the next one falls in.
More validations: We started with a few simple validations to work out where to store them and what to do with them. Now that they seem to have reached a plateau where no major refactor is needed, it is a good moment to start adding a lot more validations to the ones that are already there.