Production Rule systems, such as JBoss Rules, are just one form of declarative programming; I believe that to meet the challenges of tomorrow's technology we need to be looking at a unified tooling and platform approach to declarative programming. Workflow/BPM has been the predominant declarative approach for a while, and Production Rules are one of the first technologies from the AI family to go mainstream. One of the nice things about Production Rule systems is that they can be expanded to handle a number of domains declaratively, and a variety of declarative tooling can be built on top, as shown in the diagram below. I hope to use JBoss Rules as a vehicle to bring more of these technologies into the mainstream. Event Stream Processing, backward chaining and Constraint Programming are all essential to a modern enterprise, and each allows you to model different behaviours of your system; I see these as shorter-term goals. Genetic Algorithms, GIS, Neural Networks and the Semantic Web are all interesting long-term goals.
For now we need to focus on the two mainstream technologies, rules and workflow, and figure out how to provide a much-needed unified modelling environment.
Declarative programming often involves the generation of content, which can expose a whole heap of problems. The jBPM way is to take ownership of this, using its deployment tool to couple the process definition and the Java classes. You write your process definition XML file and select its dependent class files, which it then deploys as a PAR: a JAR containing the XML file and the compiled dependent classes. This gives you control over the Java classes, making sure that a particular process definition always works with the correct version of them. We can extend this to allow the process definition XML to reference one or more DRL files, which it then makes available as a rule base via a variable reference in the jBPM context. There are two ways we can approach this: either jBPM compiles the rule base to a binary and serialises that into the PAR, reducing the runtime dependencies, or it adds the DRLs to the PAR, requiring runtime compilation of the DRLs into rule bases. Users can then easily look up and reference the working memory from their own concrete handler implementations.
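To make the two packaging options concrete, here is a minimal sketch. `RuleBaseStub` and `ParPackaging` are illustrative stand-ins, not JBoss Rules API; the real point they mirror is that a compiled rule base is serialisable, which is what makes the precompiled option possible, while shipping DRL source keeps the rule compiler on the runtime classpath.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Stand-in for a compiled rule base; the real Drools rule base is
// likewise designed to be serialised, enabling option 1 below.
class RuleBaseStub implements Serializable {
    final String packageName;
    RuleBaseStub(String packageName) { this.packageName = packageName; }
}

class ParPackaging {
    // Option 1: compile at build time and serialise the binary rule base
    // into the PAR, so no rule compiler is needed at runtime.
    static byte[] packagePrecompiled(RuleBaseStub ruleBase) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(ruleBase);
            out.close();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static RuleBaseStub loadPrecompiled(byte[] parEntry) {
        try {
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(parEntry));
            return (RuleBaseStub) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    // Option 2: ship the DRL source in the PAR and compile it on first
    // use, which pulls the compiler into the runtime dependencies.
    static Map<String, String> packageSource(String drlName, String drl) {
        Map<String, String> par = new HashMap<>();
        par.put(drlName, drl);
        return par;
    }
}
```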
While this is an effective and simple way to solve the deployment problem, it has a number of downsides. The PAR has ownership of those Java classes and their versions, which may clash with classes already deployed for other applications, so care needs to be taken here. Processes rarely change, and many consider them to be code and thus suited to traditional development and deployment; rules, on the other hand, are both code and data and can be updated continually, which can make traditional deployment systems quite cumbersome. Further to this, large organisations can have thousands of processes and tens of thousands of rules; suddenly maintaining all of that in a standard source control management system doesn't sound so appetising.
The next release of JBoss Rules will include a BRMS, which will allow a more enterprise approach to the management of rules; please see my previous blog "JBoss Rules Server" for more details. The basic idea is that all rules are stored in the BRMS, where they are versioned with full metadata and categorised. I can configure specific versions of those rules into a rule base that I can then make available; the rule base itself is also versioned. With this approach the jBPM process definition can specify a URI to the BRMS for the rule base it needs, which is lazily retrieved on first use. While this does provide better management for a large number of rules, the key benefit is that the application and the BRMS are now aware of each other, and the BRMS can feed the application regular updates to the rule bases it is using, allowing rules to live up to their role of being data.
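The lazy-retrieval-plus-update behaviour might be sketched like this. `LazyRuleBaseRef` is a hypothetical name, not BRMS API, and the `Supplier` stands in for an HTTP fetch of the BRMS URI; it caches the rule base on first use and lets the BRMS-driven update path swap in a new version.

```java
import java.util.function.Supplier;

// Hypothetical sketch: a handle the process definition could hold in
// place of an inlined rule base. The Supplier stands in for fetching
// the versioned rule base from the BRMS URI.
class LazyRuleBaseRef {
    private final Supplier<String> fetch; // e.g. a GET on the BRMS URI
    private volatile String cached;       // null until first use

    LazyRuleBaseRef(Supplier<String> fetch) { this.fetch = fetch; }

    // Lazily retrieved on first use, then served from the cache.
    String get() {
        if (cached == null) cached = fetch.get();
        return cached;
    }

    // Called when the BRMS feeds the application an update, so the
    // rules behave like data rather than redeployed code.
    void refresh() { cached = fetch.get(); }
}
```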
If this approach proves successful we can look at managing other knowledge assets within the BRMS, such as the classes and the process definitions themselves. We can build versioned bundles which depend upon other versioned bundles, which gives us a more fine-grained way to solve the initial problem of the PAR owning the classes; with this approach it would be up to the server to manage versions and bundle dependencies. OSGi tries to solve some of these issues, and it's something I want to explore as a long-term solution.
The previous section covered deployment, which is actually very boring and of little interest to most users. What they are really interested in is the tooling. Currently jBPM nodes are handler based: you create a concrete class that implements an interface and retrieves the variables it needs from the jBPM context, and the jBPM node then specifies that class as its handler. While simple, this concept is very powerful in its flexibility; it is one of jBPM's strong points and why it is a favourite with BPM developers. However, there are times when you want something a bit more integrated and declarative, especially when working hand in hand with business users. We can't integrate DRLs any more than the previous deployment example provides; however, as the diagram above shows, Production Rule systems facilitate a number of very good ways to declaratively author rules, the most common of which are decision trees, decision tables, score cards, structured natural language, GUI-driven rule authoring and rule flow. What users really want is the ability to pick one of these authoring tools and apply it to a jBPM node so that they can declaratively control the decision or task node; this offers real value. Furthermore, the "glue" that holds these together, i.e. the passing of the relevant variables from the jBPM context and the calling and execution of the rules, must be handled seamlessly in the background. I expect that the actual authoring page should be embedded in one of the existing jBPM process tabs in Eclipse.
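The handler pattern looks roughly like this. `ExecutionContext` and `ActionHandler` below are deliberately simplified stand-ins for jBPM's own classes (the real interface also declares a checked exception), and `ApproveOrderHandler` with its `orderAmount`/`approved` variables is an invented example, not anything from jBPM itself.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for jBPM's execution context: a bag of process
// variables the handler can read from and write back to.
class ExecutionContext {
    private final Map<String, Object> variables = new HashMap<>();
    Object getVariable(String name) { return variables.get(name); }
    void setVariable(String name, Object value) { variables.put(name, value); }
}

// Simplified stand-in for the handler interface a jBPM node names.
interface ActionHandler {
    void execute(ExecutionContext context);
}

// A concrete handler: pull the variables it needs from the context,
// evaluate them (here a stubbed decision; in the real handler this is
// where the facts would be asserted into a working memory and the
// rules fired), and write the result back for downstream nodes.
class ApproveOrderHandler implements ActionHandler {
    public void execute(ExecutionContext context) {
        int amount = (Integer) context.getVariable("orderAmount");
        context.setVariable("approved", amount < 1000);
    }
}
```

The "glue" described above would generate exactly this kind of variable passing behind the scenes, so the business user only ever sees the decision table or rule flow, never the handler class.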
The above sections cover the stateless use of JBoss Rules with jBPM: the lifetime of a working memory is the execution of the current node, and there are no long-lived Working Memories. The reason for this is that jBPM supports the persistence of long-lived processes, and it is not generally expected that a Working Memory will be suspended and persisted elsewhere. For now we consider managing this an exercise for the end user, although we will try to provide a number of useful enhancements to enable JBoss Rules to play well in this environment.
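That per-node lifetime can be sketched as follows; `RuleBase` and `WorkingMemory` here are small stubs in place of the real Drools classes, and the point is only the shape of the lifecycle: create, assert, fire, dispose, with nothing left to persist alongside the process instance.

```java
import java.util.ArrayList;
import java.util.List;

// Stubs standing in for the Drools rule base / working memory pair.
class RuleBase {
    WorkingMemory newWorkingMemory() { return new WorkingMemory(); }
}

class WorkingMemory {
    private final List<Object> facts = new ArrayList<>();
    void assertObject(Object fact) { facts.add(fact); }
    int fireAllRules() { return facts.size(); } // stub: one "rule" per fact
    void dispose() { facts.clear(); }
}

class StatelessNodeExecution {
    // The working memory exists only for this node execution; it is
    // never suspended or persisted with the long-lived process.
    static int executeNode(RuleBase ruleBase, List<?> nodeVariables) {
        WorkingMemory memory = ruleBase.newWorkingMemory();
        try {
            for (Object fact : nodeVariables) memory.assertObject(fact);
            return memory.fireAllRules();
        } finally {
            memory.dispose();
        }
    }
}
```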
So far I have only covered the unification of a Production Rule system and a Workflow/BPM system; however, this still only allows you to model a specific and limited set of problems. For a bit of background, many of you may have heard of the AI Winter, when AI was hyped, failed to deliver, and was relegated to mostly academic areas. Production Rules is one of the AI technologies now going mainstream, even though it's been around for 25 years; what's enabling this? I think it's mostly down to good tooling and better documentation: mere mortals are simply not able to work in shell environments and be productive, let alone decipher some cryptic paper on how to use and implement a specific technology. Many of the other technologies have worked in silos, again with limited or no tooling. Both probability/uncertainty and genetic algorithms have had a number of papers showing how they can work well with a Production Rule system, yet this hasn't seen any real uptake. It is my hope that introducing such systems into JBoss Rules, with good tooling and documentation, can help take technologies like these mainstream and improve the way software is written. With hardware scaling towards multi-core processors and grid computing, existing procedural ways of programming will not be enough, and we will need paradigms such as these, and distributed agent computing, to really make the most of these environments.