JFDI: a new Business Action Scripting Language

We are working on a new non-imperative consequence language for JBoss Rules, called JFDI, which is a sort of business action scripting language. We have placed this project over at Codehaus as we are hoping that it will get taken up by other declarative systems, whether they are rule or process engines.

When we first thought about doing this we got a deluge of emails like “have you looked at Groovy, Rhino, JudoScript etc” – the answer was obviously “yes”. The last thing we wanted to do was yet another language, and we looked hard into using those languages, but they all fell short. Not short in features, but short because they did too much!!! Here was the main feature set:
1) A rule consequence should be “when this then that”, not “when this maybe that”; it’s quite simply a list of actions to be performed when this situation occurs. Therefore it should not contain any imperative code – no “if” or “switch” statements – it is purely focused on expressions and method/function calls, and we will allow a foreach loop. If someone needs to do some imperative code it should be encapsulated within a method or a function which is then called from the consequence.

2) The language should not be ambiguous and thus should be typed, although we may eventually allow type inference. Support for refactoring is definitely a long term goal.

3) A rule base may have thousands of rules. Currently with JBoss Rules you end up with a minimum of two classes per rule and then a further class for each function, predicate, return value or ‘eval’ used. This large number of classes can cause a number of problems, from perm gen issues to slowing down the system’s classloader. Therefore it must be able to run interpreted, with no bytecode generation. A consequence can be serialised to disk and lazily loaded on first use. While this is interpreted, as much as possible is done at “compile” time with no runtime introspection, so it’s a tree of cached java.lang.reflect.Method calls (a rough sketch of this idea follows the feature list). Eventually we would like optional bytecode generation optimisations, but generally it is not the execution of the consequence that is the bottleneck.

4) A simple Groovy/Ruby-like expression syntax. We need non-verbose ways to reference sub-fields and to declare and reference maps and arrays, with support for inline anonymous maps and arrays. We would also like support for variable interpolation and <<EOF-style stream writing.

5) The traditional way to “inject” variables into a scripting language is to put each key/value pair into a dictionary or context. We used to do this before drools-2.0-beta-14 and it is not performant; instead we need to allow the application to determine at compile time how to resolve those external variables (see the sketch after this list).

6) We need to be able to decorate a variable, at compile time, with additional methods – this way a user can write “myVar.getFactHandle()” in a consequence even if the variable’s own type does not support that method.

7) Native support for more complex types like BigInteger, BigDecimal and currency types is needed – so we need more control over that.

8) In-built support for FactTemplates (a better DynaBean) so they can be used as if they were real classes.

9) A less verbose way to call setters: “instance(field1=z, field2=42)”. The ability to call a constructor and set fields in a single statement: “new BarBaz(field1 = “val”, field2 = “x”)”.

10) The dependency must be small, a few hundred KB.
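
To make points 3 and 5 a little more concrete, here is a very rough sketch, in plain Java, of what an interpreted call node and compile-time variable resolution might look like. The names (ExternalVariableResolver, CachedMethodCall) are purely illustrative and are not the actual JFDI classes.

import java.lang.reflect.Method;

// Hypothetical sketch only; not the real JFDI API.
// The host application supplies a resolver so that, at "compile" time,
// the interpreter knows how to obtain each external variable, instead of
// every key/value pair being pushed into a runtime dictionary.
interface ExternalVariableResolver {
    Object resolve(String name);
}

// A call node built once at "compile" time: the java.lang.reflect.Method is
// looked up then, so executing the consequence is just a cached invoke()
// with no runtime introspection.
class CachedMethodCall {
    private final Method method;     // resolved once, reused for every firing
    private final String targetVar;  // name of the variable the call is made on

    CachedMethodCall(Class<?> targetType, String targetVar,
                     String methodName, Class<?>... paramTypes) throws NoSuchMethodException {
        this.method = targetType.getMethod(methodName, paramTypes);
        this.targetVar = targetVar;
    }

    Object execute(ExternalVariableResolver vars, Object... args) throws Exception {
        Object target = vars.resolve(targetVar);
        return method.invoke(target, args);
    }
}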

Bob McWhirter has been busy beavering away on this and the project is getting to an almost usable state; although there is no content on the website, you can start to look over the unit tests to get an idea of how the language is turning out:
http://svn.codehaus.org/jfdi/trunk/test/org/codehaus/jfdi/interpreter/
http://svn.codehaus.org/jfdi/trunk/test/org/codehaus/jfdi/

A quick look at the language itself:


//fields
instance.field = value;
instance(field1=z, field2=42)
instance.map["key"] = value;
instance.array[0] = value;

// method call with an inline map and array
instance.method( [1, 2, "z", var], {"a" => 2, "b" => c} );

// standard constructor
bar = new BarBaz("x", 42)

// calls default constructor, THEN setters
bar = new BarBaz(field1 = "val", field2 = "x")

We are still trying to decide on the foreach loop syntax. Bob wants to offer both the crappy Java 5 syntax (otherwise people will complain) and something more readable:


foreach item in collection {
func(item)
func2(index) # index available automatically as a counter?
}

for ( item : collection ) {
func(item)
func2(index) # index available automatically as a counter?
}

We don’t just plan to use this language for consequences; it will also be used for predicates, return values and ‘eval’s, as well as the upcoming ‘accumulate’ and ‘from’ conditional elements.


$cheese : Cheese( /* additional field constraints go here */ )
from session.getNamedQuery("thename").setProperties( {"key1" => "value1", "key2" => "value2" }).list()

So if you like what you see, why not roll up your sleeves and get involved. A free T-shirt to the first person that hassles Bob to apply some patches 🙂


Just say no to DynaBeans

The subject of DynaBeans just came up on the mailing list, and it is one I’ve been asked about before – so I thought I would blog my answer.

DynaBeans were written as a solution for Struts back in 2000. They are not JavaBean compliant, so they have no value outside of what commons BeanUtils provides, coupling your enterprise system to BeanUtils forever – no other apps are aware of them, or able to script them like they can JavaBeans. They cannot be used with any database schema mapping tools, so you cannot easily leverage db schema generation tools. JBRules 3.0 has no support for mapping utilities, so you’ll only be able to script DynaBeans from within evals. If you only use evals you will have crippled the engine to work like a simple chain-of-command scripting engine – under no circumstances do I recommend this. Evals should be a last resort, for conditions that cannot be expressed in field constraints; as field constraints and conditional elements become more powerful, the dependency on evals is reduced. Furthermore, DynaBeans are backed by a map, so each read or write results in a HashMap read or write – this is not a scalable solution.

A much better solution, which works now with JBRules 3.0, is runtime class generation. See this blog entry: http://sixlegs.com/blog/java/death-to-dynabeans.html

Tools like cglib make this trivial now:


import net.sf.cglib.beans.BeanGenerator;

// generate a JavaBean class at runtime with two properties
BeanGenerator bg = new BeanGenerator();
bg.addProperty("foo", Double.TYPE);
bg.addProperty("bar", String.class);
Object bean = bg.create();

The resulting bean can have Hibernate mappings generated at runtime, so you get runtime Hibernate support. JBRules can compile drls at runtime using the classloader those beans were generated with, thus providing full field constraint use of them and POJO-like access in the consequence – much nicer 🙂

For JBRules 3.2 we will provide a feature called FactTemplates – akin to Jess/CLIPS deftemplates. These allow people to define facts without the need for a compiled class, which is preferable to some who want complete encapsulation of their rules and business objects within the drl. FactTemplates are backed by an array, so reads and writes are much faster. FactTemplates support both named and int keys to specify the field for a value; a named set/get results in a HashMap lookup, so it has similar performance to a DynaBean. However, JBRules provides compile-time optimisation of those named fields and swaps them for int lookups – we will have support for this in the JFDI language too, which we also hope will be supported in jBPM. FactTemplates will also provide mapping facilities, which means they can be mapped to any underlying structure – be it a JavaBean, a HashMap or a DynaBean.
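
To illustrate the idea (this is just a sketch; SimpleFactTemplate is an illustrative name, not the JBRules 3.2 API): the fields live in an Object[], a name-to-index map supports named access, and a compile-time pass can swap a named lookup for the plain int index.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an array-backed fact template.
class SimpleFactTemplate {
    private final Map<String, Integer> fieldIndex = new HashMap<String, Integer>();
    private final Object[] values;

    SimpleFactTemplate(String... fieldNames) {
        for (int i = 0; i < fieldNames.length; i++) {
            fieldIndex.put(fieldNames[i], i);
        }
        this.values = new Object[fieldNames.length];
    }

    // Named access: costs a HashMap lookup, comparable to a DynaBean.
    Object get(String field)             { return values[fieldIndex.get(field)]; }
    void set(String field, Object value) { values[fieldIndex.get(field)] = value; }

    // Index access: what the compiler swaps named lookups to, a plain array read/write.
    Object get(int index)                { return values[index]; }
    void set(int index, Object value)    { values[index] = value; }

    // What a compile-time optimisation would call once per field reference.
    int indexOf(String field)            { return fieldIndex.get(field); }
}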


Rete with Lazy Joins

I’ve just spent the last four weeks stuck in front of JProfiler taking JBoss Rules performance to the next level, and the results have been great and well beyond what I hoped. To achieve this I wrote custom collections, unrolled loops and cached variables that are used repeatedly for join attempts. I really feel I’ve taken a traditional Rete implementation, with the well-known performance enhancements of node sharing, alpha node hashing and beta node hashing, to the limit. Yet I’m still way off OPSJ levels of performance for manners128 and waltz50.

My initial thought was that OPSJ must be doing some kind of compile-time/static agenda, like JRules attempts. This idea was slated for investigation when I next tackled performance. The idea with a compile-time/static agenda is that you arrange the nodes and their memories so that propagations and joins occur in such a way that they mimic the results of a simple LIFO-style agenda, and thus rules can fire as soon as they hit the terminal node. You gain speed as you are no longer determining all cross products and conflict sets; this is a kind of cross between Rete and Leaps.

I have recently had the pleasure of exchanging a few emails with Charles Forgy, Ernest Friedman-Hill and a few others, where of course I just had to take the opportunity to quiz Charles Forgy on this. Here was his reply: “OPSJ does not do compile-time static analysis to avoid computing conflict sets. It computes complete conflict sets, and it applies full MEA analysis to the conflict set (not some less-expensive composite pseudo-MEA)”. I was gobsmacked and straight away went off to verify this:

Waltz on OPSJ (results given to me):
Added To Agenda: 29,910
Fired: 14,067

Waltz on JBoss Rules:
Added To Agenda: 31,841
Fired: 14,064

The differences are most likely due to my “less-expensive composite pseudo-MEA” conflict resolution strategy. These results proved Charles’ statement true, and after having spent four weeks taking true Rete to what I thought was the limit, it left me feeling like the twelve-year-old kid who thought he could play football with the adults.

Anyway, not to be deterred, I went back to racking my brains. One of the great things about systems like OPSJ and Jess is that they give you something to aim for; without those markers I would probably have given up on trying to improve Rete long ago and opted for a compile-time/static agenda like JRules has done. So here goes my next attempt at second-guessing OPSJ 🙂

In my latest Drools code the hashing system does two things: firstly it allows indexing of composite fields, and secondly it returns a bucket where it guarantees that all iterated values are already true for the indexed fields – you do not need to test them again, only the further non-“==” fields for that join node. Inside a join node I return this bucket and join and propagate for each iterated value. However, imagine if instead of iterating over that bucket you simply gave the incoming token a reference to the bucket and propagated forward – by the time it reaches the Agenda there have been no joins, but the token has references to all the buckets that contain potential successful joins. I use the term “potential” as the bucket may contain un-indexed fields due to field constraints with non-“==” operators. The joining process is then delayed right until the activation is finally fired. Obviously there are still a huge number of details to work out, and I’m still not totally sure this would work, especially if a join node refers to variable constraints that are in buckets containing non-“==” operators, as you cannot be sure that you should be testing and joining those. A very rough sketch of the idea is below.
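
Purely to make the shape of the idea concrete (the names are illustrative and nothing here is real Drools code), a lazy token could just remember which buckets it matched and only materialise the cross product if the activation actually fires:

import java.util.ArrayList;
import java.util.List;

// Very rough sketch of the "lazy join" idea.
class LazyToken {

    interface JoinFilter {
        // re-check any non "==" constraints the index could not cover
        boolean accept(Object[] partialRow, int depth);
    }

    private final List<List<Object>> matchedBuckets = new ArrayList<List<Object>>();

    // Called at each join node: O(1), no iteration over the bucket.
    void addBucket(List<Object> bucket) {
        matchedBuckets.add(bucket);
    }

    // Called only if the activation fires: perform the deferred joins.
    List<Object[]> materialise(JoinFilter filter) {
        List<Object[]> rows = new ArrayList<Object[]>();
        expand(0, new Object[matchedBuckets.size()], rows, filter);
        return rows;
    }

    private void expand(int depth, Object[] row, List<Object[]> rows, JoinFilter filter) {
        if (depth == matchedBuckets.size()) {
            rows.add(row.clone());
            return;
        }
        for (Object fact : matchedBuckets.get(depth)) {
            row[depth] = fact;
            if (filter.accept(row, depth)) {
                expand(depth + 1, row, rows, filter);
            }
        }
    }
}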

For Manners I’m pretty sure this works, as the “find_seating” rule has two ‘not’ nodes with only “==” constraints. For Waltz it’s more difficult, as all joins also have “!=” in them. Still, I have enough of an idea now that I can start messing with code to try a proof of concept.


Beyond ORM

A rule has many similarities to a query. It contains one or more propositional and first order logic statements organised into a network to filter data additions and changes – we call this a “discrimination network” as it discriminates against data that does not match its statements. Any data that successfully matches all statements for a rule arrives at a Terminal Node; at this point we can either execute a series of actions (ignoring the role of the Agenda for the moment) on that “row” of data or return it as part of a query request.

Like a database we index our data to provide high performance. There are two types of indexing: Alpha Node hashing and Beta Node indexing. Alpha Node hashing is used to index literal constraints, so that we only propagate data to the rules that match that literal value. Beta Node indexing is used to index data used in joins – say we have two facts, Person and Cheese, we record who owns which cheese, and there is a rule that says “when a person’s cheeses are out of date then email that person”; we have a join between the person and the cheeses, so we index each cheese against its owner. We also attempt to share indexes, so if rules use the same literals or joins, where possible we share those indexes – this reduces memory consumption and also avoids duplicate constraint evaluations.
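
As a rough illustration of the beta index described above (illustrative names only, not the actual Drools data structures), cheeses can be bucketed by owner so a join attempt for a person becomes a single hash lookup instead of a scan over every Cheese fact:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative beta-style index for the "person owns cheese" join.
class CheeseByOwnerIndex {

    static class Cheese {
        final String owner;
        final boolean outOfDate;
        Cheese(String owner, boolean outOfDate) { this.owner = owner; this.outOfDate = outOfDate; }
    }

    private final Map<String, List<Cheese>> byOwner = new HashMap<String, List<Cheese>>();

    void add(Cheese cheese) {
        List<Cheese> bucket = byOwner.get(cheese.owner);
        if (bucket == null) {
            bucket = new ArrayList<Cheese>();
            byOwner.put(cheese.owner, bucket);
        }
        bucket.add(cheese);
    }

    // All cheeses that could possibly join with this person; empty if none.
    List<Cheese> matches(String personName) {
        List<Cheese> bucket = byOwner.get(personName);
        return bucket != null ? bucket : new ArrayList<Cheese>();
    }
}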

However a Production Rule system provides many features beyond a traditional database. Many of these features, marked with a *, are planned for JBoss Rules releases next year.

  • Efficient First Order Logic with ‘exists’, ‘not’ and ‘forall’* quantifiers, as well as cardinality qualifiers with ‘accumulate’* and ‘collect’.
  • Ability to mix reasoning over data both inside and outside of the Working Memory using ‘from’.
  • Object Validation*, so only objects that are valid can exist: “name length must be less than 30”.
  • Backward Chaining* for complex inferencing.
  • Ontology* support for rich Object Models, probably via some Semantic Web OWL support.
  • Efficient Truth Maintenance. Truth relationships can be set in place to ensure the Working Memory never breaks them; for example, a “Red Alert” object can only exist while there are 3 or more emergencies.
  • Event Stream Processing (ESP)* can analyse sets of data over time windows: “determine the average stock ticker price in the last 30 seconds”.
  • Event Correlation Processing (CEP)* can analyse data sets with temporal comparisons between objects.

Firstly, before ORM advocates try and organise a public lynching, let me just state that “Beyond ORM” does not mean replacing ORM; there are clearly many applications where ORM is preferable – particularly when dealing with truly massive datasets or where you want to be able to represent relational data in differing forms. However, there are situations which can benefit from a richer and better integrated solution with features as described above. There is still a lot of work to achieve all of the above, and we then need to consider how to cluster working memories to provide fault tolerance and some way to also make working memories transactional. If we can solve those problems we can look to integrate JBoss Cache, JBoss Rules and JBoss ESB for a next generation Data Centre. This is a long-term R&D proposal, but I thought I would sketch down the basic ideas in the diagram below.


Why Java code is bad for rules and a declarative alternative

One of the selling features of Drools, and one of the reasons we are often chosen over competitors, has always been the ability to allow the use of Java code in specific parts of rules: expressions and consequences. This makes for a lower learning curve, as Java developers can start writing consequences without additional training, whether it’s updating a value, sending messages or retrieving information from a database. This alignment with Java, or “Java like” languages, for Rule Engines is often touted by marketing as a reason to choose their systems over others. On the face of it this looks great, and it’s something that management can relate to – less training, leveraging existing skill sets; where’s the down side?

The use of a Java language detracts from the real value of a Production Rule System like Drools. The moment you allow the use of Java code in a rule you encourage the use of imperative programming. Rule Engines offer a Turing complete system with a declarative language; if you read the Drools manual we spend some time explaining propositional and first order logic and how they can be used to describe any scenario in a declarative manner – this is an incredibly powerful approach to software development. In quick summary, Java is imperative because you have to describe how to do things; this can often take many lines of code and means you have to read all those lines to understand what the code is doing. As the code gets more complex you have more lines to read, eventually to the point that the complexity becomes large enough to make the code difficult to understand and maintain, thus creating a dependency on the original author, who becomes the only person able to efficiently maintain the code. Declarative programming allows you to express complex rules using keywords; each rule identifies a specific scenario and details the corresponding actions. Those actions should also be specified in a declarative manner, otherwise we reduce the value of the investment we made in defining and authoring the conditions of the rule. This is very much in the spirit of the Business Rules Approach methodology. For those interested in this methodology I recommend the following three books:
Business Rules and Information Systems: Aligning IT with Business Goals
Tony Morgan
ISBN: 0201743914

Principles of the Business Rule Approach
Ronald G. Ross
ISBN: 0201788934

Business Rules Applied
Barbara von Halle
ISBN: 0471412937

So the moment we start putting complex nested if structures and loops into our consequences, we increase the complexity of maintenance. Instead the consequence should focus on calling functions or object methods. Each function or method has a clear and documented role: it specifies the parameters it takes, the operations it performs on those parameters and what it returns – and such calls also lend themselves better to being represented with a simple domain specific language.
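
As a small, purely hypothetical illustration of that split (none of these class or method names come from Drools), the imperative detail lives in one documented helper method and a consequence reduces to a single readable call such as notifyAccountManager(customer, order):

// Hypothetical helper class, illustrative names only.
public class OrderActions {

    // collaborators are wired in by the application, not by the rule
    private final MailService mailService;

    public OrderActions(MailService mailService) {
        this.mailService = mailService;
    }

    /**
     * Emails the account manager responsible for the customer about a late
     * order. All the lookups and message building live here, behind one
     * documented method, so the rule consequence stays declarative.
     */
    public void notifyAccountManager(Customer customer, Order order) {
        String manager = customer.getAccountManager();
        String body = "Order " + order.getId() + " for " + customer.getName() + " is late.";
        mailService.send(manager, "Late order", body);
    }

    // minimal collaborator types so the sketch is self-contained
    public interface MailService { void send(String to, String subject, String body); }
    public interface Customer   { String getAccountManager(); String getName(); }
    public interface Order      { String getId(); }
}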

While Drools has no plans to drop its Java support, and we soon hope to be adding back in Groovy and later JavaScript, we also want to introduce a language that will support the declarative approach rather than detract from it. This new language will then become the “default” standard that we push for rule authoring.

So what should such a language include?

• full expression support
• auto-box and auto-unbox
• object method calls
• object field assignments
• object creation
• simple declarative data structures
• basic if/switch/loop support (to be used sparingly)

I’m still trying to decide if we need support for anonymous classes – there may be times when we need to specify callbacks or action listeners on objects. However this obviously adds a level of complexity that I wish to avoid inside consequences, and it may well be something that should be farmed out to a function or method. Control structures like if/switch/loop might also be made configurably “optional” and discouraged. The language should be able to work in both interpreted and compiled mode. Compiled mode allows for maximum execution speed and is ideal for small to medium systems; large systems in the thousands of rules will suffer permgen issues and may be best run in interpreted mode.

I have done some searching, but most languages are “complete” in that they are full blown languages for application development – we need something much smaller and simpler. This is more in line with what templating languages, like FreeMarker and Velocity, already have, but it is not available as a stand alone language. I have recently discovered “Simple Declarative Language”, which seems to go in the right direction but has no support for function or method calls or expression evaluation – that I can see. However, in the absence of anything else on the market it may be a good place to start in building our own solution.


Real World Rule Engines

Here is an excellent article, introduction reproduced below, from our very own mailing list mentor Geoffrey Wiseman:
http://www.infoq.com/articles/Rule-Engines

For many developers, rule engines are buzzwords, or black boxes on an architectural diagram: something to be feared or admired from afar, but not understood. Coming to terms with this is one of the catch-22s of technology:

• It’s difficult to know when to use a technology or how to apply it well until you’ve had some first-hand, real-world experience.
• The most common way to gain that experience is to use an unknown technology in a real project.
• Getting first-hand experience using a new technology in a production environment is an invaluable experience for future work but can be a major risk for the work at hand.

Over the course of this article, I’ll be sharing my practical experience with rule engines and with Drools in particular to support in-market solutions for financial services, in order to help you understand where rule engines are useful and how to apply them best to the problems you face.

Why Should I Care?

Some of you will have already considered using a rule engine and will be looking for practical advice on how to use it well: patterns and anti-patterns, best practices and rat-holes.

Others haven’t considered using a rule engine, and aren’t sure how this is applicable to the work you’re doing, or have considered rule engines and discarded the idea. Rule engines can be a powerful way to externalize business logic, empower business users, and solve complicated problems wherein large numbers of fine-grained business rules and facts interact.

If you’ve ever taken a series of conditional statements, tried to evaluate the combinations, and found yourself writing deep nested logic to solve a problem, these are just the sorts of entanglements that a rule engine can help you unravel.

Some of our more complicated financial services work, when rephrased in a rule approach, began to look markedly more comprehensible. Each step in converting procedural conditional logic to Drools business rules seemed to expose both more simplicity and more power at once.

Finally, if you’re not convinced by the above, consider this: rule engines are a tool, another way to approach software development. Tools have their strengths and weaknesses, and even if you aren’t making immediate use of this one, it’s helpful to understand the tradeoffs so that you can assess and communicate applicability in the future.


Rule Execution Flow with a Production Rule System

Sometimes workflow is nothing but a decision tree: a series of questions with yes/no answers to determine a final answer. This can be modelled far better with a Production Rule System, and is already on the Drools road map.

For the other situations we can use a specialised implementation of Agenda Groups to model “stages” in rule engine execution. Agenda Groups are currently stacked, like Jess and CLIPS modules. But imagine instead if you could model linear Agenda Group execution – this is something I have been thinking about for a while to allow powerful and flexible modelling of processes in a Production Rule System. A successful implementation has clear advantages over two separate engines, as there is an impedance mismatch between the two. While there is little issue using a rule engine with workflow, using workflow to control linear execution of a rule engine would be very suboptimal – this means we must seek a single optimal solution for performance sensitive applications.

Let’s start by calling these special Agenda Groups “nodes”, to indicate they are part of a linear graph execution process.

Start rules don’t need to be in a node, and the resulting target nodes will detach and evaluate once this rule has finished:


    rule "start rule"
    target-node "<transition>" "<name>"
    when
    eval(true)
    then
    // assert some data
    end

The start rule and the nodes can specify multiple target nodes and additional constraints for those target nodes, which are explained later. The start rule can fire on initialisation, using eval(true), or it could have some other constraints that fire the start rule at any time during the working memory’s lifetime. A Rule Base can have any number of start rules, allowing multiple workflows to be defined and executed.

The start rule dictates the next valid target nodes – only activated rules in these nodes can fire as a result of the current assertions. While the activated rules in other nodes will not be able to fire, standard rules and Agenda Groups will react, activate and fire as normal in response to changes in data.

A node rule looks like a normal rule, except it declares the node it’s in. As mentioned previously a node can contain multiple rules, but only the rules with full matches on the LHS will be eligible for firing:


    rule "rule name"
    node "<name>"
    when
    <LHS>
    then
    // assert some data
    end

There is an additional node structure, with which the rules are associated, and which specifies the resulting targets:


    node "node name"
    target-node "<transition>" "<name>"
    end

Target nodes are only allowed to evaluate their activated rules once the previous start rule has finished or the previous node is empty because it has fired all its rules. Once a node is ready to be evaluated, we “detach” it and then spin it off into its own thread for rule firing; all resulting working memory actions will be “queued” and asserted at safe points, so Rete is still a single process. Once a node is detached the contained rules can no longer be cancelled, they must all fire, and no further rules can be added. All our data structures are serialisable, so suspension/persistence is simply a matter of calling a command to persist the detached node off to somewhere.
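
A rough sketch of the queuing idea (illustrative names only, not Drools APIs): the detached node fires on its own thread but only records the working memory actions it wants, and the single Rete thread drains that queue when it reaches a safe point, so the network is never touched concurrently.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of queued working memory actions for a detached node.
class DetachedNodeExecutor {

    interface WorkingMemoryAction {
        void apply();   // e.g. an assert or a retract against the working memory
    }

    private final BlockingQueue<WorkingMemoryAction> pending =
            new LinkedBlockingQueue<WorkingMemoryAction>();

    // Called from the detached node's thread while its rules fire.
    void queue(WorkingMemoryAction action) {
        pending.add(action);
    }

    // Called by the single Rete thread at a safe point.
    void drainAtSafePoint() {
        WorkingMemoryAction action;
        while ((action = pending.poll()) != null) {
            action.apply();
        }
    }
}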

As well as a rule specifying the LHS constraints for it to activate, the previous node can specify additional constraints. A rule can be in multiple nodes, so if two incoming nodes specify additional constraints they are exclusive to each other – that is, the additional constraints of the non-current incoming node will have no effect:


    node "node name"
    target-node "<transition>" "<name>" when
    <additional constraints>
    end
    end

Further to this, a node can specify multiple targets, each with its own optional additional constraints. Sample formats are shown below:


    node "node name"
    target-node "<transition>" "<name>"

    target-node "<transition>" "<name>" when
    end

    target-nodes "<transition>" "<name>"
    "<transition>" "<name>"
    "<transition>" "<name>"

    target-nodes "<transition>" "<name>"
    "<transition>" "<name>"
    "<transition>" "<name>" when
    end
    end

Further to this we need additional controls to implement “join nodes”, and also to allow reasoning to work with the transition name as well as the node name.

This highlights the basics of linearly controlled execution of rules within a Production Rule system. It also means we can model any BPM process, as it’s now a simplified subset, but do so in a highly scalable way that integrates into very demanding tasks. Further to this we can still have standard agenda groups and rules that fire as a result of data changes. This provides a very powerful solution, far more powerful than the simple subset that most workflow solutions provide.


What is a Rule Engine

Drools is a Rule Engine, but it is more correctly classified as a Production Rule System. The term “Production Rule” originates from formal grammar, where it is described as “an abstract structure that describes a formal language precisely, i.e., a set of rules that mathematically delineates a (usually infinite) set of finite-length strings over a (usually finite) alphabet”. Production Rules are a Rule Based approach to implementing an Expert System and are considered “applied artificial intelligence”.

The term Rule Engine is quite ambiguous in that it can be any system that uses rules, in any form, that can be applied to data to produce outcomes; this includes simple systems like form validation and dynamic expression engines. “How to Build a Business Rules Engine” (2004) by Malcolm Chisholm exemplifies this ambiguity. The book is actually about how to build and alter a database schema to hold validation rules, and it then shows how to generate VB code from those validation rules to validate data entry – while a very valid and useful topic for some, it caused quite a surprise to this author, unaware at the time of the subtleties of the differences between Rule Engines, who was hoping to find some hidden secrets to help improve the Drools engine. jBPM uses expressions and delegates in its Decision nodes, which control the transitions in a Workflow. At each node it evaluates a rule that dictates the transition to undertake – this is also a Rule Engine. While a Production Rule System is a kind of Rule Engine and also an Expert System, the validation and expression evaluation Rule Engines mentioned previously are not Expert Systems.

A Production Rule System is Turing complete, with a focus on knowledge representation to express propositional and first order logic in a concise, non-ambiguous and declarative manner. The brain of a Production Rule System is an Inference Engine that is able to scale to a large number of rules and facts; the engine is able to schedule the many rules that are eligible for execution at the same time through the use of a “conflict resolution” strategy. There are two methods of execution for Rule-Based Systems: Forward Chaining and Backward Chaining; systems that implement both are called Hybrid Production Rule Systems. Understanding these two modes of operation is key to understanding why a Production Rule System is different.

Forward Chaining is ‘data-driven’ and thus reactionary: facts are asserted into the working memory, which results in rules firing – we start with a fact, it propagates, and we end with multiple eligible rules which are scheduled for execution. Drools is a forward chaining engine. Backward Chaining is ‘goal-driven’: we start with a conclusion which the engine tries to satisfy. If it can’t, it searches for conclusions, ‘sub goals’, that help satisfy an unknown part of the current goal; it continues this process until either the initial conclusion is proven or there are no more sub goals. Prolog is an example of a Backward Chaining engine; Drools will add support for Backward Chaining in its next major release.
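
To make the ‘data-driven’ match-resolve-act cycle concrete, here is a bare-bones forward chaining loop. It is nothing like a real Rete network and every name in it is illustrative; the refraction (each rule fires at most once) is deliberately crude.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal forward chaining loop: match, resolve conflicts, act, repeat.
class ForwardChainingLoop {

    interface Rule {
        boolean matches(List<Object> facts);
        List<Object> fire(List<Object> facts);   // returns any newly inferred facts
    }

    interface ConflictResolver {
        Rule pick(Deque<Rule> agenda);            // e.g. salience, LIFO, MEA
    }

    void run(List<Rule> rules, List<Object> facts, ConflictResolver strategy) {
        Set<Rule> alreadyFired = new HashSet<Rule>();  // crude refraction: fire each rule once
        while (true) {
            Deque<Rule> agenda = new ArrayDeque<Rule>();
            for (Rule rule : rules) {                  // match phase
                if (!alreadyFired.contains(rule) && rule.matches(facts)) {
                    agenda.add(rule);
                }
            }
            if (agenda.isEmpty()) {
                return;                                // nothing left to fire
            }
            Rule next = strategy.pick(agenda);         // conflict resolution
            alreadyFired.add(next);
            facts.addAll(next.fire(facts));            // act phase may enable new matches
        }
    }
}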

The Rete algorithm by Charles Forgy is a popular approach to Forward Chaining; Leaps is another approach. Drools has implementations for both Rete and Leaps. The Drools Rete implementation is called ReteOO, signifying that Drools has an enhanced and optimised implementation of the Rete algorithm for Object Oriented systems. Other Rete based engines also have marketing terms for their proprietary enhancements to Rete, like RetePlus and Rete III. It is important to understand that names like Rete III are purely marketing where, unlike the original published Rete algorithm, no details of the implementation are published; thus asking a question like “Does Drools implement Rete III?” is nonsensical. The most common enhancements are covered in “Production Matching for Large Learning Systems (Rete/UL)” (1995) by Robert B. Doorenbos.

Business Rule Management Systems build value on top of a Rule Engine, providing systems for rule management, deployment, collaboration, analysis and end user tools for business users. Further to this, the Business Rules Approach is a fast evolving and popular methodology helping to formalise the role of Rule Engines in the enterprise.

For more information read the following two chapters from the manual:
Introduction and Background
Knowledge Representation
