Here Lucaz (Lucas Amador) and I take a first pass at integrating Apache Camel with the Drools Pipelines project (http://blog.athico.com/2009/04/batchexecutor.html).
This first stage of the drools-pipeline/drools-camel project includes just the Apache Camel dependencies and two simple tests that
use Drools VSM (Virtual Service Manager) to show how we can interact with a Drools session through a pipeline that transforms
a piece of data into executable commands. The main idea of the integration at this stage is to show how Camel can do exactly what
Drools Pipeline does, while adding more flexibility through all the built-in Camel components (http://camel.apache.org/components.html).
The other advantage that Apache Camel brings is the possibility of implementing more advanced enterprise integration patterns than a simple transformation pipeline.
Take a look at the following URL to discover all the patterns that Apache Camel supports, which you can start using right away to interact with a Drools session from the outside world: http://www.enterpriseintegrationpatterns.com/toc.html
We also provide an example that shows how to configure Apache Camel to listen to a directory on your hard drive, take the content of the files stored there, and then start a pipeline that transforms these XML files into commands that run against a Drools session. Finally, the pipeline logs the results and writes them to a file, showing how we can route the results to different endpoints.
In the following route we describe how we chain the different endpoints that will process the message; in this case we start with a file that will be transformed. The message is transformed by Processors, which are defined by implementing the Camel Processor interface.
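To make the idea of chained Processors concrete, here is a minimal, stdlib-only sketch of the pattern. Note that the Processor and Exchange types below are simplified stand-ins written for illustration, not Camel's actual classes:

```java
// Simplified stand-ins for Camel's Processor/Exchange contract (hypothetical, stdlib-only).
interface Processor {
    void process(Exchange exchange) throws Exception;
}

class Exchange {
    private Object body;
    public Object getBody() { return body; }
    public void setBody(Object body) { this.body = body; }
}

// Each processor transforms the message body in place; a route chains them in order.
class ToUpperCaseProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        exchange.setBody(((String) exchange.getBody()).toUpperCase());
    }
}

public class MiniPipeline {
    public static void main(String[] args) throws Exception {
        Exchange exchange = new Exchange();
        exchange.setBody("insert fact");
        Processor[] route = { new ToUpperCaseProcessor() };
        for (Processor p : route) {
            p.process(exchange);
        }
        System.out.println(exchange.getBody()); // INSERT FACT
    }
}
```

Camel's real Processor interface works the same way: a single process(Exchange) method that reads and mutates the message flowing through the route.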
Because we are using the SpringCamelContext implementation, this route can be defined inside the applicationContext.xml file. The following XML snippet shows how we can express the route. Please note that all the Processors used in this example need to be defined as Spring beans too; take a look inside the project to see the full Spring configuration.
<camel:process ref="droolsContextInitProcessor" />
<camel:process ref="xmlNodeTransformer" />
<camel:to uri="direct:xstreamTransformer" />
<camel:from uri="direct:xstreamTransformer" />
<camel:process ref="camelXStreamFromXmlVsmTransformer" />
<camel:to uri="direct:executor" />
<camel:from uri="direct:executor" />
<camel:process ref="batchExecutorProcessor" />
<camel:to uri="direct:xstreamTransformerResult" />
<camel:from uri="direct:xstreamTransformerResult" />
<camel:process ref="camelXStreamToXmlVsmTransformer" />
<camel:to uri="direct:finalResult" />
<camel:from uri="direct:finalResult" />
<camel:process ref="assignResultProcessor" />
<camel:to uri="direct:executeResult" />
<camel:from uri="direct:executeResult" />
<camel:process ref="executeResultProcessor" />
<camel:to uri="file://src/test/resources/xml/output" />
<camel:to uri="log:org.apache.camel.example.result?level=INFO" />
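For context, these elements sit inside a route within the Camel context in applicationContext.xml, starting from the file endpoint that polls the input directory, and each processor referenced by ref="..." must be declared as a Spring bean. A sketch of the surrounding configuration (the bean class is elided; check the project for the real declarations):

```xml
<camel:camelContext xmlns:camel="http://camel.apache.org/schema/spring">
    <camel:route>
        <camel:from uri="file://src/test/resources/xml" />
        <!-- ... the processors and endpoints shown above ... -->
    </camel:route>
</camel:camelContext>

<!-- One Spring bean per processor, e.g.: -->
<bean id="droolsContextInitProcessor" class="..." />
```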
In the previous snippet we show the full pipeline configured with Spring, taking advantage of the camel-spring module. This pipeline describes the route that a file containing
commands will pass through in order to be executed. As you can see in the example project, a directory is polled to look for files. When a file is found, the first step is to initialize the Drools context.
This Drools context is inserted into the Message that passes through all the Camel endpoints, and it represents the services needed to execute everything related to Drools inside Camel.
If you want to use Apache Camel with Drools you will probably need to initialize this Drools context using the pluggable Camel Processor called DroolsContextInitProcessor.
The steps executed in this pipeline are:
* Transform the content of the file into a DOM document
* Convert the DOM document, using XStream, into a set of commands (BatchExecution)
* Execute all of these commands
* Transform the Results objects into XML
* Assign the result to the ResultHandler
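The first step above, turning the raw file content into a DOM document, can be sketched with the JDK's built-in parser (the sample command XML below is illustrative):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomParseStep {
    // Parse raw XML text (e.g. the content of a polled file) into a DOM document.
    public static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<batch-execution><insert out-identifier=\"lucaz\"/></batch-execution>";
        Document doc = parse(xml);
        System.out.println(doc.getDocumentElement().getTagName()); // batch-execution
    }
}
```

The resulting DOM document is what the XStream transformer then converts into the BatchExecution commands.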
The last two steps are pure Camel endpoints. One of them logs the result to the standard output using Commons Logging, and the other creates a file with the obtained results. As you can see, we are taking advantage of the routing capabilities of Camel, which let us send the message to two different endpoints. It's worth knowing that multiple endpoints can be added here, for example to send a mail with the results, print them, and so on.
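Adding another outcome is just a matter of appending another endpoint to the route; for example, a hypothetical mail endpoint using the camel-mail component (the SMTP host and address below are placeholders):

```xml
<!-- Hypothetical additional endpoint: mail the results via the camel-mail component -->
<camel:to uri="smtp://localhost?to=results@example.com" />
```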
In this case we use the following file, which will be consumed by the first endpoint that is polling the directory: file://src/test/resources/xml/
When the pipeline is executed we will get the following outputs:
* A file inside the src/test/resources/xml/output directory
* A log in the system console
* The result from the resultHandler
Each of these three outputs will contain the following content:
sample.xml (in the output directory):
<fact-handle identifier="lucaz" externalform="0:1:9695314:9695314:2" />
One last thing to note: for each new input type that you use, you will need to implement a Processor to extract the XML content. For more details take a look at the FileContextInitProcessor implementation.
Download the example project: Pipelines with Camel