That’s all fine and dandy, but let’s get to the point: How fast is the Phreak algorithm today? 🙂
Important note: Phreak is only at the beginning of its potential. The initial implementation was designed for correctness, with future multi-core exploitation in mind: many aspects already include synchronization, ready for when multi-threading support is added. No profiling has been done yet either. Our main hope for this correctness-first implementation was that no performance use case would be slower than under ReteOO, and we seem to have achieved that and more. Now the fun begins with profiling and adding multi-thread support. Larger examples and poorly written rule bases should also benefit further from the lazy algorithm, which should be more forgiving.
I ran 4 OptaPlanner benchmark use cases over a total of 39 datasets. All of them use a stateful Drools session and run for 5 minutes each. Both variants use Drools 6.0 with the exact same code and configuration; the only difference between the Phreak and ReteOO variants is the RuleEngineOption flag (kieBaseConfiguration.setOption(RuleEngineOption.PHREAK) versus kieBaseConfiguration.setOption(RuleEngineOption.RETEOO)).
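For reference, switching the algorithm looks roughly like this minimal sketch. The `setOption(RuleEngineOption.PHREAK)` call is what the benchmarks flip; the surrounding boilerplate (and the exact package that `RuleEngineOption` lives in) is an assumption based on the typical Drools 6 API and may differ slightly in your version:

```java
// Assumed imports: RuleEngineOption's package may vary across Drools 6 releases.
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.internal.builder.conf.RuleEngineOption;

public class PhreakSwitchSketch {
    public static void main(String[] args) {
        KieServices kieServices = KieServices.Factory.get();
        KieBaseConfiguration kieBaseConfiguration = kieServices.newKieBaseConfiguration();
        // The single line that differs between the two benchmark variants:
        kieBaseConfiguration.setOption(RuleEngineOption.PHREAK); // or RuleEngineOption.RETEOO
        // The configuration is then passed in when building the KieBase,
        // so the same rules run on either algorithm unchanged.
    }
}
```

Everything else (rules, datasets, session setup) stays identical, which is what makes the comparison between the two variants fair.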
Feel free to rerun these benchmarks yourself (such as this one), or run any of the other use cases I haven’t had time to run.
Average per use case (over all datasets per use case):
- Course scheduling: Phreak is 20% faster than ReteOO
- Exam scheduling: Phreak is 21% faster than ReteOO
- Hospital bed planning: Phreak is 4% slower than ReteOO (*)
- Nurse rostering: Phreak is 20% faster than ReteOO
(*) but Phreak scales better and is therefore faster than ReteOO on the bigger datasets.
(Chart: hospital bed planning benchmark results per dataset)
Phreak is already faster and more scalable than ReteOO, and it’s going to get even better. (And we need to take a deeper look at the hospital bed planning example.)