Scaling Planner's memory consumption and performance with JIT selectors

For Drools Planner 5.5.0.Beta1, I've rewritten the Selector architecture from scratch. I've just pushed the changes to enable it on master. A few weeks ago, I already blogged about the Just In Time Selectors, which I'll talk about more in this blog entry. In the next few weeks I'll cover the other new features of the new Selector architecture.

Let’s take a look at the memory footprint on a few datasets, with the old Selector architecture and the new Selector architecture.

Dataset machinereassignment B1

The B1 dataset has 5 000 planning entities and 100 planning values with a search space of 10^10000.

Before (without the new selectors):

Solving ended: time spend (61178), best score (0hard/-5368866214soft), average calculate count per second (85983).

After (with the new selectors in JIT mode):

Solving ended: time spend (60672), best score (0hard/-3818500264soft), average calculate count per second (144530).

Except for the selectors, all the code is exactly the same. Notice how:

  • The old selectors used 500 MB of memory. JIT selectors use around 100 MB. That's about 80% less memory usage.
    • With a lower max memory setting, they can use even less.
    • The difference is even bigger for larger problems (B2-B10).
  • The old selectors calculated 85k scores per second. JIT selectors calculate 144k scores per second. That's almost 70% more performance.
    • How is that possible if even the score calculation code is exactly the same? Because the JIT selectors don't shuffle the entire move list on every step; they use Random far more efficiently. This leaves more time to take more steps (from 41 steps to 2186 steps).
  • In 60 seconds, the old selectors found a best solution with score -5.3m; the JIT selectors found one with score -3.8m. That's a 28% better solution.
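To illustrate why skipping the shuffle helps, here's a minimal sketch of the two selection strategies. Planner's actual implementation is far more involved; the class and method names below are made up for illustration only.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class JitSelectionSketch {

    // Old-style approach: materialize and shuffle the whole move list each
    // step, even though only the first few moves are typically evaluated
    // before one is accepted. O(n) time and O(n) extra memory per step.
    static Integer shuffledSelection(List<Integer> moves, Random random) {
        List<Integer> copy = new ArrayList<>(moves);
        Collections.shuffle(copy, random);
        return copy.get(0);
    }

    // JIT-style approach: pick a random move only when the algorithm asks
    // for one. No copy, no shuffle: O(1) time and memory per selection.
    static Integer jitSelection(List<Integer> moves, Random random) {
        return moves.get(random.nextInt(moves.size()));
    }

    public static void main(String[] args) {
        Random random = new Random(0);
        List<Integer> moves = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            moves.add(i);
        }
        // Both return a uniformly random move, but the JIT variant does no
        // per-step O(n) work, leaving more time for score calculations.
        System.out.println("shuffled pick: " + shuffledSelection(moves, random));
        System.out.println("JIT pick: " + jitSelection(moves, random));
    }
}
```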

Bigger datasets show even better results. Do note that in non-JIT mode (cacheType STEP or PHASE), the new Selector architecture produces a graph and results similar to the old Selector architecture.
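For reference, here's a sketch of what switching the cacheType might look like in the solver configuration. The surrounding elements are abbreviated and the exact element names may differ per Planner version, so treat this as an approximation rather than a copy-paste config.

```xml
<localSearch>
  <changeMoveSelector>
    <!-- Generate moves just in time, never caching the full move list -->
    <cacheType>JUST_IN_TIME</cacheType>
  </changeMoveSelector>
  <swapMoveSelector>
    <!-- Cache the move list once per step, similar to the old architecture -->
    <cacheType>STEP</cacheType>
  </swapMoveSelector>
  ...
</localSearch>
```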

Dataset machinereassignment B10

The B10 dataset has 50 000 planning entities and 10 000 planning values with a search space of 10^184948.

Before (without the new selectors):

OutOfMemoryError
The old Selector architecture simply could not scale to this problem size on a regular computer.

After (with the new selectors in JIT mode):

Solving ended: time spend (60994), best score (0hard/-37237760070soft), average calculate count per second (18302).

This dataset is so big that it takes over 750 MB and half a minute just to load it from file into memory. However, running Planner on top of it barely takes 200 MB more, so in this case Planner's memory footprint is only about 27% of the dataset itself.

Conclusion

When the use cases become really big, traditional selector methods run into memory limits. In those cases, JIT selectors don’t: they continue to scale out.

However, most use cases are smaller, with only a few thousand planning entities. Those have plenty of free memory and therefore see little or no benefit from JIT selectors. But they do benefit from other new features in the new Selector architecture, such as filtering and probability selection. I'll blog about those soon.
