Even a basic planning problem, such as bin packing, is notoriously hard to solve at scale. One might consider the brute force algorithm. Let’s take a look at how that algorithm works out on the cloud balance example of Drools Planner:
Given a set of servers with different hardware (CPU, memory and network bandwidth)
and given a set of processes with different hardware requirements,
assign each process to 1 server
and minimize the total cost of the active servers.
The brute force algorithm is simple: try every possible combination by assigning each process to each server in turn. For example, if we have 6 processes (P0, P1, P2, P3, P4, P5) and 2 servers (S0, S1), we’d try these solutions:
- P0->S0, P1->S0, P2->S0, P3->S0, P4->S0, P5->S0
- P0->S0, P1->S0, P2->S0, P3->S0, P4->S0, P5->S1
- P0->S0, P1->S0, P2->S0, P3->S0, P4->S1, P5->S0
- P0->S0, P1->S0, P2->S0, P3->S0, P4->S1, P5->S1
- ...
- P0->S1, P1->S1, P2->S1, P3->S1, P4->S1, P5->S1
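The enumeration above can be sketched as a base-S counter: each assignment is a number with one digit per process, where each digit is a server index. A minimal sketch (the score calculation of the real cloud balance example is omitted and replaced by a counter):

```java
public class BruteForce {
    // Enumerate every assignment of `processes` processes to `servers` servers.
    // Each assignment is a base-`servers` number with `processes` digits.
    static long enumerate(int processes, int servers) {
        int[] assignment = new int[processes]; // assignment[p] = server index of process p
        long count = 0;
        while (true) {
            count++; // a real solver would calculate the score of `assignment` here
            // Increment the assignment like a base-`servers` counter.
            int p = processes - 1;
            while (p >= 0 && assignment[p] == servers - 1) {
                assignment[p] = 0;
                p--;
            }
            if (p < 0) {
                return count; // counter wrapped around: all combinations visited
            }
            assignment[p]++;
        }
    }

    public static void main(String[] args) {
        System.out.println(enumerate(6, 2)); // 2^6 = 64 combinations
        System.out.println(enumerate(9, 3)); // 3^9 = 19683 combinations
    }
}
```

Even without scoring anything, the loop body runs servers^processes times, which is where all the running time below goes.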
On my machine, it takes 15ms to calculate the score of these 2^6 = 64 combinations. When I scale out to 9 processes and 3 servers, which is 3^9 = 19683 combinations, it becomes 1582ms. So it scales like this:
Notice that even though the number of processes hasn’t even doubled, the running time has multiplied by 100! For comparison, I’ve added the running time of the First Fit algorithm.
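To see why First Fit stays fast, here is a generic First Fit sketch (not Drools Planner’s actual implementation, and modeling only a single resource dimension such as CPU, where the real cloud balance example also tracks memory and network bandwidth). It assigns each process to the first server that still has room, so it does a single O(processes × servers) pass instead of enumerating servers^processes combinations:

```java
import java.util.Arrays;

public class FirstFit {
    // First Fit sketch: assign each process to the first server with enough
    // remaining capacity; -1 means no server fits.
    static int[] firstFit(int[] processCpu, int[] serverCpu) {
        int[] remaining = serverCpu.clone();
        int[] assignment = new int[processCpu.length];
        for (int p = 0; p < processCpu.length; p++) {
            assignment[p] = -1;
            for (int s = 0; s < remaining.length; s++) {
                if (remaining[s] >= processCpu[p]) {
                    remaining[s] -= processCpu[p];
                    assignment[p] = s;
                    break;
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        int[] processCpu = {4, 3, 2, 5}; // hypothetical CPU requirements
        int[] serverCpu = {6, 6, 6};     // hypothetical CPU capacities
        System.out.println(Arrays.toString(firstFit(processCpu, serverCpu)));
        // → [0, 1, 0, 2]
    }
}
```

The trade-off: First Fit scales, but it greedily commits to early assignments and can miss cheaper solutions that brute force would find.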
And it gets worse: for 12 processes and 4 servers, which is 4^12 combinations, it takes more than 17 minutes:
What if we want to distribute 3000 processes over 1000 servers? With this kind of scalability, it would take far too long. In practice, the brute force algorithm is useless.
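To put a number on “too long”: that search space has 1000^3000 combinations. A quick illustrative calculation (plain JDK, nothing Drools Planner specific) shows how many decimal digits that count has:

```java
import java.math.BigInteger;

public class SearchSpace {
    public static void main(String[] args) {
        // 1000 servers, 3000 processes → 1000^3000 possible assignments
        BigInteger combinations = BigInteger.valueOf(1000).pow(3000);
        // 1000^3000 = 10^9000, so the count is a 1 followed by 9000 zeros
        System.out.println(combinations.toString().length()); // → 9001 digits
    }
}
```

No amount of hardware makes a number with 9001 digits enumerable, so a smarter algorithm is the only way out.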
Luckily, Drools Planner implements several other optimization algorithms, which can handle such loads. If you want to know more about them, take a look at the Drools Planner manual or come to my talk at JUDCon London (31 Oct – 1 Nov).