The last (half) day, where I had to present as well (as the 3rd speaker of the day).
A Well-Mixed Cocktail: Blending Decision and RPA Technologies in 1st Gen Design Patterns
Lloyd introduced an RPA-enabled case management platform, used in this case to determine eligibility under the Affordable Care Act. Using Sapiens for decisions and Appian for BPM, approximately 4000 people use this as a work management application (where work is assigned to people so they can work through it). To achieve higher throughput, however, they combined this with RPA robots that emulate the behavior of the users. He showed (unfortunately in a prerecorded video, not a live demo) how they implemented the robots to perform some of the work (up to 50% of the total work done by the users!). The robots learned how to soft-fail if there were issues (in which case the work would go back into the queue), had to accommodate for latency, etc.
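The soft-fail pattern described above can be pictured as a simple worker loop: the robot claims a work item, attempts it, and on any issue returns it to the queue for a human rather than blocking or crashing. A minimal, hypothetical sketch in Python (the queues and the `process` function are illustrative, not part of the actual platform):

```python
import queue

work_queue = queue.Queue()   # items waiting to be worked on
human_queue = queue.Queue()  # items the robot soft-failed back to people

def process(item):
    """Hypothetical automated handler; raises on anything it cannot handle."""
    if item.get("needs_judgment"):
        raise ValueError("requires human judgment")
    return {**item, "status": "done"}

def robot_worker():
    """Drain the work queue; soft-fail problematic items to the human queue."""
    done = []
    while not work_queue.empty():
        item = work_queue.get()
        try:
            done.append(process(item))
        except Exception:
            # Soft fail: don't crash, just put the work back for a person
            human_queue.put(item)
    return done

work_queue.put({"id": 1, "needs_judgment": False})
work_queue.put({"id": 2, "needs_judgment": True})
completed = robot_worker()  # item 1 is done, item 2 goes to human_queue
```

The key design choice is that any failure is non-fatal: the robot only ever removes work it fully completed, so humans remain the fallback path.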
Emergent Synthetic Process
Keith Swenson – Fujitsu
Keith presented a way to customize processes to different contexts (for example, slightly different regulations or approaches in different countries) by generating a customized process for your specific context when you start it. Rather than encoding processes in a procedural manner (after A do B), he uses “service descriptions” to define the tasks and their preconditions. You can then generate a process by specifying your goal and context and working backwards from there. Since this logic is much more declarative (and therefore additive), new tasks can easily be added to these processes.
The demo showed a travel application with approval by different people. Service descriptions can have required tasks, required data, etc. The process is generated by working backwards from the goal, adding required steps one by one. Different countries can add their own steps, leading to small customizations in the generated process.
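The backward generation can be illustrated with a tiny backward-chaining sketch: each service description names the data it requires and produces, and the generator walks back from the goal, adding steps until every precondition is met. Everything here (task names, data items) is hypothetical, not Fujitsu's actual format:

```python
# Service descriptions: task -> (required data, produced data)
services = {
    "book_travel":      ({"manager_approval"}, {"booking"}),
    "manager_approval": ({"request"}, {"manager_approval"}),
    "file_request":     (set(), {"request"}),
}

def generate_process(goal, services, available=frozenset()):
    """Work backwards from the goal data item, emitting an ordered task list."""
    plan = []
    def satisfy(needed):
        for item in needed:
            if item in available or any(item in services[t][1] for t in plan):
                continue  # already available or already produced by the plan
            # find a service that produces this item
            task = next(t for t, (_, prod) in services.items() if item in prod)
            satisfy(services[task][0])  # first satisfy its own preconditions
            if task not in plan:
                plan.append(task)
    satisfy({goal})
    return plan

plan = generate_process("booking", services)
# -> ['file_request', 'manager_approval', 'book_travel']
```

A country-specific variation is then just one more service description whose output some existing task requires; the generator picks it up without anyone editing a flow diagram, which is the additive property Keith highlighted.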
Automating Human-Centric Processes with Machine Learning
Kris Verlaenen – Red Hat
I was up next! I presented on how to combine Process Automation and Machine Learning (ML) to create a platform that retains the benefits of explicitly encoding business logic (using business processes, rules, etc.), but at the same time can become more intelligent over time by observing and learning from the data during execution. The focus was on introducing “non-intrusive” ways of combining processes with ML, to assist users in performing their tasks rather than trying to replace them.
The demo used the it-orders application (one of our out-of-the-box case management demos that employees can use to order laptops) and focused on 3 main use cases:
- Augmenting task data: While human actors are performing tasks in your processes or cases, we can observe the data and try to predict task outcomes based on task inputs. Once the ML algorithm (a Random Forest, implemented with the SMILE library) has been trained a little, it can start augmenting the data with possible predictions, but also with its confidence in each prediction, the relative importance of the input parameters, etc. In this case, the manager approving the order would be able to see this augmented data in the task form and use it to make the right decision.
- Recommending tasks: Case management allows users to add additional dynamic tasks to running cases in specific situations (even though they weren’t modeled in the case upfront). These too can be monitored, and ML can be used to detect patterns. The patterns can be turned into recommendations, where a user is presented with a suggestion to do (or assign) a task based on what the ML algorithm has learned. This can significantly help users not to forget things, or assist them by preparing most of the work (they simply have to accept the recommendation).
- Optimizing processes based on ML: One of the advantages of the Random Forest algorithm is that you can inspect the decision trees that are being trained to see what they have learned so far. Since ML also has disadvantages (it can be biased, or it simply learns from what is being done, which is not necessarily correct behavior), analyzing what was learned so far and integrating this back into the process (and/or rules, etc.) has significant advantages as well. We extended the existing case with additional logic (such as an additional decision service to determine whether some manager approvals could be automated, or additional ad-hoc tasks included in the case that would be triggered under certain circumstances), so that some of the patterns detected by ML would be encoded and enforced by the case logic itself.
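The first use case above (augmenting task data with a prediction and a confidence) can be sketched without any ML library at all. The actual demo used SMILE's Random Forest in Java; this is a deliberately minimal Python stand-in that just counts historical outcomes per observed task input, with hypothetical field names:

```python
from collections import Counter, defaultdict

class TaskOutcomePredictor:
    """Minimal stand-in for the Random Forest used in the demo:
    counts historical outcomes per observed combination of task inputs."""
    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, inputs, outcome):
        """Record the outcome a human produced for these task inputs."""
        self.history[tuple(sorted(inputs.items()))][outcome] += 1

    def augment(self, inputs):
        """Return the task data augmented with a prediction and confidence."""
        counts = self.history.get(tuple(sorted(inputs.items())))
        if not counts:
            return {**inputs}  # nothing learned yet for these inputs
        outcome, n = counts.most_common(1)[0]
        return {**inputs,
                "prediction": outcome,
                "confidence": n / sum(counts.values())}

predictor = TaskOutcomePredictor()
for _ in range(3):
    predictor.observe({"item": "laptop", "price": 1200}, "approved")
predictor.observe({"item": "laptop", "price": 1200}, "rejected")

form_data = predictor.augment({"item": "laptop", "price": 1200})
# form_data now also carries prediction='approved', confidence=0.75
```

The non-intrusive part is that `augment` only adds fields to the task data: the manager still sees the form and makes the call, with the prediction alongside.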
These non-intrusive ways of combining processes with ML are very complementary (they let us take advantage of both approaches, mitigating some of the disadvantages of ML) and allow users to start reaping the benefits of ML and build up confidence in small, incremental steps.
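The second use case (recommending dynamic tasks) follows the same observe-then-suggest shape: record which ad-hoc tasks users add in a given case situation, and once a pattern has enough support, surface it as a recommendation. A hypothetical sketch with made-up state and task names:

```python
from collections import Counter

class TaskRecommender:
    """Sketch: learn which ad-hoc tasks users tend to add in a given case
    situation, then recommend them when that situation recurs."""
    def __init__(self, min_support=2):
        self.counts = Counter()        # (case_state, task) -> occurrences
        self.min_support = min_support  # how often before we dare suggest it

    def observe(self, case_state, added_task):
        self.counts[(case_state, added_task)] += 1

    def recommend(self, case_state):
        return [task for (state, task), n in self.counts.items()
                if state == case_state and n >= self.min_support]

rec = TaskRecommender()
rec.observe("order>2000", "extra_approval")
rec.observe("order>2000", "extra_approval")
rec.observe("order<100", "auto_ship")   # seen only once, below support

suggestions = rec.recommend("order>2000")  # -> ['extra_approval']
```

Again the user stays in charge: a recommendation is pre-filled work to accept or dismiss, not an automated change to the case.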
DMN TCK
Keith Swenson – Fujitsu
Keith explained the efforts going into the DMN TCK, a set of tests to verify the compliance of DMN engines. A run takes a large number of models and test cases (currently over a thousand, and still growing) and checks the results. He also explained some of the challenges and opportunities in this context (e.g. error handling).
While many vendors claim DMN compatibility, Red Hat is one of the few vendors that actually has the results to prove it!
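Conceptually a TCK run is simple: feed the engine each test case's inputs, compare the outputs against the expected results, and tally pass/fail (with errors counting as failures, which is exactly where the error-handling questions come in). A minimal, hypothetical sketch; the real TCK defines models and test cases in XML:

```python
def run_tck(evaluate, test_cases):
    """Run each case against a DMN engine's evaluate(model, inputs)
    function and report pass/fail counts."""
    passed = failed = 0
    for case in test_cases:
        try:
            actual = evaluate(case["model"], case["inputs"])
            ok = actual == case["expected"]
        except Exception:
            ok = False  # how errors should be reported is itself debated
        passed, failed = passed + ok, failed + (not ok)
    return {"passed": passed, "failed": failed}

# Toy engine and cases standing in for real DMN models
def toy_engine(model, inputs):
    if model == "discount":
        return {"discount": 0.1 if inputs["total"] > 100 else 0.0}
    raise ValueError("unknown model")

cases = [
    {"model": "discount", "inputs": {"total": 150}, "expected": {"discount": 0.1}},
    {"model": "discount", "inputs": {"total": 50},  "expected": {"discount": 0.0}},
    {"model": "missing",  "inputs": {},             "expected": {}},
]
report = run_tck(toy_engine, cases)  # -> {'passed': 2, 'failed': 1}
```

Because the harness only talks to the engine through one evaluation call, the same test suite can be pointed at any vendor's engine, which is what makes published TCK results comparable.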
That concludes bpmNEXT 2019! As in previous years, I very much enjoyed the presentations, but probably even more the discussions during the breakouts and evenings.