We are happy to announce a fresh Kogito Tooling release!
In this release, we made many improvements and bug fixes. We made significant progress on the DMN/BPMN Editors, delivered a brand-new UX for the Online Editor, and also improved the ‘native’ experience of VSCode.
Another important milestone is the first experimental release of our extension using the new VSCode custom webview proposed API. To give it a try, you will need to download the latest version of VSCode (1.43.0), install the specific extension (vscode_extension_kogito_kie_editors_0.2.8-new-webview-api-release.vsix) and run VSCode with the following command:
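A sketch of that command, assuming VSCode’s standard --enable-proposed-api flag (the extension ID below is a guess based on the .vsix name):

code --enable-proposed-api kiegroup.vscode-extension-kogito-kie-editors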
In case you don’t want to run VSCode in proposed API mode, for now we still ship vscode_extension_kogito_kie_editors_0.2.8.vsix. Included in this release:
DMN Editor Improvements/Bug Fixes
KOGITO-964: Run on VSCode all DMN demos published on Kogito examples
What a great year! I’m really proud of everything that our team delivered in 2019. Without a shadow of a doubt, this was the most exciting year for our team. Our team’s primary focus was on Business Central and Kogito Tooling.
First of all, I would like to congratulate the entire engineering team for the excellent work, including Adriel 🇦🇷, Paulo 🇧🇷, Tiago 🇧🇷, Pere 🇪🇸, William 🇧🇷, Gabriel 🇧🇷, Guilherme 🇧🇷, Rishiraj 🇮🇳, and Abhishek 🇮🇳, with all the support from our manager David 🇪🇸, our Product Owner Myriam 🇲🇽 and our architect Alex 🇧🇷.
All this work is only possible because we collaborate with an impressive QE Team (Tomas 🇨🇿, Barbora 🇸🇰, Dominik 🇸🇰, and Jakub 🇨🇿), and an awesome UXD Team (Liz 🇺🇸, Sarah 🇺🇸, Kyle 🇺🇸 , Amy 🇺🇸 , SJClark🇺🇸 and Brian 🇺🇸). A huge thanks to everyone involved with foundation work.
Let’s do a retrospective of what our team delivered in 2019 on two major areas: Business Central and Kogito Tooling. I hope you guys enjoy it!
Kogito Tooling
Kogito, the newest project from the KIE group, is cloud-native business automation for building intelligent applications, backed by battle-tested capabilities.
Kogito is designed from the ground up to run at scale on the cloud. By taking advantage of the latest technologies (Quarkus, Knative, etc.), you get amazingly fast boot times and instant scaling on orchestration platforms like Kubernetes.
The project also goes beyond technology: it provides a focus on the domain, replacing the traditional generic API with a domain-based API and avoiding unnecessary abstraction leaks from the underlying tools.
Kogito focuses on developers, and this is where the BPMN extension for VSCode plays an important role. In addition to the extension, Kogito also provides:
Tooling embeddable wherever you need it;
Code generation taking care of 80% of the work;
Flexibility to customize, only use what you need;
Simplified local development with live reload.
To learn more about Kogito, please visit the kogito.kie.org website.
In the second half of 2019, our team was responsible for building the infrastructure of the Kogito Tooling.
It was a remarkable year for the Kogito Tooling! I’m proud of the excellent work and innovation that we were able to accomplish in this initiative. Let’s take a look:
BPMN/DMN VSCode Extension
This release in September marked the first piece of the new tooling infrastructure for the KIE Group team. With VSCode support, our goal was to streamline the dev workflow of our platform, making it easy for developers to create BPMN diagrams and push them straight to the Kogito runtimes.
In October, we released our GitHub Chrome Extension. With this new tool, users are able not just to edit but also visualize DMN/BPMN diagrams — which is especially cool on Pull Request reviews.
See the original blog post, and the posts describing the cool improvements [1] [2] that we launched in December:
The Kogito Online Editor provides a simple way to edit DMN and BPMN files directly on your browser. You can create a file from scratch or upload an existing one from your device.
Kogito Tooling Resource Content API
In December, we also launched the Resource Content API, a common API able to fetch content across different channels (VSCode/Chrome/Online). An example of this need is the DMN Editor, which imports other DMN models; another is a runtime Process Admin UI that needs access to a process definition (the content of a BPMN file). See the full blog post for details.
That is it for the Kogito Tooling side! Now let’s take a look at what we achieved on Business Central.
Business Central
KIE (Knowledge Is Everything) is an umbrella brand offering a complete portfolio of solutions for business automation. It contains a group of related projects, including Drools (business rule management system), jBPM (a flexible Business Process Management Suite), and OptaPlanner (constraint solver).
The web tooling to interface and integrate with those projects is called Business Central: a web UI to author, manage and track business rules and processes.
Let’s take a look at what our team delivered in this area.
High Availability
Some weeks ago, the KIE Foundation Team achieved a significant milestone: building a cloud-friendly, production-ready HA infrastructure for the jBPM, Drools, and OptaPlanner tooling (Business Central).
This journey, targeting a fail-safe infrastructure for Business Central, required refactoring and redesigning some pieces of our codebase, with significant changes to the filesystem, index engine, distribution of events, and the Business Central UI. Still, in the end, we are delighted with the result, seeing Business Central running smoothly in a clustered environment (especially on OpenShift). Take a look at the full blog post.
Streamline the Dev Workflow on Business Central
Based on feedback from our community, one of our main goals in BC this year was to enhance the developer workflow for rules/BPM. In that area we delivered:
Git Hooks Samples
Some cool examples to make it easier for you to automate git workflows via git hooks. You should take a look.
Git Hooks Execution Feedback Messages
Based on your feedback on our git hooks integration, we included a way to customize the messages shown on script execution.
Business Central now provides a mechanism that gives users feedback about git hook execution, using customized messages based on the hook exit code.
SSH Keystore
To provide better automation on our platform, we delivered an SSH keystore in Business Central. This means that users can store their public keys inside the workbench and use them to authenticate their automation scripts via SSH keys.
Contributors on Business Central
We’ve created three roles integrated directly into spaces and projects (Contributor/Admin/Owner). These roles allow you to fine-tune the permissions on your spaces and projects, and soon this will deprecate Security Permissions for spaces and projects. Take a look at the full blog post.
Role-Based Access Control for Branches
Built upon the Contributors feature, we are proud to release role-based access control for branches in Business Central.
With this new feature, we provide a new UI in the project settings that allows users to restrict access to a target branch for a specific collaborator type. This is pretty useful, for instance, when you want to freeze a release branch for some roles. Take a look at the demo:
Import specific branches on Business Central
Sometimes you don’t want to import all the branches from your repo, so we made it possible to import only specific branches.
Speed up the Developer Workflow
As I already mentioned, one of our goals was to enhance the developer workflow for rules/BPM. We included in BC two things that we believe will be game changers for developers who work with Business Central on a daily basis.
The first feature was the ‘Build and Install’ button in authoring. This allows users to build their projects without needing to deploy.
The second valuable addition was two new Decision/Process Server modes: DEVELOPMENT (the new default) and PRODUCTION (which blocks "-SNAPSHOT" deployments).
These changes allow users to quickly build and deploy their projects, making it easier (and saving many clicks) to test changes on the KIE Server while preserving production consistency. First, let’s take a look at our production mode.
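A hedged sketch of switching modes at startup, assuming the KIE Server system property documented in the jBPM docs (org.kie.server.mode):

./bin/standalone.sh -Dorg.kie.server.mode=PRODUCTION    # or DEVELOPMENT, the new default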
Workspace collaboration via change requests
We also included a cool new feature targeting Business Central collaboration: Change Requests support between Business Central branches.
This new workflow allows users to submit their changes for approval from one branch to another, as well as the ability to review the changes prior to the integration.
See details about this feature in this post and a sneak peek in this video:
Form modeler support for class models from external dependencies
We created a way to allow users to load models from external dependencies in forms. This allows us to generate forms for processes that use classes from external dependencies (classes not created in the same project with the Data Modeler, but added as dependencies of the project).
Upload of multiple documents for human task forms
We created a widget called “Document Collection” in the forms designer. It enables you to upload multiple documents to a process or task form. You can use the “Document Collection” widget for process or task forms that have a variable of type `org.jbpm.document.DocumentCollection`. It also supports the legacy type `org.jbpm.document.Documents`.
Importing and exporting Dashbuilder data
In Business Central 7.25 (August), we added a feature that allows users to import and export all data related to Dashbuilder: datasets, perspectives, and navigation.
CSS Editor on Form Builder
We also created a cool new CSS Editor in our Form Builder. Take a look at the demo:
React components in Red Hat Business Central
We also created a way to write Business Central components based on React. Take a look at the full blog post.
Business Central Brand Consolidation
To consolidate and better promote our technology stack, since version 7.18 we have consolidated and rebranded the workbenches into a single distribution called Business Central (accessible via /business-central). Take a look at this post for a deep dive into this migration.
Offline Charts
The default Dashbuilder chart implementation was replaced, moving from Google Charts to C3.js. The main reason behind this move is to allow our users to run Business Central in an offline environment (Google Charts requires an internet connection).
UX improvements
With excellent collaboration from the UX team, we were able to improve the Business Central UX considerably. Some highlights:
AF-2176 — Error messages need to be able to be copied
In the past, error messages often overran the size of the field at the bottom of the web page. The only way to read them was hovering over the field, and there was no way to copy the text of a message. We fixed this by formatting the error message and also adding a copy button that copies all the errors in CSV format.
AF-2244 — UX support/guidance after new deployment
After deploying a new project, the user now gets a useful ‘View deployment details’ link.
AF-2151 — Modify asset Save button behavior and presentation
Instead of asking the user every time to add a comment to the file commit on the Save action, we now provide a split button with the primary (default) option being simply Save the file, with no dialog presented before saving — streamlining the dev workflow.
If the user wants to comment on the save operation, they can still use the new “Save with comments” option, which presents a dialog asking for the commit comment.
AF-2214 — User confirmation when closing the workbench with unsaved stuff
If the user had unsaved changes and closed the browser, then they would lose all changes without being warned about it. Thus, as a last resort, after this PR the browser will ask the user for confirmation when closing the tab, closing the browser or refreshing the page. (Currently, this functionality is only supported on Google Chrome).
AF-2215 — When you add or remove a field from the form, it scrolls up.
This fixes the issue where every time you added or removed a field from a form, it would scroll to the top. Now we keep the form at the same position after every edit.
AF-2216 If I shutdown the server, the Web UI just spins and spins w/o an error message
After this PR, if the server is shut down, we show an appropriate pop-up saying that the server is being shut down, instead of a generic error message.
AF-2177 — Add Project button should also allow importing
AF-2213 — Import project URL cleanup
If the URL has leading or trailing spaces when importing a git project, the import fails. Field validation now handles this automatically for the user.
Improvement in generic error dialogues
We added a bunch of new features to improve the generic error dialogues in Business Central. See the full blog post about it.
Thank you to everyone involved!
I would like to thank everyone involved in this remarkable year: the awesome Foundation Team engineers, the lifesaving QEs and Docs folks, and all the UX people who help us make our work look fantastic!
Recently, we pushed a lot of cool new features to Business Central, added by the Foundation Team. Those features will be available soon in the 7.30 release [1].
This post will do a quick overview of those. I hope you guys enjoy it!
[1] As we deliver incrementally, some of these features are already released on previous versions.
High Availability on Business Central
Some weeks ago, the KIE Foundation Team achieved an important milestone: building a cloud-friendly, production-ready HA infrastructure for the jBPM, Drools, and OptaPlanner tooling (Business Central).
Business Central HA Architecture
This journey, targeting a fail-safe infrastructure for Business Central, required refactoring and redesigning some pieces of our codebase, with major changes to the filesystem, index engine, distribution of events, and the Business Central UI. But in the end, we are really happy with the result, seeing Business Central running smoothly in a clustered environment (especially on OpenShift).
Soon, Adriel Paredes (our engineer leading this effort) will share a detailed blog post about this new architecture. Stay tuned.
Workspace collaboration via change requests
Some releases ago, we also included a cool new feature targeting Business Central collaboration: Change Requests support between Business Central branches.
This new workflow allows users to submit their changes for approval from one branch to another, as well as the ability to review the changes prior to the integration.
See this feature in detail in Guilherme’s post and a sneak peek in this video:
UX improvements
With great collaboration from the UX team, we were able to improve the Business Central UX considerably. Some highlights:
AF-2176 — Error messages need to be able to be copied
In the past, error messages often overran the size of the field at the bottom of the web page. The only way to read them was hovering over the field, and there was no way to copy the text of a message. We fixed this by formatting the error message and also adding a copy button that copies all the errors in CSV format.
AF-2244 — UX support/guidance after new deployment
After deploying a new project, the user now gets a useful ‘View deployment details’ link.
AF-2151 — Modify asset Save button behavior and presentation
Instead of asking the user every time to add a comment to the file commit on the Save action, we now provide a split button with the primary (default) option being simply Save the file, with no dialog presented before saving — streamlining the dev workflow.
If the user wants to comment on the save operation, they can still use the new “Save with comments” option, which presents a dialog asking for the commit comment.
AF-2214 — User confirmation when closing the workbench with unsaved stuff
If the user had unsaved changes and closed the browser, then they would lose all changes without being warned about it. Thus, as a last resort, after this PR the browser will ask the user for confirmation when closing the tab, closing the browser or refreshing the page. (Currently, this functionality is only supported on Google Chrome).
AF-2215 — When you add or remove a field from the form, it scrolls up.
This fixes the issue where every time you added or removed a field from a form, it would scroll to the top. Now we keep the form at the same position after every edit.
AF-2216 If I shut down the server, the Web UI just spins and spins without an error message
After this PR, if the server is shut down, we show an appropriate pop-up saying that the server is being shut down, instead of a generic error message.
AF-2177 — Add Project button should also allow importing
AF-2213 — Import project URL cleanup
If the URL has leading or trailing spaces when importing a git project, the import fails. Field validation now handles this automatically for the user.
Improvement in generic error dialogues
We added a bunch of new features to improve the generic error dialogues in Business Central. See the full blog post from Rishiraj about it.
Other bug fixes and improvements:
We also fixed several bugs and made some performance improvements in Business Central:
AF-2324 Performance issues when opening assets with open Project Explorer
JBPM-8826 — Forms — 10 Listeners get added each time a form is rendered and memory leaks appears
AF-1768 Errors on Windows when login user account contains special character
AF-1919: Upgrade Bootstrap to 3.4.1
AF-2292 Remove Angular and Knockout from Business Central War
AF-2223 Filter by asset type displaying no results
AF-2245 Dashbuilder not closing ResultSets and Statements
AF-2384: Cloning from remote git repo that requires credentials does not work
AF-2125: Splitting ace and core editors from base widgets
AF-2283: Cannot open standalone perspective in Firefox
AF-2162: Roles permissions are not persisted and reset
AF-2054: Open asset is not updated for user who push the change
Thank you to everyone involved!
I would like to thank everyone involved in this release: the awesome Foundation Team engineers, the lifesaving QEs, and all the UX people who help us make our work look awesome!
We are happy to announce a fresh new Kogito Tooling release that includes a major milestone for our team — the DMN support for VSCode and GitHub Chrome extension.
We also added some important enhancements to our GitHub Chrome Extension. Now users are able not just to edit but also to visualize DMN/BPMN diagrams, which is especially cool on Pull Request reviews.
First, as always, let’s take a look at the demo! Please pay special attention to the pull request workflow.
Awesome, isn’t it? Alex Porcelli wrote a detailed post about how this feature is a game changer for the BPMN/DMN developer workflow. Take a look at his blog.
Decision Model and Notation Support
With great collaboration from the Drools Tooling team (highlighting the impressive work of Michael Anstis, Gabriele Cardosi and Guilherme Carreiro Gomes), we reached a major milestone in the Kogito tooling. You can now create your DMN files inside VSCode and in the GitHub Chrome Extension.
Chrome Extension New Features
As promised in the last post, in this release we also added two major improvements to our Chrome Extension: the ability to visualize BPMN/DMN diagrams in any repo (not only when editing), and a tight integration with the GitHub PR review mechanism.
You are now able not only to visualize the current diagram; if you click on the Original button, you will also see the state of the file currently in the repository. We hope this will help you review your models directly in GitHub Pull Requests, the same way you do for any source code.
NOTE: When editing a file directly on GitHub’s interface and committing it, GitHub takes a while to make the new file available on raw.githubusercontent.com. Since that’s where we fetch the files from, you might see outdated versions for a while. Don’t panic! After a few moments the files will be in sync.
Are you tired of (impossible) big XML code reviews of BPMN diagrams on GitHub Pull Requests?
We know your pain and that is exactly the reason why we just released a new Chrome Extension that allows visualizing and editing BPMN files directly on GitHub’s interface. 🎉
Before diving on details, let’s take a look at a quick demo of this feature:
Pretty cool, isn’t it?
With this Chrome extension, our goal is to streamline the dev workflow even further, making Kogito the most developer-friendly business automation platform.
How to set up the extension on Chrome
During this alpha stage, you will have to download the extension from the GitHub releases page and install it manually in Chrome. Soon we will publish this extension on the Chrome Web Store, but for now, these are the installation steps:
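A rough outline of the manual install (the exact release artifact name may differ):

1. Download the extension zip from the GitHub releases page and unzip it locally.
2. Open chrome://extensions and enable “Developer mode”.
3. Click “Load unpacked” and select the unzipped extension directory.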
Features
With the BPMN GitHub Chrome extension installed, every time you edit a BPMN/BPMN2 file, instead of seeing the huge XML file you will be using our BPMN graphical editor. After you modify the file, you can commit it or send a PR directly from the GitHub interface.
We also provide some advanced features, like full-screen visualization for big diagrams. You can also click on “See as source” to edit the XML manually, and you can always go back to the diagram by clicking on “See as diagram”.
What’s Next
We have big plans for our extension, including visualizing the BPMN diagram in any repo (not only when editing) and deeper integration with the PR review mechanism.
Soon we will release a DMN editor extension under the same platform. Stay tuned!
Known issues
We are really happy with the results of this alpha release, and we want to get the community involved as early as possible. That is why we released it with some known issues:
KOGITO-342 Check why BPMN editor shows error on page closing.
One of our major goals in the 7+ series of Business Central is to gradually move towards a cloud-ready environment. (Porcelli and I will talk about this at the next Oracle Code One.)
In that direction, in 7.0 we did a major rewrite of BC clustering technology, moving away from Zookeeper and Helix in order to simplify the setup and take advantage of provided infrastructure, especially in a containerized environment like OpenShift.
This post gives a quick overview of the new Business Central cluster setup and also explains some implementation details for those who would like to go more in depth.
Photo by Douglas do Vale
This series of posts provides a description of the inventions that I’m proud of as a Computer Scientist. I talk about the decisions that I made and the steps that I took to figure out the solution for these problems.
Most of these contributions are a result of long conversations between me and the awesome members of Foundation Team (especially my friend and architect Alex Porcelli).
New Cluster Setup
Before diving into the details, let’s have a quick overview and do a basic hello world with the new clustered setup of Business Central.
Cluster Overview
The new Business Central cluster has three major components: a shared file system infrastructure to store our git filesystem (e.g. a Network File System), an indexing engine (used, for example, for listing and searching assets), and a JMS-based messaging system (used to share cluster messages, e.g. NIO2 WatchEvents). In this post, we will explore the storage and messaging aspects. The indexing subsystem will be the topic of a future blog post.
Business Central Cluster Basic Architecture
The old cluster setup was based on Zookeeper and Helix (for the global lock and intra-cluster messaging). This setup is indeed powerful, but the trade-off was an extra burden of setup and maintenance complexity on our users. Our goal for 7.0 was to provide the same functionality in a simpler, and yet container-friendly, architecture. Before diving into the details of this architecture, let’s do a quick hello world.
Basic Cluster Setup
Let’s create a basic Artemis configuration for messaging and two Business Central instances running on the same machine.
For messaging, the first step is to download Apache Artemis 2.3.0. After downloading it, unzip it and create a broker:
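Broker creation, from the bin directory of the unzipped Artemis distribution, looks roughly like this (the broker name is just an example):

./artemis create mybroker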
Inside the mybroker/bin directory, run the broker with:
./artemis run
Artemis Live
Please note that Artemis itself can also be configured in a clustered, high-availability mode. Take a look at the Artemis docs.
Now let’s configure the Wildfly instances. For this demo we will use standalone mode, but you can also use domain mode if that fits your use case.
Basic Cluster Setup
On wildfly1, copy the Business Central war into the standalone deployments directory of Wildfly (at the time of writing we support Wildfly 11.0.0.Final; support for Wildfly 14 is on the way) and run it with:
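A sketch of that startup command, built from the parameters described below (the URL and credentials are illustrative; 61616 is the Artemis default port):

./bin/standalone.sh \
  -Dappformer-jms-connection-mode=REMOTE \
  -Dappformer-jms-url=tcp://localhost:61616 \
  -Dappformer-jms-username=admin \
  -Dappformer-jms-password=admin \
  -Dorg.uberfire.nio.git.dir=/shared/niogit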
Let’s give some basic details about the parameters:
appformer-jms-connection-mode: we have two connection modes for messaging in the cluster, REMOTE (to connect to a remote message provider, which is our case) and JNDI (to use a messaging provider from the container itself).
appformer-jms-url: the remote message provider URL
appformer-jms-username: the remote message provider username
appformer-jms-password: the remote message provider password
On wildfly2, also copy the Business Central war into the standalone deployments directory of Wildfly and run it:
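A sketch of the second instance’s startup: the same parameters, plus a port offset so both instances fit on one machine (the offset value is illustrative):

./bin/standalone.sh \
  -Djboss.socket.binding.port-offset=100 \
  -Dappformer-jms-connection-mode=REMOTE \
  -Dappformer-jms-url=tcp://localhost:61616 \
  -Dappformer-jms-username=admin \
  -Dappformer-jms-password=admin \
  -Dorg.uberfire.nio.git.dir=/shared/niogit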
I had to change some default ports of Business Central because we are running two instances on the same machine, but the most important thing I would like to highlight is that both Wildfly instances point to the same nio git dir (org.uberfire.nio.git.dir). This is a central requirement for Business Central clustering.
How can I check if my cluster is ready? Open Business Central on both nodes, import the Mortgages project from the samples, and open the same file on both nodes (e.g. Dummy rule.drl). As soon as you start editing the file on one node, it will lock the file on the other node. Locking a file is one of the cluster message use cases that we will explore in detail in the next section.
Cluster Hello World
Simpler than the 6.x version, isn’t it? But how does this work under the hood? How do we keep the niogit state synced? How do we trigger messages in this new infrastructure?
Architecture and Implementation
That is always my favorite part. Let’s understand how we implemented this solution, splitting it into two areas: messaging and global locking.
Messaging
The new ClusterService interface can have multiple implementations and is responsible for connecting to messaging systems and for consuming and broadcasting messages.
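As a mental model, the contract looks roughly like this (a sketch; the method names are illustrative, not the exact Uberfire API):

public interface ClusterService {
    void connect();
    // send a message to every node subscribed to the given channel
    <T> void broadcast(String channel, T message);
    // react to messages broadcast by other nodes
    <T> void createConsumer(String channel, Class<T> type, java.util.function.Consumer<T> listener);
    void disconnect();
}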
For now, we only have one implementation of this interface, which provides support for JMS (ClusterJMSService.java). But where is this service used?
Our backend provides a git Java NIO2 implementation (uberfire-nio2-jgit). Following the NIO2 model, our filesystem provides a WatchService implementation, and each filesystem event triggers a specific WatchEvent.Kind<T>.
It is the responsibility of the Business Central foundation platform to extend this model to a cluster environment. In general, a filesystem change in Business Central should send the event via cluster messaging and regenerate it on each node.
The beautiful part of this solution (and maybe this helps you understand how the Foundation Team builds the Business Central platform) is that, from the developer’s perspective, when triggering or consuming watch events they don’t need to worry about whether they are running on a single instance or in a cluster environment.
(Please don’t expect this WatchService event distribution to work in a cluster environment for regular NIO2 implementations. This is not the default NIO2 behavior, and as far as we know we are the only NIO2 implementation doing this.)
The WatchService and WatchService events will work transparently because we are following the same NIO2 programming model and we do all the cluster magic behind the scenes. (We took the same approach on CDI Events distribution).
Pretty cool, isn’t it? 😗
So every time we perform an FS operation, we publish the regular watch events for same-instance listeners, and if Business Central is in a cluster we also trigger this message on the cluster service:
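A simplified sketch of that publish path (LocalWatchService and WatchEventsWrapper are hypothetical names standing in for the real Uberfire types):

import java.nio.file.Path;
import java.nio.file.WatchEvent;
import java.util.List;

class ClusterAwareEventsPublisher {
    private final LocalWatchService localWatchService; // hypothetical same-instance dispatcher
    private final ClusterService clusterService;       // null when running single-instance
    private final String fsName;

    ClusterAwareEventsPublisher(LocalWatchService local, ClusterService cluster, String fsName) {
        this.localWatchService = local;
        this.clusterService = cluster;
        this.fsName = fsName;
    }

    void publishEvents(Path watchable, List<WatchEvent<?>> events) {
        // regular NIO2 behavior: notify the watchers on this instance
        localWatchService.publish(watchable, events);
        // cluster behavior: broadcast so every other node can replay the same events
        if (clusterService != null) {
            clusterService.broadcast("watch-events", new WatchEventsWrapper(fsName, watchable, events));
        }
    }
}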
For each FS, we also create a consumer for the cluster messages. As soon as we receive a cluster message that contains a watch event, we process it and retrigger it on the correct filesystem:
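And a matching sketch of the consuming side (again with hypothetical names):

// one consumer per filesystem: replay events that originated on other nodes
clusterService.createConsumer("watch-events", WatchEventsWrapper.class, message -> {
    if (!fsName.equals(message.getFsName())) {
        return; // the event belongs to a different filesystem
    }
    // retrigger locally, exactly as if the change had happened on this node
    localWatchService.publish(message.getWatchable(), message.getEvents());
});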
This code follows the NIO2 watch service spec but also receives WatchService messages generated on the other nodes in a transparent way. 😉
Locking
The second problem we had to solve is locking. In a single-instance environment, we prevent multiple threads from changing the filesystem state concurrently (in our case, doing a commit) by having a ReentrantLock for each filesystem. But how do we approach locking when we have multiple instances of the same filesystem? Basically, how do we ‘share’ a lock among all nodes of our cluster?
Do you remember that all nodes share the same network filesystem? To obtain this lock, for each filesystem we create a physical lock: a simple file in the root of the git repository (we use bare git repositories), and before doing any write, a node acquires a lock on this file via the Java FileChannel API.
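The core of that physical lock can be expressed with plain Java NIO; a minimal, self-contained sketch (the path and lock-file name are made up):

import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PhysicalLockSketch {
    public static void main(String[] args) throws Exception {
        Path lockFile = Paths.get("/shared/niogit/myrepo.git", "af.lock"); // hypothetical lock file
        try (FileChannel channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) { // blocks until no other node holds the OS-level lock
            // critical section: only one cluster node can be here for this repository
            System.out.println("Lock acquired, safe to commit.");
        } // lock and channel are released automatically
    }
}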
In that way, we have two layers of locks for each filesystem: the physical lock, which guarantees that just one instance writes to the FS at a given time, and the ReentrantLock, which prevents concurrent access to the same FS within an instance.
With this simple and elegant solution, using native Java APIs and a shared file system, we are able to reproduce the locking functionality of Zookeeper and Helix in a clustered Business Central.
Takeaways
In the end, I was really happy when we figured out this simple and elegant solution for our cluster stack. With this invention, our team was able to simplify the setup and take advantage of the provided infrastructure, especially in a containerized environment like OpenShift.
Although this architectural approach might have some limitations (we probably cannot scale to hundreds or thousands of nodes, but we already know how to fix this), applying this solution we were able to remove the extra burden of Helix/Zookeeper setup and maintenance complexity from our users, providing the same functionality in a simpler and yet container-friendly architecture.
Thanks for reading! I hope this could be useful for you — or just fun to read 😉 ! 💖
In my opinion, one of the great features of the JavaEE/Jakarta EE programming model is the CDI event mechanism.
But how does this mechanism work in a cloud environment? Have you ever wondered what you could achieve if you were able to fire a CDI event on one machine and observe it on another node? What if I told you that we achieved this in an almost transparent way?
Photo by Douglas do Vale
This series of posts describes the inventions that I’m proud of as a Computer Scientist. I talk about the decisions I made and the steps I took to arrive at these contributions.
Most of these contributions are the result of long conversations between me and the awesome members of Foundation Team (especially my friend and architect Alex Porcelli).
Problem Statement
On Business Central (web tooling for Drools and jBPM projects) we make extensive use of the CDI programming model.
With the Errai project we take the CDI programming model even further, because Errai allows us to observe CDI events in the browser. So basically we have the same programming model working on the client and the backend of our application.
For instance, when we fire a NewProjectEvent on the backend of the platform, the same event is observed by all connected clients (browsers), in order to quickly update the UI.
I’ll talk more about this in other blog posts, but we are gradually moving Business Central to a cloud-ready architecture. This move gave us an interesting problem:
“Having the same event programming model on the backend and frontend saves us a lot of time, and CDI has proved itself a great way to deal with events in a monolith. Is it possible to extend the same model to the cloud? Can an event fired on one node be triggered on all other nodes and, via Errai CDI, on all connected clients on all nodes?”
How
As I already mentioned, IMO one of the great features of the JavaEE/Jakarta EE programming model is the CDI event mechanism. This model is one of the cleanest ways to decouple your applications.
However, this mechanism was designed to work in single-instance mode and doesn’t fit the clustered environment use case well. Basically, there is no way to observe an event fired on another machine in the cluster; you would have to use some other event technology and even manual translation.
The main goal of this invention is to extend the CDI event mechanism to a clustered environment, making it easy and almost transparent for users to fire an event on one node and observe it on another. But how?
Metaprogramming to the rescue
1. First, let’s create a new annotation called @Clustered and add it to the events that we want to propagate across the cluster:
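A minimal sketch of such a marker annotation (the real one lives in the Uberfire codebase and may differ in details):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME) // must be visible at runtime so we can check for it when events fire
@Target(ElementType.TYPE)
public @interface Clustered {
}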
2. If the event has the @Clustered annotation, we serialise it and send a serialised cluster message with all the event data (on Business Central we use AMQ/Artemis for this) [check the code]:
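A rough sketch of the broadcast side (the observer and ClusterService names are hypothetical; the linked code has the real implementation):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

@ApplicationScoped
public class ClusteredEventBroadcaster {

    @Inject
    private ClusterService clusterService; // hypothetical messaging facade

    // an observer of Object is notified of every CDI event fired in the container
    public void onEvent(@Observes Object event) {
        if (event.getClass().isAnnotationPresent(Clustered.class)) {
            clusterService.broadcast("cdi-events", serialise(event));
        }
    }

    private byte[] serialise(Object event) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(event);
        } catch (IOException e) {
            throw new IllegalStateException("Unable to serialise clustered event", e);
        }
        return bytes.toByteArray(); // safe: the ObjectOutputStream is flushed on close
    }
}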
3. On each other node, we receive this message, deserialise it, and fire it as a regular CDI event (reproducing it): [check the code]
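And the receiving side, sketched (note that a real implementation must mark re-fired events somehow, so they are not observed and broadcast again in a loop):

import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@ApplicationScoped
public class ClusteredEventConsumer {

    @Inject
    private Event<Object> eventBus; // CDI resolves observers by the runtime type of the payload

    public void onClusterMessage(byte[] payload) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(payload))) {
            Object event = in.readObject();
            eventBus.fire(event); // local observers see it as a regular CDI event
        } catch (Exception e) {
            throw new IllegalStateException("Unable to deserialise clustered event", e);
        }
    }
}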
So basically, what my invention does is observe all CDI events; if a CDI event object’s type has the @Clustered annotation, we send a serialised cluster message with all the event data, deserialise it on the other nodes, and regenerate it as a new CDI event.
CDI Events Distribution in a Cluster Environment via Metaprogramming
With this invention, it doesn’t matter whether the event was fired on the local instance or on another node: all CDI observers receive the same event data, making the CDI event programming model that works on a single instance work almost transparently in cloud environments.
Pull Request
If you are curious about the full details, you can take a look at the full PR of this solution.
Takeaways
In the end, I was really happy when I figured out this simple and elegant solution.
As I already mentioned, with this invention our team is able to use the CDI programming model to distribute events in both single-node and cluster environments, reducing the complexity of our codebase.
A future improvement would be to use the annotation processing framework to generate specific observers instead of observing all CDI events. This could be a good and fun contribution to our codebase; if you are interested, ping me!
Thanks for reading! I hope this could be useful for you — or just fun to read 😉 ! 💖
Last week, Alex Porcelli and I had the opportunity to present at JavaOne San Francisco 2017 two talks related to our work: “5 Pillars of a Successful Java Web Application” and “The Hidden Secret of Java Open Source Projects”.
It was great to share our cumulative experience over the years building the workbench and the web tooling for the Drools and jBPM platform and both talks had great attendance (250+ people in the room).
In this series of posts, we’ll detail our “5 Pillars of a Successful Java Web Application”, trying to give you an overview of our research and also a taste of participating in a great event like Java One.
There are a lot of challenges related to building and architecting a web application, especially if you want to keep your codebase updated with modern techniques without throwing away a lot of your code every two years in favor of the latest trendy JS framework.
In our team we are able to successfully keep a 7+ year old Java application up-to-date, combining modern techniques with a legacy codebase of more than 1 million LOC, with an agile, sustainable, and evolutionary web approach.
More than just choosing and applying any web framework as the foundation of our web application, we based our web application architecture on 5 architectural pillars that proved crucial for our platform’s success. Let’s talk about them:
1st Pillar: Large Scale Applications
The first pillar is that every web application architecture should be concerned about the potential of becoming a long-lived and mission-critical application, or in other words, a large-scale application. Even if your web application is not exactly big like ours (1M+ lines of web code, 150 sub-projects, 7+ years old), you should be concerned about the possibility that your small web app will become a big and important codebase for your business. What if your startup becomes an overnight success? What if your enterprise application needs to integrate with several external systems?
Every web application should be built as a large-scale application because it is part of a distributed system and it is hard to anticipate what will happen to your application and company in two to five years.
And for us, a critical tool for building these kinds of distributed and large-scale applications throughout the years has been static typing.
Static Typing
The debate of static vs. dynamic typing is very controversial. People who advocate in favor of dynamic typing usually argue that it makes the developer’s job easier. This is true for certain problems.
However, static typing and a strong type system, among other advantages, simplify identifying errors that can generate failures in production and, especially for large-scale systems, make refactoring more effective.
Every application demands constant refactoring and cleaning. It’s a natural need. For large-scale ones, with codebases spread across multiple modules/projects, this task is even more complex. The confidence when refactoring is related to two factors: test coverage and the tooling that only a static type system is able to provide.
For instance, we need a static type system in order to find all usages of a method, in order to extract classes, and most importantly to figure out at compile time if we accidentally broke something.
But we are in web development and JavaScript is the language of the web. How can we have static typing in order to refactor effectively in the browser?
Using a transpiler
A transpiler is a type of compiler that takes the source code of a program written in one programming language as its input and produces equivalent source code in another programming language.
This is a well-known Computer Science problem and there are a lot of transpilers that output JavaScript. In a sense, JavaScript is the assembly of the web: the common ground across all the web ecosystems. We, as engineers, need to figure out what is the best approach to deal with JavaScript’s dynamic nature.
A Java transpiler, for instance, takes the Java code and transpiles it to JavaScript at compile time. So we have all the advantages of a statically-typed language, and its tooling, targeting the browser.
Java-to-JavaScript Transpilation
The transpiler that we use in our architecture is GWT. This choice is a bit controversial, especially because the GWT framework was launched in 2006, when the web was a very different place.
But keep in mind that every piece of technology has its good parts and bad parts. For sure there are some bad parts in GWT (like the Swing-style widgets and the multiple permutations per browser/language), but for our architecture what we are trying to achieve is static typing on the web, and for this purpose the GWT compiler is amazing.
Our group is part of the GWT steering committee, and the next generation of GWT is all about JUST these good parts: basically removing or decoupling the early-2000s legacy and keeping only the good parts. In our opinion, the best parts of GWT are:
Java-to-JavaScript transpiler: extreme JavaScript performance due to compiler optimizations, plus static typing on the web;
java.* emulation: excellent emulation of the main java libraries, providing runtime behavior/consistency;
JS Interop: almost transparent interoperability between Java <-> JavaScript. This is a key aspect of the next generation of GWT and the Drools/jBPM platform: embrace and interop (two-way) with the JS ecosystem. See the small sketch below.
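To make the JS Interop bullet concrete, here is a tiny hypothetical example (not from the Drools/jBPM codebase):

import jsinterop.annotations.JsMethod;
import jsinterop.annotations.JsPackage;
import jsinterop.annotations.JsType;

@JsType // exported: plain JavaScript can call `new Greeter().greet('web')`
public class Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class Browser {
    // the other direction: bind a Java signature to the browser's global alert()
    @JsMethod(namespace = JsPackage.GLOBAL, name = "alert")
    static native void alert(String message);
}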
Google is currently working on a new transpiler called J2CL (short for Java-to-Closure, using the Google Closure Compiler) that will be the compiler used in GWT 3, the next major GWT release. The J2CL transpiler has a different architecture and scope, allowing it to overcome many of the disadvantages of the previous GWT 2 compiler.
Whereas the GWT 2 compiler must load the entire AST of all sources (including dependencies), J2CL is not a monolithic compiler. Much like javac, it is able to individually compile source files, using class files to resolve external dependencies, leaving greater potential for incremental compilation.
These three good parts are great, and in our opinion you should really consider using GWT as a transpiler in your web applications. But keep in mind that the most important point here is that GWT is just our implementation of the first pillar. You can also consider other transpilers like TypeScript, Dart, Elm, ScalaJS, PureScript, or TeaVM.
The key point is that every web application should be handled as a large-scale application, and every large-scale application should be concerned about effective refactoring. The best way to achieve this is using statically-typed languages.
This is the first of three posts about our 5 pillars of successful web applications. Stay tuned for the next ones.
[I would like to thank Max Barkley and Alexandre Porcelli for kindly reviewing this article before publication, contributing to the final text, and providing great feedback.]
The Uberfire Framework has a new extension: Kie Uberfire Social Activities. In this initial version, this Uberfire extension provides an extensible architecture to capture, handle, and present (in a timeline style) configurable types of social events.
Basic Architecture
An event is any type of CDI event and is handled by its respective adapter. The adapter is a CDI managed bean that implements the SocialAdapter interface. The main responsibility of the adapter is to translate a CDI event into a Social Event. This social event is captured and persisted by Kie Uberfire Social Activities in the respective timelines (basically the user and type timelines).
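A rough sketch of what an adapter might look like (the method names and the SocialActivitiesEvent constructor are approximations from memory of the Uberfire docs, and NewProjectEvent is just an example; treat the details as assumptions):

import java.util.Date;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class NewProjectSocialAdapter implements SocialAdapter<NewProjectEvent> {

    @Override
    public Class<NewProjectEvent> eventToIntercept() {
        return NewProjectEvent.class; // the CDI event this adapter listens for
    }

    @Override
    public SocialActivitiesEvent toSocial(Object cdiEvent) {
        NewProjectEvent event = (NewProjectEvent) cdiEvent;
        // translate the CDI event into a social event for the user/type timelines
        return new SocialActivitiesEvent(event.getUser(), "NEW_PROJECT", new Date());
    }
}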
That is the basic architecture and workflow of this tech:
Basic Architecture
Timelines
There are many ways to interact with and display a timeline. This section briefly describes each of them.
Another cool feature is that an adapter can provide its own pluggable URL filters. By implementing the getTimelineFilters method from the SocialAdapter interface, it can do anything it wants with its timeline. These filters are accessible via query parameters, e.g. http://project/social/TYPE_NAME?max-results=1 .
B-) Basic Widgets
Social Activities also includes some basic (extendable) widgets. There are two types of timeline widgets: simple and regular widgets.
Simple Widget
Regular Widget
The “>” symbol on the Simple Widget is a pagination component, and you can configure it through an easy API. With a SocialPaged(2) object you create a pagination with 2 items per page. This object helps you customize your widgets, using the methods canIGoBackward() and canIGoForward() to decide whether to display icons, and forward() and backward() to set the navigation direction.
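A small usage sketch of that pagination API (widget wiring omitted):

SocialPaged paged = new SocialPaged(2); // pages of 2 events each
if (paged.canIGoForward()) {
    paged.forward();   // the next query fetches the following page
}
if (paged.canIGoBackward()) {
    paged.backward();  // or step back one page
}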
The Social Activities component has initial support for avatars. If you provide a user e-mail to the API, the Gravatar image will be displayed in these widgets.
C-) Drools Query API
Another way to interact with a timeline is through the Social Timeline Drools Query API. This API executes one or more DRLs on a timeline over all cached events. It’s a great way to merge different types of timelines.
Followers/Following Social Users
A user can follow another social user. When a user generates a social event, this event is replicated into the timelines of all their followers. Social Activities also provides basic widgets to follow another user, show all social users, and display a user’s following list.
It is important to mention that the current implementation lists social users through a “small hack”: we search the Uberfire default git repository for branch names (each Uberfire user has their own branch) and extract the list of social users.
This hack is needed because we don’t have direct access to the user base (due to the container-based auth).
Persistence Architecture
The persistence architecture of Social Activities is built on two concepts: a local cache and file persistence. The local cache is an in-memory cache that holds all recent social events. These events are kept only in this cache until the max-events threshold is reached. The size of this threshold is configured by the system property org.uberfire.social.threshold (default value 100).
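For example, to raise the threshold to 500 events, you would start the container with the property named above:

-Dorg.uberfire.social.threshold=500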
When the threshold is reached, Social Activities persists the current cache to the file system (the system.git repository, social branch). Inside this branch there is a social-files directory with this structure:
userNames: a file that contains all social user names;
each user has their own file (named after them) that contains a JSON with the user data;
a directory for each social event type;
a directory “USER_TIMELINE” that contains the specific user timelines.
Each directory keeps a file “LAST_FILE_INDEX” that points to the most recent timeline file.
Inside each file, there is a persisted list of Social Events in JSON format:
Separating the JSON entries, there is a HEX marker and the size in bytes of each JSON. The file is read by Social Activities in reverse order.
The METADATA file currently holds only the number of social events in that file (used for pagination support).
It is important to mention that this whole structure is transparent to the widgets and pagination. The file structure and the respective cache are MERGED to compose a timeline.
Clustering
If your application uses Uberfire in a cluster environment, Kie Social Activities also supports distributed persistence. Its cluster sync is built on top of the UberfireCluster support (Apache Zookeeper and Apache Helix).
Each node broadcasts social events to the cluster via a cluster message SocialClusterMessage.NEW_EVENT containing the Social Event data. With this message, all the nodes receive the event and can store it in their own local caches. At that point, all node caches are consistent.
When a node’s cache reaches the threshold, it locks the filesystem to persist its cache. Then the node sends a SOCIAL_FILE_SYSTEM_PERSISTENCE message to the cluster, notifying all the nodes that the cache has been persisted to the filesystem.
If any node receives a new event during this persistence process, the stale event is merged during the sync.
Stress Test and Performance
In my GitHub account there is an example stress test class used to test the performance of this project. This class hasn’t been imported into our official repository.
That test found that Social Activities can write ~1000 events per second on my personal laptop (MacBook Pro, Intel Core i5 2.4 GHz, 8 GB 1600 MHz DDR3, SSD). In a single-instance environment, it wrote 10k events in 7 s, 100k in 48 s, and 500k events in 512 s.
Demo
A sample project of this feature can be found in my GitHub account, or you can just download and install the war of this demo. Please note that this repository has moved from my account to our official Uberfire extensions repository.
Roadmap
This is an early version of Kie Uberfire Social Activities. In the next versions we plan to provide:
A “Notification Center” tool, inspired by OSX notification tool; (far term)
Integrate this project with Dashbuilder KPIs; (far term)
A purge tool, able to move old events from filesystem to another persistence store; (short term)
In this version we only provide basic widgets; we need to create a way to allow the use of customized templates in these widgets. (near term)
A dashboard to group multiple social widgets. (near term)
If you want to start contributing to Open Source, this is a nice opportunity. Feel free to contact me!