Three integration patterns are presented here. All have worked well in large-scale deployments, which are described below. Every deployment is subject to a certain amount of revision after the initial implementation, and I have described those revisions too: please read them carefully, because they may help your own deployment run more smoothly.
In addition, I have described our ‘Sales Process’. This is not a sales course, but part of the process that must take place for a successful implementation. Even if there is no ‘sale’ in your implementation, these steps still need to be carried out in some form or other.
The Sales Process
The client expresses their need for a rule-based system through a document such as an RFI or RFP. The first step is to verify that the client really needs a rules-based system. If the primary benefits of a rules-based approach (discussed in depth elsewhere on this site) are not present, then the need for a BRMS has to be discussed with that client.
Next, the scale and complexity of the proposed application need to be reviewed. If there are fewer than 30 simple rules, then a BRMS is probably overkill. Equally, if the client is expecting vision recognition or stock-market prediction, the project is unlikely to succeed within a reasonable budget.
If the rules already exist, in a number and of a type well suited to the BRMS in view, then a proposal can be prepared. As part of the proposal, we always prepare a prototype, or PoC (Proof of Concept).
Generally, with a BRMS, this is practical: it is an order of magnitude easier to enter rules than to code up an application with UI/UX and a database.
Our PoC is there to prove two things, complexity and scale: can the BRMS cope with the sheer complexity of the proposed application, and can it cope with the application's scale? To put a convincing case forward, we ask the client for the fragment of the application that is the most complex. Equally, we ask for a part of the application that is complete and self-contained. We then build both as PoCs.
For the PoC, the rules are built to the final level of quality, and indeed are often absorbed into the project as completed rulesets. The other components are generally stubs, except where it is necessary to prove some connectivity. So, for example, if the connectivity is to be through FUSE or MQ, then that is connected to the BRMS so that it can pass transactions to, and receive them from, stub endpoints.
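To make that concrete, here is a minimal sketch of what such stub connectivity might look like, assuming a JMS-compatible broker (ActiveMQ stands in for MQ/FUSE here); the queue names and the RulesService wrapper are illustrative assumptions, not part of any product.

```java
// Hypothetical PoC wiring: one listener takes a transaction off a request queue,
// hands it to the rules service (the only "real" component in the PoC), and puts
// the result on a stub reply queue.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PocStubEndpoint {

    /** Thin wrapper around the BRMS; in the PoC this delegates to the real rule engine. */
    interface RulesService {
        String assess(String requestXml);
    }

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer requests = session.createConsumer(session.createQueue("poc.request"));
        MessageProducer replies = session.createProducer(session.createQueue("poc.reply"));

        RulesService rules = requestXml -> "<assessment/>";   // placeholder for the BRMS call

        while (true) {
            TextMessage request = (TextMessage) requests.receive();
            String resultXml = rules.assess(request.getText());
            replies.send(session.createTextMessage(resultXml));
        }
    }
}
```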
Once that is done, performance figures, stability statistics, and so on can be provided to the client.
Some factors lie outside these PoCs and have to be covered differently.
Memory Leaks, Crashes, Dumps, Loops
Every BRMS will at some point experience these kinds of failures. Typical causes include:
- New version of BRMS has a bug
- Developer hits an obscure bug with a new rule
- Developer creates a rule that consumes excessive resources
- Developer creates a rule that fails to release resources on completion
All these events have to be handled in a graceful way, and explained in the proposal, so infrastructure can be put in place and tested.
Typical requirements might be:
- Reboot the servers once a day
- Staggered reboot on demand possible
- Able to switch back to prior version quickly
- Able to stop/start the rules engine gracefully without losing transactions (sketched below)
- Able to rollback and rerun transactions
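To illustrate the stop/start requirement, here is a minimal sketch assuming a JMS queue sits in front of the rules engine; the queue name and the assess() call are placeholders. Each transaction is consumed in a transacted session, so anything uncommitted at the moment of a stop, crash, or rule failure simply stays on the queue to be rerun.

```java
// Sketch of graceful stop without losing transactions: commit only after the rules
// have run; roll back (or leave uncommitted) on failure or shutdown.
import java.util.concurrent.atomic.AtomicBoolean;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class GracefulWorker implements Runnable {

    private final AtomicBoolean stopping = new AtomicBoolean(false);
    private final ConnectionFactory factory;

    GracefulWorker(ConnectionFactory factory) { this.factory = factory; }

    /** Called by the operations "stop" hook (or a staggered-reboot script). */
    public void requestStop() { stopping.set(true); }

    @Override
    public void run() {
        try {
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer requests = session.createConsumer(session.createQueue("brms.request"));

            while (!stopping.get()) {
                TextMessage msg = (TextMessage) requests.receive(1000);   // poll so stop is honoured
                if (msg == null) continue;
                try {
                    assess(msg.getText());          // hand the transaction to the rules engine
                    session.commit();               // only now is it removed from the queue
                } catch (Exception rulesFailure) {
                    session.rollback();             // leave it queued for rollback/rerun
                }
            }
            connection.close();                     // current transaction finished; safe to stop
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void assess(String requestXml) { /* placeholder for the BRMS invocation */ }
}
```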
Performance Testing
For larger systems we recommend clients avail themselves of a test lab. Credible hardware vendors have test labs where an application can be tried on various configurations to find the best one. The PoC provides an ideal tool for doing this, because it is a good model for the final system.
Cloud-based solutions can simply try out various configurations on the vendor's cloud.
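As a rough illustration of the kind of harness we point at the PoC in a test lab or cloud trial, here is a sketch; send() is a placeholder for whatever client (MQ, HTTP, and so on) the PoC exposes, and the thread and iteration counts are arbitrary.

```java
// Sketch of a load driver: replay captured transactions from several threads and
// report throughput and worst-case latency for each hardware/cloud configuration.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadDriver {

    static void send(String transactionXml) { /* placeholder: call the PoC endpoint */ }

    public static void main(String[] args) throws InterruptedException {
        List<String> transactions = List.of("<tx id='1'/>", "<tx id='2'/>");   // captured samples
        int threads = 16, iterations = 10_000;
        AtomicLong worstNanos = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        long started = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            String tx = transactions.get(i % transactions.size());
            pool.submit(() -> {
                long t0 = System.nanoTime();
                send(tx);
                worstNanos.accumulateAndGet(System.nanoTime() - t0, Math::max);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        double seconds = (System.nanoTime() - started) / 1e9;
        System.out.printf("%d tx in %.1fs (%.0f tx/s), worst latency %.1f ms%n",
                iterations, seconds, iterations / seconds, worstNanos.get() / 1e6);
    }
}
```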
Architecture 1: Isolated stateless SOA
Here the rules engine runs on a series of servers connected with an XML queue/pipe. MQ is an excellent product for this, along with the various flavours offered by other vendors and cloud suppliers.
It is critically important that the pipe queues transactions, so that none are ever lost.
It is also important that the rules engine is not coupled in any way to the external systems. If it is, then one of the major advantages of the BRMS is lost: the ability to make quick changes. Why? Because the BRMS has become dependent on external components that cannot be changed easily.
XML is an excellent transport mechanism because it decouples any external references. A great strength of OO languages is the ability to pass objects by reference rather than copying them around, but for a BRMS it is a terrible weakness: the rules become inextricably linked with class definitions, to the point where rule changes can only proceed at the same slow pace as ordinary software changes, and the major BRMS advantage of rapid innovation is lost.
XML is the method of choice because of its robustness to change.
This architecture is well suited to applications that take in a lot of information to process but give very little back: usually accept/refuse, a single monetary amount, and rich error/guidance messages. Typical examples are government departments calculating eligibility and insurance companies calculating premiums.
It is unsuitable for configuration applications.
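To make the ‘lots in, little out’ contract concrete, here is a sketch of the small response that might go back onto the reply queue; the element and field names are purely illustrative.

```java
// Illustrative response shape for Architecture 1: accept/refuse, one amount, and
// rich guidance messages, serialised as flat XML with no shared class dependencies.
import java.math.BigDecimal;
import java.util.List;

public record AssessmentResponse(boolean accepted, BigDecimal amount, List<String> messages) {

    /** Serialise to the flat XML placed on the reply queue (a real version would escape text). */
    public String toXml() {
        StringBuilder xml = new StringBuilder("<assessment>");
        xml.append("<accepted>").append(accepted).append("</accepted>");
        xml.append("<amount>").append(amount).append("</amount>");
        for (String message : messages) {
            xml.append("<message>").append(message).append("</message>");
        }
        return xml.append("</assessment>").toString();
    }

    public static void main(String[] args) {
        System.out.println(new AssessmentResponse(true, new BigDecimal("1234.50"),
                List.of("Premium loaded 10% for postcode")).toXml());
    }
}
```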
Deployments:
Govt Tax Department. 100 rulesets, thousands of rules. Various front-ends, including a public website. Big volumes.
Govt Health Dept. Calculates all payments (doctors, pharmacies, hospitals…). 64M transactions p/a.
Architecture 2: Tight Integration
This is an alternative to Architecture 1, applicable to the same types of system: lots of data in, a complex assessment, and a little data back. The difference is that the entire system is built around the rules engine. This requires a much smaller build, but it needs a highly competent team and a highly qualified product: probably only ILOG fits the bill. A great deal of analysis is required to be sure that the supplementary functions can fit around, and be closely integrated with, the rules engine, with possible compromises in some areas. The upside of this architecture is a lightweight solution at very low cost. Some government departments have taken this risk, and the result can be a system widely recognised as best of breed.
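To show the in-process style concretely, here is a sketch; the deployments described here used ILOG, but the illustration below uses the open-source Drools (KIE) API instead, and the session name and Claim fact are assumptions.

```java
// Sketch of the "built around the rules engine" style: the application holds the
// engine in-process and calls it directly, rather than decoupling through a queue.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class EmbeddedAssessment {

    /** Illustrative fact class; the rules read amount and set approved. */
    public static class Claim {
        private final String id;
        private final double amount;
        private boolean approved;
        public Claim(String id, double amount) { this.id = id; this.amount = amount; }
        public String getId() { return id; }
        public double getAmount() { return amount; }
        public boolean isApproved() { return approved; }
        public void setApproved(boolean approved) { this.approved = approved; }
    }

    public static void main(String[] args) {
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        KieSession session = container.newKieSession("assessment");   // rules packaged with the app
        try {
            Claim claim = new Claim("C-123", 4200.0);
            session.insert(claim);            // facts go straight into working memory
            session.fireAllRules();           // the rest of the system is shaped around this call
            System.out.println("Approved: " + claim.isApproved());
        } finally {
            session.dispose();
        }
    }
}
```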
Deployments:
Govt EU CAP processor: recognised as best in Europe. Modest volumes, but complex and time-critical.
Architecture 3: Configurator
This architecture is for a different type of application, where the user interacts intensively with the rules and data. Architectures 1 and 2 are unsuitable because they assume a transaction-in, transaction-out style of processing. The person using a configurator is interacting with every mouse click or gesture. A key pointer that this type of architecture is required is a stated need for controls on the screen to grey out dynamically, as sketched below.
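As a hypothetical illustration of that per-gesture interaction: the current selections go to the rules on every click, and per-control states (enabled or greyed, allowed values) come back for the UI to re-render. All names and the example rule below are invented.

```java
// Sketch of one configurator round trip: selections in, control states out.
import java.util.Map;
import java.util.Set;

public class ConfiguratorSketch {

    /** What the UI needs to render one control after a rule pass. */
    record ControlState(boolean enabled, Set<String> allowedValues) {}

    /** Stand-in for the rules engine call made on every mouse click or gesture. */
    static Map<String, ControlState> evaluate(Map<String, String> selections) {
        // Illustrative rule: choosing a "rack" chassis greys out the battery option.
        boolean batteryAllowed = !"rack".equals(selections.get("chassis"));
        return Map.of(
            "chassis", new ControlState(true, Set.of("rack", "desktop")),
            "battery", new ControlState(batteryAllowed,
                    batteryAllowed ? Set.of("none", "4h", "8h") : Set.of())
        );
    }

    public static void main(String[] args) {
        // One gesture: the user picks a rack chassis; the UI re-renders from the result.
        System.out.println(evaluate(Map.of("chassis", "rack")));
    }
}
```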
There are very few BRMS that can address this type of application successfully.
[Ideally, one might conjecture that an engine based on JavaScript, able to seamlessly move rules out to the client via AJAX to service these dynamic requirements, would be best; but sadly, as far as I know, no such engine exists.]
Successful solutions have run server farms of Windows-based apps delivered over Citrix WinFrame, and others have used Silverlight. Cincom has a strong lead in this specialist area.
Deployments:
Alcatel telephone exchange design and provisioning tool