Erudine is a British company, a few years old, that has released some new technology with a new process context – the Erudine Behaviour Engine (yes, the British spelling). Like many technologies, Erudine targets the business-IT divide, focusing on problems such as translating requirements into systems, integrating the expertise of many people (analysts, designers, developers) and communication. Besides the problems these issues cause in building a first version, constant change tends to make functionality drop steadily behind requirements over time. This is exacerbated by problems of knowledge retention – through the lifetime of a commercial system, knowledge is lost (through retirement or resignation, but also through the passage of time) and so must be rebuilt for each new release at additional cost. At the heart of this problem is the basic fact that there is a lot of knowledge that must be extracted and turned into the new system – legacy code, expertise, policies, regulations and so on. Erudine's perspective is also that while writing code is quick, checking it and confirming that it is complete and correct is much harder and slower, especially when one has to consider the implications and consequences of a change.
Erudine focuses on tacit knowledge (rather than explicit knowledge) and develops the behavior model of an application by looking at real cases, asking those specifying the system to say what the system should do in each case and then justify it. This is test-driven design on steroids – developing business behavior by starting from the answer we want and moving to why that is the answer, one functional point at a time. This is a very different approach from the more explicit-knowledge approach taken by business rules management systems. Some critical facts about Erudine:
- Each functional point can be an example case, a test case or a real transaction.
- Development rapidly focuses on the exceptions and exceptions to exceptions – and the consequences of those exceptions.
- Conceptual graphs are used to present data in a visual way.
- This data is described using an ontology.
A demonstration of a customs border example showed how some of this works. The system is designed to help a customs officer decide what action to take in response to a particular person trying to cross a border. In many ways the whole example is a decision service. The first design step would be to lay out the data flow – specifying how to get data from data sources and carry out data cleansing, enrichment and integration activities. This decision flow, if you will, also handles the sequencing of steps and the specification of behavior steps or decisions. The decision node in the example is designed to choose one of four actions – arrest, deny, accept or detain.
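The shape of such a decision flow can be sketched in code. Everything below – the field names, the step functions and the conditions inside the decision node – is invented for illustration and is not Erudine's API:

```python
# Illustrative sketch of a decision flow: data-preparation steps feed a
# decision node that picks one of the four actions. All names invented.

def cleanse(record):
    # Normalize incoming data, e.g. trim stray whitespace from strings.
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in record.items()}

def enrich(record):
    # Merge in data from another source (stubbed here as a default flag).
    record.setdefault("watchlist_hit", False)
    return record

def decide(record):
    # The decision node: choose one of four actions.
    if record.get("watchlist_hit"):
        return "arrest"
    if not record.get("has_visa"):
        return "deny"
    if record.get("needs_secondary_check"):
        return "detain"
    return "accept"

def decision_flow(record):
    # Sequence the preparation steps, then make the decision.
    for step in (cleanse, enrich):
        record = step(record)
    return decide(record)
```

A hypothetical call such as `decision_flow({"name": " Ana ", "has_visa": True})` would flow through cleansing and enrichment before reaching the decision node.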
Before the “rules” can be specified, an ontology must exist. This can be loaded from OWL if you already have one defined, and it can be completed during development – you could start with a basic ontology and then refine it as you work through cases, for instance. Data can be mapped directly from databases (although more complex entities must be mapped in using Java Hibernate classes). So far this seems like development work, but we have not got to the clever bit yet.
Once you reach this point, the decision node can be specified by non-technical users. Business experts can take a list of situations (prior instances from a legacy application, test cases, formal examples or whatever) and view each one using the conceptual graph. For each instance the user specifies the decision they would take – what their conclusion would be for that instance – and then explains why. This is done in a point-and-click way using the conceptual graph. For instance, they might take a record representing a person in this example and say the person is allowed in because they have a visa. This creates what they call a cornerstone, or unit test. Both the structure of the data and its values can be used in these rules. This is a powerful approach because it is often much easier for an expert to explain an example and their reasoning than to specify a general rule or requirement.
The expert then goes on to repeat this for subsequent instances. Each subsequent rule must be compliant with all previous cornerstones (unless you wish to change your mind about a rule) – the first case cannot be changed to a different result by the second set of behavior, for instance. The editor won’t allow a subsequent condition to contradict a prior one.
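A minimal sketch of this cornerstone discipline, assuming a simple condition-action rule list in which newer rules take precedence (so exceptions can override earlier behavior). This is an assumption-laden illustration, not Erudine's implementation:

```python
# Toy cornerstone model: every accepted case becomes a regression test,
# and a new rule is admitted only if all prior cornerstones still get
# their original answers. Names and structure are invented.

class CornerstoneModel:
    def __init__(self, default="accept"):
        self.rules = []          # newest rule first, so exceptions win
        self.cornerstones = []   # (case, expected conclusion) pairs
        self.default = default

    def decide(self, case, extra_rule=None):
        # Optionally try a candidate rule ahead of the accepted ones.
        rules = ([extra_rule] if extra_rule else []) + self.rules
        for condition, conclusion in rules:
            if condition(case):
                return conclusion
        return self.default

    def add_rule(self, condition, conclusion, case):
        candidate = (condition, conclusion)
        # Reject the rule if it would flip any earlier cornerstone.
        for prior_case, expected in self.cornerstones:
            if self.decide(prior_case, extra_rule=candidate) != expected:
                raise ValueError("rule contradicts a prior cornerstone")
        self.rules.insert(0, candidate)
        self.cornerstones.append((case, conclusion))
```

For example, after accepting "has a visa, so allow in" as a cornerstone, an over-broad later rule that would arrest everyone – including the visa holder – is rejected by the editor-style check.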
The ontology comes into play by allowing the user to generalize a reason. For instance, a person might be arrested rather than allowed in. This person might be carrying cocaine, but the expert knows that cocaine is an illegal cargo (in the ontology) and so specifies that carrying an illegal cargo gets you arrested. Similarly, they might specify that it does not matter that this particular case involved a truck – it could be any vehicle.
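One way to picture this generalization step is with a small is-a hierarchy standing in for the ontology. The hierarchy and all the terms in it are invented for illustration:

```python
# Invented is-a hierarchy: each term maps to its parent concept.
ONTOLOGY = {
    "cocaine": "illegal_cargo",
    "heroin": "illegal_cargo",
    "illegal_cargo": "cargo",
    "truck": "vehicle",
    "car": "vehicle",
}

def is_a(term, concept):
    # Walk up the is-a hierarchy from the term toward the root.
    while term is not None:
        if term == concept:
            return True
        term = ONTOLOGY.get(term)
    return False

def action_for(case):
    # The generalized rule: carrying any illegal cargo, in any kind of
    # vehicle, leads to arrest - not just cocaine in a truck.
    if is_a(case.get("cargo"), "illegal_cargo") and \
       is_a(case.get("vehicle"), "vehicle"):
        return "arrest"
    return "accept"
```

The expert's specific case (cocaine in a truck) and a structurally similar one (heroin in a car) now fall under the same generalized rule.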
As you watch the tool work it seems pretty clear that common cases would be found quickly – the 80/20 rule would play in your favor – and that you would rapidly get all the basic conditions handled. Using Erudine to clone and replace a legacy system allows you to compare the current definitions to logs or results tables. This shows the historical entries where Erudine and the current system differ, allowing clarifying rules to be specified to eliminate the differences.
The resulting “rules” can be very complex – but they are specified “by example,” remember, so this complexity would not necessarily be visible. Examples of rules might be:
If there is a School with a Child over the Age of 8
– and that Child has a sister in the school below the Age of 7
– and the Sister shares a Class with a Boy who has a Grade A average
– and this Boy is in the same sports team as the first child
If there is a network Node under attack
– and the type of attack is a Denial of Service
– and the attack originates from outside our Secure Network
– and the Node hosts a Technical Service
– and that Technical Service supports a Business Service
– and that Business Service has an SLA level of Critical
– and we have Backup Virtualization Servers available
Then rehost the Technical Service onto a new server
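To give a feel for the structural matching such a rule involves, here is a hedged sketch of the network-attack rule as code. The object model (node, technical service, business service) is invented to mirror the rule's chain of conditions:

```python
# Invented object model mirroring the network-attack rule's structure.
from dataclasses import dataclass

@dataclass
class BusinessService:
    sla: str                     # e.g. "critical"

@dataclass
class TechnicalService:
    supports: BusinessService    # the business service it underpins

@dataclass
class Node:
    under_attack: bool
    attack_type: str             # e.g. "denial_of_service"
    attack_origin: str           # "external" or "internal"
    hosts: TechnicalService

def should_rehost(node, backup_servers_available):
    # Each conjunct corresponds to one "and" line of the rule above.
    return (node.under_attack
            and node.attack_type == "denial_of_service"
            and node.attack_origin == "external"
            and node.hosts.supports.sla == "critical"
            and backup_servers_available)
```

The point is not the code itself but that each condition navigates linked objects – exactly the kind of traversal the conceptual graph lets an expert do by pointing and clicking rather than by programming.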
The combination of the ontology and the graphical environment for specifying rules by example allows for complex objects to be manipulated using complex rules.
The tool had another nice feature allowing you to map these cornerstones or rules to a requirements document defined in the tool. As you create cases you can refine the requirements and link them, and you can see requirements without tests and vice versa. This, combined with the other features, has resulted in some customers using Erudine purely as a behavior or requirements capture tool, or to learn the behavior of an existing system with which they are less familiar than they would like.
Erudine behavioral ‘services’ are stored in a Knowledge Model (KNM) file that contains all the behavior, the ontology and the requirements links. Access to resources is through logical links, with the actual links defined in a configuration file for deployment. Generally these resources will change through the various staging environments of a project while the logical connections do not. This allows the same KNM file to be staged through environments without change. Versioning is usually handled through a standard versioning repository, providing fairly coarse-grained version control (though finer-grained control is under development). Debugging is very visual – the path through the data flow model can be examined interactively for a problematic transaction. At each node the behavior rules that fire can be interrogated, and even the requirements that the behavior satisfies can be queried.
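The logical-versus-physical resource split can be illustrated with a toy resolver. The environment names and endpoint strings below are assumptions for illustration, not Erudine's configuration format:

```python
# Per-environment bindings for logical resource names. The behavior
# model only ever refers to "customs_db"; the physical endpoint is
# resolved at deployment time. All values here are invented.
CONFIGS = {
    "dev":  {"customs_db": "jdbc:postgresql://dev-db/customs"},
    "test": {"customs_db": "jdbc:postgresql://test-db/customs"},
    "prod": {"customs_db": "jdbc:postgresql://prod-db/customs"},
}

def resolve(logical_name, environment):
    # The same model file can move dev -> test -> prod unchanged,
    # because only this binding varies per environment.
    return CONFIGS[environment][logical_name]
```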
I found the product very intriguing and I hope to work with it some more. Check it out at www.erudine.com.
The type of case-based incremental development outlined here and described in the Erudine white papers is elsewhere known as Ripple-Down Rules, and has been well validated in a number of application areas. In fact, Erudine was previously known as Rippledown Solutions (try accessing http://www.rippledownsolutions.com or looking for it on the Wayback machine). Perhaps the Erudine approach was developed independently, but the core idea so closely matches Ripple-Down Rules, that reference to earlier Ripple-Down Rule outcomes, might strengthen Erudine’s claims about the value of this sort of incremental development. I have written to them about connections with Ripple-Down Rules, but did not hear back.
A search for Ripple-Down Rules finds thousands of links. There are also a few starter papers on my web page, as well as links to other companies with Ripple-Down Rule systems. One of these, Pacific Knowledge, has focussed mainly on medical applications. Some of its customers have developed systems with over ten thousand rules using its Labwizard product. Sonetto, a system integrating Ripple-Down Rules and conceptual graphs like Erudine’s, has been developed by Ivis. A paper by Sarraf and Ellis describes how Sonetto is used by Tesco.com in customising its B2B and B2C systems. Ripple-Down Rules are also used in workflow systems, text processing and a range of other areas. The key difference of the Erudine system from Ripple-Down Rule developments seems to be that it is targeted at general system development, for example in rebuilding legacy systems. This is an interesting new application area for the Ripple-Down Rule approach.
ps I have a small shareholding in Pacific Knowledge Systems
I added a link to this story as a comment to a blog posting “review” I provided for the HP NonStop (Tandem) community … it makes some points a lot better than I had done.