Notes from DecisionCAMP-2019

DecisionCAMP-2019 in beautiful Bolzano on Sep. 17-19 will go down in history as one of the most successful DecisionCAMPs. As usual, it was packed with interesting presentations and even more interesting formal and informal discussions. I will try to write down some notes from this event while my memory is still fresh.

What Are We Doing?

In my brief kick-off presentation, I tried to address this question by quoting Prof. Gene Freuder:

“We are in the business of helping computers to help people make better decisions.”
— “In Pursuit of the Holy Grail”, Gene Freuder, 1997

Modern Decision Management products help customers develop decision-making applications by creating Operational Decision Services. So, I suggested the following high-level overview:

These days decision-making applications need to be highly intelligent and run perpetually, handling live streams of “big” data, reacting to real-time events, maintaining smart business processes, and orchestrating multiple decision services. “Our customers don’t want to wait seconds for our services to start up anymore. They are no longer satisfied with 100 milliseconds per transaction and request rules-based decisions to be produced within 10 milliseconds.” Along with super-fast performance, the new requirements include low memory footprints, high availability, security, migration from monolithic architectures to well-orchestrated microservices, and support for new cloud-based pricing models. Applying these requirements to operational decision services, I suggested looking at them from 3 perspectives:

  • Business Knowledge Representation (before automation)
  • Executable Business Decision Models
  • Deployed Decision Services.

Here are some key issues to be considered and how they were presented at DecisionCAMP-2019:

How are these 3 dimensions connected? Unfortunately, despite some progress with NLP, we still have no practical tools to move Business Knowledge to Executable Decision Models. Most decision engines that execute business decision models rely on run-time interpretation. Such implementations are no longer acceptable if we want to address the new performance requirements. That’s why we see a strong movement toward code generation that supports super-fast start-up and transaction execution times. The deployment options are changing at the speed of light: from deployment on web servers we quickly moved to cloud containers, and now the Serverless Architecture is taking over the world.
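To make the interpretation-vs-generation trade-off concrete, here is a minimal, purely hypothetical sketch in Java: the same age-based discount decision, first interpreted from a rule table on every call, then as the kind of straight-line code a design-time generator could emit. The rule table, class, and method names are invented for illustration and are not the output of any actual product.

```java
// Hypothetical example: the same age-based discount decision,
// first interpreted from a rule table at run time, then as the kind
// of straight-line code a generator could emit at design time.
public class DiscountDecision {

    // Rule table: {minAge, maxAge, discountPercent}
    private static final int[][] RULES = {
        {0, 17, 0},
        {18, 64, 10},
        {65, 150, 25}
    };

    // Run-time interpretation: scan the rule table on every call.
    public static int interpret(int age) {
        for (int[] rule : RULES) {
            if (age >= rule[0] && age <= rule[1]) {
                return rule[2];
            }
        }
        throw new IllegalArgumentException("no rule matched age " + age);
    }

    // Design-time "generated" equivalent: no table scan, no reflection,
    // just branches the compiler and JIT can optimize aggressively.
    public static int generated(int age) {
        if (age <= 17) return 0;
        if (age <= 64) return 10;
        return 25;
    }
}
```

Both methods return the same decisions; the generated form simply pays no run-time price for the table’s flexibility, which is the essence of the shift described above.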


I won’t comment on every presentation, for two reasons: all presentation slides are available here, and, thanks to Sandy Kemsley, this year we had almost real-time blogs that you can find here:

However, nothing can replace the live presentations with tough questions and answers that led to heated discussions. There are several topics and catchphrases that stay in my memory and I will try to cover them below.

DMN 2.0?

Like in previous years, the DMN standard was once again a frequently discussed topic. While some people complained that DMN is still unknown to many potential customers, others stressed the real-world use of DMN and its supporting tools (even if they do not match the CL3 compliance level). To defend DMN’s popularity, I noted that in the last few years the RFPs we as vendors receive from major potential customers almost always include DMN-related questions.

Many presenters made concrete suggestions on how to further empower DMN. But during the Q&A panel on Sep. 17, Prof. Jan Vanthienen warned that “a standard is always a compromise between easiness and power” and that a shift in any direction may dramatically affect its usability. I was really glad to hear Jan say in his presentation that he would limit DMN’s single-hit and multi-hit policies to at most one or two.

Then Gary Hallmark, the main force behind DMN FEEL and the implementer of the Oracle DMN Modeler, presented a long-awaited talk, “DMN 2.0? Experience from DMN 1.0-1.3”. It was Gary himself who added the question mark to the title. He listed many possible new features that could be added to DMN based on the requests that have been kept in the DMN Revision Task Force backlog for years. I’d especially recommend looking at Gary’s suggestions for “a common item definition model”, which comes close to an explicit definition of the business glossary, and a new representation of loops (see Slide 17). I’d still prefer to do it using regular DMN-like decision tables, but this is certainly progress. Gary also spoke about recursion, case-insensitive names, the impossibility of constraint solving within the DMN scope, and other interesting topics. However, he didn’t insist on any of them and presented them more as discussion points.

Then the biggest surprise came during the panel “Ask a Vendor” when the moderator (Sandy Kemsley) asked all vendors to answer the question “Do we need DMN 2.0?” We had 7 vendors on the panel: Larry Goldberg (Sapiens), Fernando Donati (FICO), Gary Hallmark (Oracle), Denis Gagne (Trisotech), Mario Fusco (Red Hat), Guilhem Molines (IBM), and Jacob Feldman (OpenRules). While Larry started to say that he never liked DMN FEEL but would still support DMN 2.0 anyway, I began to worry that once again I’d have to break the peacefulness by saying that I DO NOT think we need DMN 2.0, at least for now. These thoughts were interrupted by a sudden answer from Gary Hallmark: “I don’t think we should rush with DMN 2.0”. His very specific reasons were supported by Denis Gagne, who said that his long-time experience with different OMG standards tells him that jumping to 2.0 now would only harm DMN acceptance. Guilhem was also pessimistic about 2.0 and recommended considering other business logic expression facilities already implemented and successfully used by major vendors. For me, it was a relief, and I just added that we should rather consider what to remove from the current DMN instead of what to add to it. Many new powerful features that vendors would like to see in the standard don’t have to be added to it – they could instead serve as product differentiators.

Several times during the DecisionCAMP, the question of DMN compliance levels was discussed. Keith Swenson, the chairman of the DMN TCK, insisted that Compliance Level 3, which includes loops, functions, recursion, and other programming constructs, should be mandatory for a vendor to claim DMN compliance. Other people, including representatives of major vendors, argued that if we want to achieve wider real-world acceptance of DMN, there should be a more flexible definition of compliance, as the same capabilities that are expressed in DMN’s complex boxed expressions can also be provided by vendors in a way their customers love much more.


Every time Daniel Schmitz-Hübsch and Ulrich Striffler from Materna share their real-world consulting experience with the DMN standard, they provoke plenty of controversy. If naming their presentation “FEEL, Is It Really Friendly Enough?” was not enough, they also stated: “Having the business users model the details of decisions in FEEL is the biggest issue: basically, you’re asking business people to write code in a script language. Given that most requirements are documented by business users in natural language, there are some obstacles to moving that initial representation to DMN instead.” Naturally, the audience exploded with questions and arguments.

Somebody quoted a lawyer who once said: “We write regulations, but we would never interpret them”. While most attendees agreed about those “evil lawyers” who create documents that can be interpreted in many different ways, Ron Ross later said: “I want to defend the lawyers”. I will try to explain Ron’s point in my own words: lawyers spend 12 years learning how to correctly express different regulations in such a way that other people can understand them. Why aren’t modern software tools capable of doing this? This is the question we should concentrate on instead of offering a “scripting language” to lawyers and other subject matter experts.

Daniel and Ulrich suggested several improvements to DMN FEEL and, in the end, stated that “FEEL is a real benefit for business users”. But calling FEEL “business-friendly” doesn’t make it business-friendly, and we certainly should continue our pursuit of “the holy grail” of representing business logic.

Later on, during the Q&A panel, there was a question about using natural language for business logic representation. Denis Gagne responded: “There is no such thing as natural languages!” He added that current decision management systems all use structured languages. Most DM practitioners agreed with Denis. However, in the last two years we have heard from different sources about serious progress in Natural Language Processing (NLP), but not so much about Natural Language Understanding (NLU). Hopefully, next year we will have real NLP/NLU experts such as Paul Haley among DecisionCAMP’s presenters to learn what to expect in the near future.

On the subject of user-friendliness, I mentioned that DMN-like decision tables are a great way to represent business decision logic: they are intuitive for business users and easily understood by our decision engines. Having a common denominator for different decision tables is already a big achievement of DMN! Many people agreed that we should continue to look for similarly intuitive representations for more complex decisioning constructs, especially those that deal with collections.

During related discussions, I also tried to express a slightly different view. Along with attempts to simplify rule expressions, we should help our business users avoid having to describe all possible situations in rules and minimize the number of rules they must write. To do this, we need more powerful modeling techniques and more powerful decision engines – see below.

Integrated Use of Rules and Machine Learning, Explainability

The keynote “The Future of Enterprise AI and Digital Decisions” given by Forrester’s Mike Gualtieri became an important event in itself, discussed during the breaks and afterward by attendees of all BRAIN-2019 events. Of course, Machine Learning (ML) dominated the discussed topics, but we also had interesting presentations devoted to the integrated use of BR & ML given by Edson Tirelli (DMN + PMML) and researchers from KU Leuven and Saint-Gobain. They led to interesting discussions about decision explainability (Jan Purchase, Silvie Spreeuwenberg, and others).

From Business Knowledge to Executable Decision Models

This was the first time that Ron Ross presented at a DecisionCAMP. People know that Ron is not a fan of DMN because it mainly concentrates on the implementation aspects of decision management and “fails to address a broad range of needs for rule-based solutions”. However, I managed to persuade Ron to come and share what he considers important from a purely business perspective, and I am glad I did. During his presentation “Brainstorming Next-Generation Rule Platforms”, Ron said that instead of being critical he wanted to offer specific suggestions for how BR&DM vendors could close the gaps between decisioning rules and behavioral rules. In particular, he specified two concepts that are usually beyond the scope of traditional decision services:

  1. Flash Points – which I interpret as events that may lead to a violation of the behavioral rules; e.g., a rule states that a client should have an assigned agent, but this agent has just retired (“flash!”)
  2. Watcher – a program, similar to a soccer referee, that automatically handles the flash points and related rule violations.

The behavioral rules require maintaining the state and consistency of the environment within which our operational decision services are being invoked. They should be evaluated automatically when relevant ‘flash points’ occur.

I believe Ron’s “watchers” can be naturally implemented as separate services (they could be called “behavioral services”) that are invoked from stateful decision-making applications under the control of state machines.
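As a thought experiment, Ron’s flash points and watchers might be sketched as a stateless “behavioral service” like the one below. All names (`FlashPoint`, `BehavioralWatcher`) and the retired-agent rule are hypothetical, taken from the example above rather than from any real product.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a "watcher" realized as a stateless behavioral
// service. A flash point (here: an agent retiring) arrives as an event;
// the watcher re-evaluates the behavioral rule "every client must have
// an active agent" and reports any violations. The current assignments
// are passed in with the event, so the service itself keeps no state.
public class BehavioralWatcher {

    // A flash-point event: something changed that may break a rule.
    public record FlashPoint(String retiredAgentId) {}

    public static List<String> onFlashPoint(FlashPoint event,
                                            Map<String, String> clientToAgent) {
        List<String> violations = new ArrayList<>();
        for (Map.Entry<String, String> entry : clientToAgent.entrySet()) {
            if (entry.getValue().equals(event.retiredAgentId())) {
                violations.add("Client " + entry.getKey()
                        + " lost its assigned agent " + event.retiredAgentId());
            }
        }
        return violations;
    }
}
```

A stateful decision-making application would call such a watcher whenever its state machine signals a relevant event, keeping the watcher itself as stateless as any other decision service.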

This approach may also help bring the SBVR and DMN camps together. And I was really glad when afterward I received this email from Ron: “DecisionCamp was certainly an interesting experience. The group of people in the room were most accommodating and engaged. Thank you again for the invitation. I enjoyed interacting and now have a much better feel (not FEEL) for where things stand.”

Talking about Stateful vs. Stateless: in an informal discussion, I was asked: “So, how does OpenRules plan to implement stateful decision services?” My answer was: “We don’t!” I believe it is much more practical for operational, behavioral, or other services to remain stateless and to be invoked upon certain events from stateful decision-making applications.

From Dynamic Rules Interpretation to Code Generation

In my kickoff presentation, I stated that modern decision-making applications require super-fast decision services. Most of today’s rule engines are already very fast, but our customers want them to become even faster. It’s not possible to improve performance and start-up time if we continue to rely on the dynamic features that gave us so much run-time flexibility, e.g. Java’s reflection and dynamic class loading.

Two open-source vendors (Red Hat and OpenRules) decided, almost at the same time, to create new versions of their rule engines that do not rely on the dynamic run-time features of Java. It was great to hear Mario Fusco present their new product “Kogito” and announce that “Drools is now reflection-free!” I loved talking to Mario and his colleagues and congratulated them on this serious achievement. Later, during the “Vendor Announcements” session, I also announced the public availability of our new “OpenRules Decision Manager”.

While our products have different input rule formats and use different parsers and generators, the basic approach is the same: we transform all rules into automatically generated Java code at design time, essentially minimizing run-time overhead! Now neither product even needs access to a rule repository to start up and execute decision services. The result: super-fast start-up, an essential performance improvement for every run, and a low memory footprint. Besides, it allows our products to become good citizens of the brave new polyglot world, with very interesting tools such as GraalVM and Quarkus.

I was never a fan of code generation in situations where you force your customers to deal with the generated code instead of the original business-oriented rules. The important fact is that both Red Hat and OpenRules keep all necessary references to the original rules inside the generated code, which allows them to report errors and produce explanations in the business terms used by the people who wrote the original rules.
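The idea of generated code that stays explainable can be sketched as follows. This is not the actual output of Kogito or OpenRules Decision Manager; the rule names and loan logic are invented to show how a back-reference to the source rule lets errors and explanations be reported in business terms.

```java
// Hypothetical sketch of generated code that keeps a back-reference to
// the business rule it came from. This is NOT the actual output of
// Kogito or OpenRules Decision Manager; the loan rules are invented.
public class LoanDecision {

    public record Result(String decision, String firedRule) {}

    public static Result decide(int creditScore, double debtRatio) {
        // Each branch records the name of the source rule that fired,
        // exactly as it appears in the business decision model.
        if (creditScore < 580) {
            return new Result("Decline", "Low Credit Score");
        }
        if (debtRatio > 0.43) {
            return new Result("Decline", "Debt Ratio Too High");
        }
        return new Result("Approve", "Default Approval");
    }
}
```

An explanation such as “Declined by rule ‘Low Credit Score’” can then be produced for the business user without ever exposing the generated Java.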

Cloud-based Decision Services and Serverless Revolution

The Serverless revolution is upon us! “As your business deals with more and more data, you need to handle all of it at higher speeds to keep up. Whether you are processing high-speed transactions, looking for fraudulent behavior, or transforming streaming data into an analytics-ready format, you can’t meet your SLAs by just throwing more of the same resources into your data architecture. You need a technology shift that will significantly change the way you pursue your data initiatives.” – Hazelcast.

Our customers are aggressively migrating from monolithic to microservices architectures. This means that many of today’s operational decision services will soon be deployed in the cloud as microservices.

Most BR&DM vendors have already announced their support for microservices and stressed that they can work with almost any cloud environment. However, there is a big elephant in this room called AWS (Amazon Web Services). By some expert estimates, today AWS occupies almost 70% of the cloud market, and AWS Lambda is de facto the most popular serverless architecture. So, I was proud to announce our new OpenRules One-Click Deployment mechanism that allows business analysts (!) to deploy their business decision models as AWS Lambda functions.
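For illustration, here is roughly the shape a rules-based decision service takes when deployed as an AWS Lambda function. A real deployment would implement the `RequestHandler` interface from the AWS Lambda Java library; this sketch uses a plain static method (and an invented vacation-days rule set) so it stays self-contained.

```java
import java.util.Map;

// Sketch of a decision service shaped for AWS Lambda. A real deployment
// would implement com.amazonaws.services.lambda.runtime.RequestHandler;
// here the handler is a plain static method so the sketch compiles on
// its own. The vacation-days rules are invented for illustration.
public class VacationDaysService {

    // With design-time code generation this logic is baked in: nothing
    // is loaded from a rule repository during the Lambda cold start.
    public static Map<String, Object> handleRequest(Map<String, Object> input) {
        int age = (Integer) input.get("age");
        int yearsOfService = (Integer) input.get("yearsOfService");
        int days = 22;                          // base entitlement
        if (age < 18 || age >= 60) days += 5;   // extra days for young/senior
        if (yearsOfService >= 30) days += 3;    // loyalty bonus
        return Map.of("vacationDays", days);
    }
}
```

Because the generated rules carry no reflective machinery, such a handler starts fast enough to fit Lambda’s pay-per-invocation model.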

We may assume that in the near future more and more operational decision services will reside on AWS Lambda, and decision-making applications will mainly be concerned with their orchestration. We had many informal discussions about this during the DecisionCAMP.

One related question was raised by Keith Swenson: do we need a standardized decision service API? Decision services are already included in DMN 1.2, but do they need a standard API? What is so special about decision services when we compare them with other microservices, for which such APIs are already defined by major cloud vendors? Many of us remember the JSR-94 API, which took vendors a lot of effort to implement but is rarely (if ever) used today.

Model-Based Optimization

This year Decision Optimization was well represented at DecisionCAMP. Last year I actively promoted a “model-based optimization” approach to decision modeling and referred to the work of Prof. Bob Fourer. This year Bob kindly accepted my invitation and gave a very interesting presentation, “Model-Based Optimization for Effective and Reliable Decision-Making”. I was a bit concerned that the use of math formulas for modeling might confuse our audience, and I was relieved to see how well Bob’s presentation was received and how many interesting discussions it raised.

Like optimization experts, business rules specialists always promote the declarative approach, concentrating on “WHAT” instead of the procedural “HOW”. However, our DMN-style decision models usually force business analysts to write many rules covering all combinations of decision variables to make sure that the model defines one and only one decision. I believe Bob successfully made a very important point: we don’t need to describe the method (“HOW”) that leads to a solution – in the optimization world such methods are called “heuristics”. Instead of defining heuristics in rules, our decision model can define only the major business constraints (rules) and let an off-the-shelf solver (accessed through a modeling language such as AMPL) find a solution that optimizes certain business objectives or minimizes rule violations.
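A toy illustration of the model-based idea: instead of enumerating rules for every combination of decision variables, we state a few constraints and an objective and search for the best feasible value. A brute-force loop stands in for a real CP/LP solver; the pricing model, demand curve, and all numbers are invented.

```java
// Toy illustration of model-based decisioning: state constraints and an
// objective, then search, instead of writing a rule for every case.
// A brute-force loop stands in for a real CP/LP solver; the pricing
// model and the demand curve are invented for this sketch.
public class PriceModel {

    // Constraints: price in [min..max] and price - cost >= minMargin.
    // Objective: maximize profit (price - cost) * demand(price).
    public static int bestPrice(int min, int max, int cost, int minMargin) {
        int best = -1;
        double bestProfit = Double.NEGATIVE_INFINITY;
        for (int price = min; price <= max; price++) {
            if (price - cost < minMargin) continue;  // infeasible: skip
            double demand = 1000.0 - 7.0 * price;    // toy demand curve
            double profit = (price - cost) * demand;
            if (profit > bestProfit) {
                bestProfit = profit;
                best = price;
            }
        }
        return best;  // -1 when no price satisfies the constraints
    }
}
```

The model only says what a valid price is and what “better” means; no rule enumerates the price combinations, which is exactly the division of labor between rules and solvers discussed above.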

It was interesting to see that the same approach was proposed by KU Leuven researchers in their presentation “Saint-Gobain Digital Engineer”. They said that instead of defining exact thresholds, their customers want to define only allowed intervals [min..max] for certain decision variables.

They used a combination of DMN rules and a constraint solver to model and solve their complex construction problems. Similar ideas were applied by Marjolein Deryck to create a decision model for “The Notary Case”. I think the BR+CP/LP approach has a promising future in the decision management domain.

When Jan Purchase and I later discussed the integrated use of rule engines and constraint/linear solvers, I suddenly learned that 20 years ago Jan actively worked in the UK with CP/LP tools developed by the great French company ILOG – the same job that I happily did in the USA as the first American consultant and trainer for ILOG between 1993 and 1999. Today the next generation of ILOG (now IBM) developers successfully continues these traditions, offering great CP/LP optimization tools.

Real-world Use Cases

This year we had many presentations “from the trenches” – a great demonstration of the real-world use of the decision management technologies. Sandy already blogged about them, so I will just list them here:

I want to stress that this year we had many young and highly talented practitioners among our presenters.

Vendor Announcements

This year we gave each attending vendor 5 minutes to announce the latest achievements in their product offerings. We also created a special webpage, “Vendor’s Corner”. Afterward, people told me it was a good opportunity for both vendors and practitioners, and we plan to continue it next year.

Integration with RuleML+RR

DecisionCAMP-2019 was once again co-located with the RuleML+RR conference. We had a common keynote by Mike Gualtieri and good informal discussions during lunches and coffee breaks. I saw many RuleML+RR attendees sitting in our sessions and asking good questions – a good sign. I spoke with several RuleML+RR academics about possible cooperation. Hopefully, we will finally move beyond good intentions to practical common development. Here are related papers from RuleML+RR:

Human Networking

At events like DecisionCAMP, networking with colleagues is always one of the most valuable components. We had many formal and informal discussions that were important professionally but frequently went far beyond technological issues. It was Ritchie McGladdery from RapidGen who initiated discussions about ethics during our Q&A panels and who brought a human touch to many talks. Ritchie, David, Jan, Gediminas, Ulrich, Alan, Gary, Matteo, and many, many others created an atmosphere in which a highly technical discussion could naturally turn into a very personal story.

There were so many interesting people at DecisionCAMP! Attendees from 12 different countries, people of different ages, backgrounds, and life experiences found themselves talking about really important subjects over a coffee or a beer. I am always surprised by how openly we sometimes share with near-strangers personal life stories that we rarely share with our kids, families, or close friends. Something good was in the air during the week of September 16 in beautiful Bolzano that made this event really memorable for many of us.

Early in the morning of Sep. 20, I was checking out of the Città hotel when the concierge first greeted me in Italian, then switched to English and, noticing my accent, asked about my background. In 5 minutes I learned that he speaks 10 languages fluently and grew up in Moscow on the same street where I defended my dissertation in 1986. Thank you, Bolzano, for many memorable moments!


We still have no final commitment on where we will hold DecisionCAMP-2020. Our attempts to find a US or Canadian host haven’t been successful. Now we have two European candidates for next year: Oslo and Prague. The final decision will be made soon. Still, if you know a university or an organization that is willing to host DecisionCAMP and RuleML+RR in 2020, 2021, or 2022, please let me know.

What People Say

Here are some quotes from attendees:

  • “Thank you for organizing such an interesting conference. I’m enjoying it and learning a lot.” – Mike Gualtieri
  • “This is my first time at DecisionCAMP, and I’m totally loving it. It’s full of technology practitioners — vendors, researchers and consultants — who are more interested in discussing interesting ways to improve decision management and the DMN standard than plugging their own products.” – Sandy Kemsley
  • “Thank you for arranging a splendid DecisionCAMP. We value your hard work and enterprise to support the Decision Management community. I’m sure that everyone in attendance found it informative and educational.” – John Ritchie McGladdery
  • “The DecisionCAMP had a very impressive panel of speakers.” – Marco Montali
  • “Thanks for all the great work in putting this event together. DecisionCAMP is one of my favorite events to attend.” – Denis Gagne
  • “Had an amazing time last week in Bolzano!” – Fernando Donati
  • “I wanted to compliment you on a fantastic and well organized conference.” – Kedar Kulkarni
  • “Many thanks for the great event and the forum for meeting peers to discuss and exchange ideas.” – Gediminas Vedrickas
  • “It was a pleasure to attend the event. I am amazed by the exceptional passion shown by so many of the attendees and presenters.” – Harrier Parkinson
  • “Thank you again for organizing DecisionCamp 2019. It was very interesting to meet so many capacities in that field and have the opportunity to have great discussions.” – Stephan Schoenberger
  • “What a fantastic conference!” – Oliver Clark


I want to thank all DecisionCAMP-2019 presenters and attendees for coming and making this event a success. Special thanks go to our hosts from the Bozen-Bolzano University for the great organization of the BRAIN-2019 conference.


Jacob Feldman, DecisionCAMP Chair


3 Responses to Notes from DecisionCAMP-2019

  1. Matteo says:

    I enjoyed reading this summary. I believe I found a typo, however, as it’s meant to be speaking about Conformance Level 3 (not 2).

    The typo is present in two places:

    (even if they do not match the CL2 compliance level)
    insisted that Compliance Level 2 that includes loops, functions, recursion, and other programming constructs, should be mandatory for a vendor to claim DMN compliance

    For reference, we discussed how the RTF also believes that CL2, and by extension the S-FEEL simplified variant, is so limited that it is not advisable to promote it further.

    So I am pretty sure we always discussed Conformance Level 3, which would translate to “100% of the specification”.

    As for another sentence:

    there should be a more flexible definition of compliance as the same capabilities that are expressed in DMN’s complex boxed expressions can also be provided by vendors in a way their customers love much more

    I would like to highlight that the TCK offers this as a side effect: a vendor, while submitting TCK results, can always add a comment about why a particular feature or DMN construct is not supported, or supported only in part, in its own product. That allows users of said product to evaluate in fine-grained detail whether the chosen vendor’s product supports all the required DMN features, rather than relying on a global CL3 yes/no. I am pretty sure Keith also highlighted this during his presentation.

  2. Pingback: Summary of DecisionCAMP from @DecisionMgtCom – Column 2
