Sales Promotions – After Challenge Discussion

In this post I’m arguing that using database tables to represent sales promotions does not compare well with using rules or decision tables. The promotion example in the Jan-2018 Challenge was a very simple one. Unfortunately, in the real world, business users are often not so accommodating. Towards the end of this document I’ve listed some examples of promotions that are in production at real customer sites. The main thing to notice is the wide variety of the promotions.

In modeling this kind of problem there are essentially two parts that need to be accommodated:

  1. The Customer-Order-Item data structure i.e. the core business data
  2. The Promotion rules

Almost certainly the Customer-Order-Item structure will already have been designed and expressed as some sort of database table(s). That structure is still needed to run the business even if we don’t offer rebates or discounts.

Databases are indispensable for the core business objects. And generally the structure is relatively stable. We don’t normally see the core business data model changing every few weeks. In fact making changes to database structures can be quite a slow process requiring the involvement of IT and DBAs for approval.

However, promotions are quite a different kind of animal. The business typically changes its sales promotions quarterly or even more frequently. If the promotion structure never changed, a table would work just fine: the values in the database can easily be changed at a moment’s notice, and the logic can be coded once in Java – since it’s entirely generic it should rarely change. And you wouldn’t need an expensive rule engine.
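To make the table-driven approach concrete, here is a minimal sketch (in Python; the products, thresholds and rates are all invented for illustration) of the kind of generic logic that would be coded once and driven entirely by values stored in a table:

```python
# Minimal sketch of generic, table-driven promotion logic.
# The rows below stand in for database records; names and values are invented.
PROMO_ROWS = [
    # (product, min_qty, discount_rate)
    ("lemons", 6, 0.10),
    ("oranges", 12, 0.15),
]

def discount_rate(product, qty, rows=PROMO_ROWS):
    """Return the first matching discount rate, or 0.0 if none applies."""
    for p, min_qty, rate in rows:
        if product == p and qty > min_qty:  # the comparison itself is fixed in code
            return rate
    return 0.0
```

Changing a threshold or rate means editing a row; changing the comparison itself (say, `>` to `>=`) means changing code – which is exactly the split discussed below.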

But modeling promotions in a database may not give the business the flexibility it would like. The whole idea of a sales order promotion is to let the business users frequently and arbitrarily change their rules – and that means not just the values but the entire structure.

Every time they make a change to the structure (as opposed to the values) of a promotion you could be faced with a potential database redesign – or more likely a brand new database. Or, worse still, the business users are simply told that the database doesn’t support that kind of promotion, so it can’t be implemented.

Along with that new database will likely be a new user interface – and that will probably require some programming. And putting the values in the database still means you have to put the semantics in some kind of code or decision table.

So now you have the business rules split into two parts – the values are in the database but the interpretation of those values is coded somewhere else. Very hard to see what’s going on since neither place gives the complete picture.

This means that a business user, looking at the values in some kind of web UI which has to be custom built for the purpose, will need something to explain what the various data values actually mean. And if this explanation is provided in English there is the very real possibility of misunderstanding (is the value in a column interpreted as “greater than” or “at least”?). Providing the code probably wouldn’t help most business users.

Similarly, if you only look at the code you see something very generic that often gives little clue as to what actual values may be in play. You also have to look at the values in the database to make complete sense of the rules – for example, to know that lemons get a discount if you order more than 6.
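The lemons example shows how easily the “greater than” vs “at least” question bites. A tiny hypothetical sketch (the row and its column name are invented) of two readings of the same stored threshold:

```python
# A single database row: lemons are discounted at a threshold of 6.
row = {"product": "lemons", "qty_threshold": 6}

def eligible_greater_than(qty, row):
    # Interpretation 1: "greater than" -- 6 lemons get no discount.
    return qty > row["qty_threshold"]

def eligible_at_least(qty, row):
    # Interpretation 2: "at least" -- 6 lemons do get a discount.
    return qty >= row["qty_threshold"]
```

An order of exactly 6 lemons is eligible under one reading and not the other – and nothing in the database row itself tells the business user which reading the code applies.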

A compromise is the use of in-memory lookup tables defined inside the rule model and created during the execution of the rules rather than being stored on an external device.

In-memory tables can be more efficient (no need to read an external data source) and more flexible (very easy to change). In practice most database lookup tables are stable for short periods and end up getting cached in memory anyway to improve performance, so why not eliminate the overhead entirely?
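A minimal sketch, assuming an invented pricing lookup, of such an in-memory table built inside the decision logic rather than read from an external store:

```python
# In-memory lookup table created when the rules are loaded,
# instead of being fetched from an external database.
# Tier names and prices are invented for illustration.
PRICE_TIERS = {
    "standard": 7.00,
    "silver": 6.00,
    "gold": 5.00,
}

def unit_price(customer_tier):
    # Falls back to the standard price for unknown tiers.
    return PRICE_TIERS.get(customer_tier, PRICE_TIERS["standard"])
```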

But what if applications other than the rules need access to that lookup table defined in the rule model?

You simply allow other applications to call the decision service to get the data rather than making a SQL call to a database – it might actually be quicker – and you have a convenient way to isolate the application from the physical database. Note that this approach is most useful for read-only data such as lookup tables for pricing etc. You still need physical databases for the core business data which is typically in a constant state of flux.

In contrast, by putting it all in a decision table (e.g. in tools such as Corticon, OpenRules and others) you get these benefits:

  1. Everything is in one place – values and semantics – no need to cross-reference the database when reading the rules. And you can still use in-memory tables for the promotion details if you want to – though then you miss out on the ability of the decision table to automatically detect ambiguities and omissions in the rules (though it’s certainly possible to write rules to do this).
  2. There are no constraints on what kinds of weird rules you can create (and business users do come up with some pretty crazy rules for their promotions – see examples further on)
  3. The decision table is a familiar concept to most business people; it has been around longer than programming languages (and if you have a natural language form of the decision table like Corticon it’s even easier)
  4. When you update the decision table with new values, ranges or conditions, a good rule engine will be able to tell you if you have missed any combinations of conditions or if you have any conflicting rules – something that’s very hard to accomplish when you simply allow business users to change values in a database through a web interface (unless you want to write all the validation logic yourself – in which case you find yourself replicating in the web UI a lot of the semantics that are already in the rules/code that makes the decision, just to know whether values conflict or are missing).
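The kind of check described in point 4 can be sketched for the simple one-dimensional case – given the quantity ranges used by a table’s rules, find gaps and overlaps (a simplified illustration; real engines check combinations of conditions):

```python
def check_ranges(ranges, lo, hi):
    """Detect gaps and overlaps among [start, end) quantity ranges
    that are supposed to cover [lo, hi) completely."""
    gaps, overlaps = [], []
    cursor = lo
    for start, end in sorted(ranges):
        if start > cursor:
            gaps.append((cursor, start))      # uncovered span: no rule fires
        elif start < cursor:
            overlaps.append((start, cursor))  # two rules both fire here
        cursor = max(cursor, end)
    if cursor < hi:
        gaps.append((cursor, hi))
    return gaps, overlaps
```

For example, ranges (1, 51), (51, 101), (101, 251) cover 1–250 cleanly, while (1, 50) and (51, 101) leave quantity 50 with no rule at all.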


While it is certainly possible to define database structures to represent sales promotions, it may be better to use rules or decision tables to support the wide variety and frequent change that seems to occur in the real world.

Examples of Actual Promotions

Promotions Based On Historical Data

Rebates for the last quarter of 2017 are given for product sales in certain categories, provided they are more than 5% over the prior year’s sales. Prior-year sales are actually a composite of parts of 2016 and parts of 2017, based on sales date and various other factors, so the logic for determining which prior sales to count is somewhat complex and arbitrary.

Promotions Based on the Customer’s Sales Ranking

Each customer’s total eligible sales are ranked in relation to other customers in the same category.  Rebates are given based on the customer’s relative ranking, not on absolute sales figures.

Promotions based on Knowledge of the Customer’s Business

A customer’s segmentation is determined by using information about their business (e.g. the number of X, Y or Z items the customer already has) to calculate an estimate of the sales that will be made in the forthcoming year (for example the sales of new items needed for the maintenance of X, Y and Z). Rebates are then given depending on the customer segmentation.

Goal Based Promotions

Customers are required to provide estimates of their likely purchases in each of several categories for each quarter in the upcoming year.

Then each quarter the actual sales are compared with the estimate, and rebates are given when sales exceed the estimate. Any amount over the estimate can be carried forward from the prior quarter to offset low sales in that quarter (Q1 may not use any excess from the prior year’s Q4). For any given quarter the number of categories that met the quota is counted:

  1. If all categories meet the quota, the customer gets 100% of the rebate.
  2. If one or two categories fail to meet the quota, those categories get no rebate but the others that meet the quota get 100%.
  3. If three categories fail to meet the quota, those categories get no rebate but the others that meet the quota get 50%.
  4. If more than three categories fail to meet the quota but the aggregate quota was reached, the categories that meet the quota get 50%.
  5. If the aggregate quota was not reached, there is no rebate even if some categories meet their quota.

Customers are encouraged to set higher quotas to get better unit prices (otherwise they would all make estimates of zero and always exceed their quotas). The quota is reduced by any backordered amounts.
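Logic this tangled is exactly what rule engines are for, but a straight-line sketch helps pin down one plausible reading of the policy (the carry-forward and backorder adjustments are assumed handled upstream, and “meets the quota” is read as “at least” – the spec says “exceed”, which is exactly the greater-than/at-least ambiguity discussed earlier):

```python
def rebate_percentages(categories):
    """categories: {name: (actual_sales, quota)} for one quarter.
    Returns {name: rebate_percent}. One plausible reading of the policy;
    carry-forward, quota reductions, etc. are assumed applied upstream."""
    met = {n for n, (actual, quota) in categories.items() if actual >= quota}
    failed = len(categories) - len(met)
    if failed == 0:
        pct = 100                       # everyone met: full rebate
    elif failed <= 2:
        pct = 100                       # failing categories get 0, the rest 100%
    elif failed == 3:
        pct = 50                        # failing get 0, the rest only 50%
    else:
        # More than three failures: 50% only if the aggregate quota is reached.
        aggregate_met = (sum(a for a, q in categories.values())
                         >= sum(q for a, q in categories.values()))
        pct = 50 if aggregate_met else 0
    return {n: (pct if n in met else 0) for n in categories}
```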

Variable Item Pricing Promotions

Customer orders 250 items; total = $1400

Over 100 is priced at $5 (150 × $5 = $750)
51–100 is priced at $6 (50 × $6 = $300)
1–50 is priced at $7 (50 × $7 = $350)
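The arithmetic can be verified with a small tier-walking sketch using the boundaries from the example:

```python
# Price tiers from the example: (upper_bound, unit_price); None = no upper bound.
TIERS = [(50, 7), (100, 6), (None, 5)]

def order_total(qty):
    """Walk the tiers, charging each band at its own unit price."""
    total, lower = 0, 0
    for upper, price in TIERS:
        band = qty - lower if upper is None else min(qty, upper) - lower
        if band <= 0:
            break
        total += band * price
        lower = upper if upper is not None else qty
    return total
```

For 250 items this gives 50 × $7 + 50 × $6 + 150 × $5 = $1400, matching the example.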

If anyone else has examples of actual promotions used in production it would be interesting to see some details.

This entry was posted in Challenges, Decision Modeling.

5 Responses to Sales Promotions – After Challenge Discussion

  1. Bob Moore says:

    Mike, I agree with a great deal of what you say, but equally there are points I must take issue with. While I agree wholeheartedly with you that you cannot hope to use database tables to represent sales promotions, we clearly have divergent ideas of what ‘represent’ means here.

    It sounds like you have a lot more experience than I with contemporary sales promotions, but some years back I led a project to build a decision system calculating the remuneration of third-party resellers of financial products. At heart it’s a similar problem to promotions – with remunerations replacing discounts, and the additional twist that the resellers can be rewarded on their own performance and/or on the performance of the people they sell to. For this project there were, as I recall, twelve distinct types of remuneration scheme, each driven by several parameters. The third-party resellers (typically with overall revenues in the billion-dollar range) would each negotiate which scheme(s) they wanted to use and what the scheme parameters should be. So each remuneration scheme (equivalent to a promotion) was (at least in principle) unique to both the customer and the product.

    My thinking on the topic of promotions is heavily influenced by my experiences designing and building the solution. To give a feel for the parallels, the first, fourth and fifth examples you list in your post had almost exact equivalents in the remuneration schemes we worked on (while we could have incorporated the second and third examples easily enough, they don’t really make sense for third-party remuneration). One or two of the schemes we had to build were sufficiently complex that even the people who invented them struggled to explain how they were supposed to work.

    The first problem I have with your analysis is that in my view there are essentially three parts which need to be modelled, not two. To explain what I mean, consider decision tables. One way to look at them is just to say each row is a rule. But for me, this fails to capture why decision tables are so expressive. I view the structure of a decision table as defining a set of rule templates (in the simplest case only one) – which is the ‘real’ business knowledge – how we do business regardless of with whom or what. The rows then provide the ‘facts’ to instantiate a set of rules which defines how we do business ‘at this moment’.

    When the business first defines a decision table their primary concern should be the structure, not the content. They need to identify the inputs, the conditions applied to the inputs and the outputs. They need not be overly concerned with the contents of the individual cells, since these are expected to change over time, nor with the number of rows (unless the table is going to be very large, when some additional design considerations are often needed). This separation of concerns is also reflected in how maintenance is done. The skill set and knowledge needed to design and create a decision table from scratch, or to alter the structure of an existing decision table, is different to the skill set and knowledge needed to update cell contents or to add or remove rows. Additionally, adding/removing/editing rows will occur much more frequently than changes to the structure. So, in my opinion we have three essential parts to the system:

    The Customer-Order-Item data structure i.e. the core business data
    The logic defining how each type of promotion works (rules, rule templates etc.)
    The facts/parameters defining specific promotion instances

    If one is prepared to accept that decision tables are a combination of business logic (rule templates) and facts (the actual values of cells in the table), there is an obvious mapping between any decision table and a set of concrete rules (corresponding to the rule templates) and a set of objects (whose attributes map to the values of the cells of the decision table). And there is generally little problem in mapping a set of objects to entries in a database (but importantly there is no obvious mapping of the rules). So we have different ways of representing the same decision logic and/or the same supporting facts. Each representation has strengths and weaknesses.

    It is my contention that regardless of whether the ‘facts’ are in a decision table (cell values), or are attributes of objects, or are stored in a database (as database fields), the ‘real’ business logic is in the rule templates or rules which manipulate these ‘facts’. As long as the logic is in the decision system, that is where your sales promotions or remuneration schemes are defined/represented.

    The ‘real’ logic is defined and maintained by people with both an understanding of how the business wants to do things and the hands-on skills to actually build decision systems. The ‘facts’ are defined and maintained by people defining detailed promotion policy, but they have no need of understanding the details of how decisions are actually implemented. There is no reason the same person cannot do both jobs, but they are using different skill sets for each if they do.

    Having split out the facts from the rules and recognised different groups are responsible for maintaining the two things, there is no obligation that we store them in the same system. We certainly can if we want, but equally we can choose not to. We should do what makes most sense.

    This leads onto the second point you make, with which I have to take issue. Suppose I do externalise my ‘facts’ and put them in the database. When the business dreams up a new kind of promotion do I need to redesign my database for new facts? Well in general no, at least not if you design the database properly. For the remuneration scheme project, we designed two or three tables, to hold the ‘facts’. The design supported all the existing types of schemes, three or four additional (and rather esoteric) schemes which were being considered and with plenty of flexibility to support more variations.

    I finished feeling that whatever new schemes came along later I was at least 90% certain no database schema changes would be needed, only new database records (this is compared to being 100% certain new rules, decision tables and/or procedural logic would be needed in the decision system to support any new scheme). If I’d wanted to be 100% certain I could have gone for a tagged database format with the facts represented as entity/attribute/value triplets, which can represent pretty much anything. Another option, not available to us at the time but commonplace now, would be to store the facts in a NoSQL database like MongoDB, which supports the storage of arbitrarily complex object structures.

    I should perhaps explain at this point why, when building the remuneration system, we took the design decision to store our facts in the database, not the decision system itself. Well, the truth is it wasn’t really our idea! Firstly, as I said earlier, all the remuneration schemes were customer-specific and the business made it very clear it did not want us to have customer information stored in the decision system. We might have been able to work around this, but the business also expressed concerns about having to redeploy the decision service each time a fact was updated, because of the draconian approach taken to application updates by their operations team. I commented that one way around both issues would be to put the facts in the database, and they loved the idea. So that’s what we did.

    And doing things this way did have some nice side effects. With the facts in the database along with all the customer and product information, users needed no access to the decision service to find out what remuneration schemes existed, which customers and products they were tied to, and what kind of benefits they provided. It also made managing history easy – tracking what promotions looked like last month, last year etc. The database was a one-stop shop for everything the business needed to know about remuneration (since they didn’t need to know the logic needed to actually do the calculations).

    So, going through your summary points:

    “Everything is one place – values and semantics” – I’ve argued there is a natural split, so while it is a bit neater to have everything in one place it’s not a problem if you don’t. It’s more work for the people building the system, but my experience is the people managing the ‘facts’ are often more comfortable using a UI onto a database, like they use for most other applications. Consistency checks are a potential problem, but with ‘weird’ rules my gut feel is that the range and coverage checking capabilities of decision tables will fall short anyway. Finally, I must admit to being very dubious about using in-memory tables. On the one hand, if the decision system is customer facing, it is likely on the wrong side of the DMZ and firewall, so you need to jump through hoops to make it visible from the back office. On the other hand, if it is batch oriented (as was the case with the remuneration system), in-memory tables are only accessible in the dark hours when the batch is running!
    “There are no constraints on what kinds of weird rules you can create (and business users do come up with some pretty crazy rules for their promotions)” – as I hope I’ve explained, this will be true regardless of where you put the facts – the weirdness is in the rules and other logic, not the facts they use.
    “The decision table is a familiar concept to most business people” – I might mischievously suggest it’s familiar to ‘business people’ nowadays because they’ve been looking at relational tables for the past thirty years. Human beings have been using tables to represent relationships since Babylonian times, and decision tables are certainly no easier but probably no more difficult to understand than relational database tables.
    “When you update the decision table with new values, ranges or conditions, a good rule engine will be able to tell you if you have missed any combinations of conditions or if you have any conflicting rules” – this is a valid and cogent point, but as I say above, ‘weird’ rules may need additional logic. However, I have a couple of further observations on this. Firstly, I sometimes find that actually needing to make consistency checks is almost a consequence of using decision tables to represent your knowledge in the first place. This is particularly true with ranges. The SQL geeks I’ve worked with seem keen to specify only one end of a range, not both, so you never get gaps or overlaps. I made use of their approach when looking at the soldier’s pay as well as the sales promotions challenge. We also used it in the remuneration system for the schemes with tiers, like your “Variable Item Pricing” example. Secondly, as a more general observation, the way we designed the logic/facts structure when building the remuneration system, the opportunities for ‘inconsistency’ were few and far between. There were plenty of opportunities for the users to define promotions which made no business sense, but not ones which did not make ‘logical’ sense.
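The “specify only one end of a range” approach described above can be sketched with the Variable Item Pricing numbers: each row states only its lower bound, and the applicable row is the one with the largest bound not exceeding the quantity, so gaps and overlaps are impossible by construction (an illustrative sketch, not any particular tool’s mechanism):

```python
# Each row gives only a lower bound; with rows kept sorted by that bound,
# gaps and overlaps cannot occur.  (from_qty, unit_price)
RATE_ROWS = [(1, 7), (51, 6), (101, 5)]

def rate_for(qty):
    """Pick the row with the largest lower bound that is <= qty."""
    applicable = [price for frm, price in RATE_ROWS if frm <= qty]
    return applicable[-1] if applicable else None
```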

    So, my initial response to your discussion would be:
    I heartily agree with you that it is not possible to define database structures which represent sales promotions, for the kinds of promotions I would regard as interesting. It is absolutely a requirement to use rules and other knowledge representation mechanisms to define them.

    However, I disagree with a number of the points you make because in my view promotions fall into two parts:

    The first part is the business logic used to determine if a promotion is applicable, and how to calculate what kind of discount, rebate or other benefit it provides. This logic must (in my view) be implemented in a decision system.
    The second part is the instances of a promotion, which define to which customers and products a discount or other benefit applies, the thresholds and other conditions to be met for the promotion to apply, and the benefit rates to apply. These are just facts (numbers, percentages, product ids, customer ids, etc.). These facts can sit in the decision system, on the file system, or in a database. However, if – and on balance I would argue this is generally true – systems outside the promotion decision service want to know what these facts are, a database is probably a more natural way to do this.

  2. I like Bob’s fighting spirit and enjoyed reading how he defends the use of objects vs rules in his solution, or the placing of rule values in a database, in this discussion. Bob explained how a mix of rules and DB was justified in some legacy situations in the ’90s. However, I am sure Bob knows better than many other people that today he wouldn’t find support for a battle lost a long time ago. It’s hard to agree with Bob’s main contention about decision tables:

    “It is my contention that regardless of whether the ‘facts’ are in a decision table (cell values), or are attributes of objects, or are stored in a database (as database fields) the ‘real’ business logic is in the rule templates or rules which manipulate these ‘facts’. As long as the logic is in the decision system, that is where your sales promotions or remuneration schemes are defined/represented.”

    Here is why. The decision table logic is defined not only by the table structure (“template”) but also by the rows themselves, which frequently represent inter-related rules. The content of decision table cells makes the rules mutually exclusive (or not), utilizes accumulation and rule overrides, defines possible defaults, and much more – the DMN standard is good at explaining these issues in detail. Even the values inside decision table cells are not just “data or parameters” which you can keep somewhere else (like a DB table). They could, for instance, be presented as a FEEL expression defined by other decision variables not even specified in the columns of this table. The decision table authors and maintainers must see the entire table to understand how it actually works. So, nowadays we don’t fight this battle anymore.

    I’d like to push this “after the challenge” discussion in a more practical direction and discuss how DMN-based solutions can handle the multiple iterations required by this problem. In his solution Bob wrote: “lacking access to a level 3 compliant DMN tool, I leap straight to implementing the table in Drools”, using for-loops and accumulate statements. What if Bob had such access?

    Thanks to Bruce Silver’s solution we can now see how the same (or a slightly more complicated) problem can be implemented using DMN Conformance Level 3 (CL3) tools. Most of us decision modeling practitioners would like the DMN standard to succeed and become an everyday reality. However, we do have our differences. For example, Bruce usually promotes the DMN boxed expressions and explains how simple it is to use them in general and for iterations in particular. At the same time, I frequently criticize those boxed expressions, showing how to replace them with more traditional-looking decision tables of the new types “Iterate”, “Sort”, etc. – see my solution to this challenge and a few links below. I don’t want to sound too negative again, but if I had to choose between Bob’s Drools loops and Bruce’s boxed expressions, I’d choose Bob’s approach (or simply show how to do it in basic Java). To judge for yourself, please take a look at the boxed expressions “Get item discounts” and “Discounted item prices” in Bruce’s solution. I trust we may find people who can really understand how these boxed expressions work. However, putting for-loops and other programming constructs in boxes does not make them more graphical or friendlier. This time even Bruce himself wrote:

    “The tricky part is when item A is eligible based on the quantity of item B or category C, involving nested loops, table joins, and similar nasty things that FEEL can do but is a little obscure.”

    Everybody knows about these DMN issues; we discussed them at several recent forums (DecisionCAMP and BBC-2017) and agreed that we need a better user-facing graphical notation for iterations. I am not suggesting eliminating these boxed expressions from DMN completely (e.g. they could be used in the automatically generated DMN XML), but hopefully this discussion, along with the solutions to similar previous DMCommunity challenges, will finally push the DMN RTF to extend the standard with more user-friendly alternatives. Here are the related links:

  3. Bob Moore says:

    Ouch! The dig about systems from the ’90s is a low blow. I evidently need to come out fighting again.

    To try and put the record straight, the remuneration system is circa 2009 and, given Mike’s examples, it seems as relevant today as it was a decade ago. And the last time I had occasion to advise someone to replace a decision table with a database table was around 2015 (I don’t do this often; after all, it’s only occasionally the right thing to do). In this case the gentleman in question took my advice and got a big pat on the back as a direct result. A little while later he rang to say it was also a factor in him getting a promotion.

    I’m not talking legacy systems, I’m talking here and now.

    I think there are a couple of points I need to address here:

    Firstly, logic and facts:

    To be polite, the statement “The decision table authors and maintainers must see the entire table to understand how it actually works” is nonsense.

    Depending on the application, maintainers may or may not need to see the entire table, but the authors cannot hope to. The rows and facts to be incorporated into the table can change between analysis and design, design and implementation, and implementation and deployment. They will certainly have changed six months down the line, long after the authors have moved on to other things. I’ve heard of cases where decision table authors were simply not allowed to know what the actual rows and facts were going to be (I have to leave it to your imagination why this was the case).

    Authors know the logic, but only the maintainers know the facts. And in general the maintainers only need to understand a subset of the logic to do their jobs properly.

    While the facts are absolutely essential to making the correct decision, it is the decision-making logic which is at the core of the decision system.

    The simple question to ask oneself here is if you add a row to a decision table (or remove or edit one), have you fundamentally altered the decision system in the same way as if you had added a new decision table, or added a new input column, or a new output column to an existing table?

    In my view – and I’d expect most in the decision management community would agree – the answer is simply no.

    The hard-won understanding of what decisions the business want to make is not about what rows the business had when you were doing the analysis, or when you were doing the design, or what they were when the system first went live, or even what they are now, but how the decision tables interpret those contents regardless of the actual rows and cell values.

    Indeed, in a very real sense the actual rows you create in your decision tables and the specific values you store in their cells are the last things you need to know when building the system. The design doesn’t depend on them, it is the run-time system which does.

    Of course, the decision table rows may be interrelated. If you have a ‘unique’ table, there are (static) consistency considerations. You may have additional consistency or coverage constraints. If you use ‘first hit’, there are ordering considerations. If you want accumulation and grouping operations, there are other considerations. But you’ll find tables in relational databases with identical characteristics, and nobody argues structure vs facts there. It is the interpretation, not the specific content, which is important to the decision-making process.

    But the distinction between logic and facts is deeper than this. What happens if you look under the hood? When I create a decision table in a tool like OpenRules, or Blaze Advisor, or Drools or ODM, I get an Excel file or an XML document, and when I open any of these up what do I see? I see a top-level structure – the decision table – which the execution engine uses to work out how to interpret the content, and then a list of row data with values. Embedded FEEL expressions? They are condition templates and part of the structure, in the same manner as the condition templates in the decision table I used in the first part of my challenge solution. Looking at the Excel or reading the XML the separation of logic and facts is quite plain.

    And last but by no means least, you just have to look at a decision table to see the facts; the underlying logic might be hidden in embedded FEEL or complicated rule templates, but what you see on the screen are numbers and strings (yes, there are also operators, but they are simply rule template selectors – which may or may not fully explain the logic).

    And of course this is what the decision table maintainers actually want. They don’t care much about FEEL expressions, or hit strategies and other such implementation details. What they are interested in is if they change ‘5’ to ‘6’ in this cell of this row the customer now gets a $6 discount instead of a $5 discount when they buy 3 bags of coffee and 4 of tea.

    For efficiency a rule compiler can and probably should merge logic and facts before executing them, but at analysis time, at design time and at edit time they are most definitely distinct. I’m not hearing anything which gives me the slightest inclination to budge from that point of view.

    Secondly, why did I abandon decision tables for objects and databases in the challenge?

    Maybe the point was missed, but databases are a red herring with regard to my solution to the challenge, and the solution didn’t use one. True, I stored objects in text files, but that was mainly to make testing easier; and true, I said if I had objects I could manage them in a database to make definitions generally accessible. But neither of these points had anything to do with the choice of objects and rules vs decision tables. It’s purely about things you can do with the first that you can’t do with the second.

    Abandoning decision tables for objects was very problem specific. I was considering scenarios with overlapping promotions. I wanted to make the natural logical separation of determining which promotions an order qualifies for, followed by a separate step to decide which of the qualifying promotions should be applied (since they might conflict).

    This led to an implementation challenge using decision tables, because I ended up having to repeat (some of) the defining characteristics of promotions in two separate tables. If you end up with decision tables that are interdependent in this way, an edit to one implies immediately going off to the other table to figure out what the corresponding changes need to be. I used objects to avoid this duplication – avoiding having to put facts like “Promo 1a involves products 1001 and 1002” in two distinct decision tables.
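The duplication being avoided can be pictured concretely. In this hypothetical sketch (the promotion data and helper names are mine, not taken from the challenge solutions), each promotion's defining facts live in a single object, and both steps – qualification and conflict resolution – read from that one definition:

```python
# Hypothetical sketch: promotion facts stated once, consumed by both the
# qualification step and the selection step, so nothing is repeated.
promotions = [
    {"id": "Promo 1a", "products": {"1001", "1002"}, "min_qty": 3, "priority": 2},
    {"id": "Promo 2",  "products": {"1001"},          "min_qty": 5, "priority": 1},
]

def qualifying(order_items, promos=promotions):
    """Step 1: which promotions does this order qualify for?"""
    result = []
    for p in promos:
        qty = sum(q for sku, q in order_items.items() if sku in p["products"])
        if qty >= p["min_qty"]:
            result.append(p)
    return result

def select(qualified):
    """Step 2: resolve conflicts – here, simply pick the highest priority."""
    return max(qualified, key=lambda p: p["priority"], default=None)

order = {"1001": 4, "1002": 2}
best = select(qualifying(order))
print(best["id"])  # Promo 1a
```

With two interdependent decision tables, the `products` and `min_qty` facts would have to appear in both; with a shared object, an edit is made in exactly one place.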

    I’d be delighted to see a clean solution to this duplication problem using decision tables, but until then I’m going to have to defend using objects in such a scenario.

  4. jacobfeldman says:

    Dear Bob,
    Responses like yours really make our discussions interesting. And people who want to hear similar live discussions in person have a chance to do so by attending DecisionCamp-2018, where hot discussions are the norm.

    My comments and your response are a typical example of how we do not hear each other. If you (and others) look at my solution to this Challenge, you will be surprised to find that I actually separated the definition of promotions from the logic (decision tables) that defines whether an order is eligible for a promotion. I used Excel tables of type Data to specify Promotion Items and Promotions (BTW, to allow overlapping items I may simply use both the item’s ID and SKU). Data tables are the way OpenRules represents “objects” directly in Excel. So I actually did what you recommend doing: my decision table “CompareActualAndMinimalQuantities” does not contain any promotion data and simply checks if Qty of Promotion Item Inside Current Order < Promotion Item Minimal Qty.
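The shape of that separation can be sketched as a generic check that carries no promotion data of its own; the promotion items arrive as plain records, as they would from a Data table. (The function and field names below are my own illustration, not the actual OpenRules artifacts.)

```python
# Hypothetical sketch in the spirit of "CompareActualAndMinimalQuantities":
# the eligibility logic contains no promotion data; promotion items are
# supplied as plain records, as from a Data table.
def item_qty_in_order(order_items, sku):
    """How many units of a given SKU does the order contain?"""
    return order_items.get(sku, 0)

def order_eligible(order_items, promotion_items):
    """Eligible only if every promotion item meets its minimal quantity."""
    return all(
        item_qty_in_order(order_items, item["sku"]) >= item["min_qty"]
        for item in promotion_items
    )

promo_items = [{"sku": "1001", "min_qty": 2}, {"sku": "1002", "min_qty": 1}]
print(order_eligible({"1001": 3, "1002": 1}, promo_items))  # True
```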

    Reading your solution, I was initially surprised that you put all the promotional item SKUs and minimal quantities directly into your decision table. It was obvious to me that they belong in the promotion data, but if you decided to mix them with the logic, you have a right to do so as well (though it is not a good practice). As you can see, in our solutions to this particular challenge we actually represented points of view opposite to those it seems we defended in this discussion.

    So why do we misinterpret each other’s statements? When we make generic statements like your “contention about decision tables” or my “nonsense”-generalization that “The decision table authors and maintainers must see the entire table to understand how it actually works”, we have in mind completely different examples on which we base such statements. When I am thinking about a multi-hit decision table with ~7-9 columns and ~50-100 rules (rows), I cannot be too optimistic that “if you add a row to a decision table (or remove or edit one), you will not fundamentally alter the decision system”. Probably you had in mind a decision table in which all rules are mutually exclusive, which unfortunately rarely happens in the real world. And you know perfectly well that when a customer changes some values in the conditions of a relatively complex decision table, she could be quite surprised not to get the expected result. And she will have to look for explanations of which rules were actually executed.

    When I wrote that these days people do not usually keep the decision table structure (“template”) and the actual rules in different places (e.g. in Excel and in a DB), it fits your own argument against it: “you now are using two tools a DBMS and a BRMS”. But at the same time our own solutions for this challenge show that data like promotional items (but not rules!) can come from a database and be used by decision services. So actually we are mainly on the same page, contrary to what people may think from reading our previous comments.

    And finally, I want to confirm that when you expanded the problem to handle multiple overlapping promotions, I really liked your analysis and the suggested approach – I’d do something similar, keeping it in a separate decision service for promotion definition and maintenance that works with a database.

    Thank you,

  5. mikeparishcorticon says:

    Jacob Feldman wrote this in reference to the Sales Promotion challenge:
    “I’d like to push this “after-the-challenge” discussion in a more practical direction and to discuss how DMN-based solutions can handle multiple iterations required by this problem. In his solution Bob wrote: “lacking access to a level 3 compliant DMN tool, I leap straight to implementing the table in Drools” using for-loops and accumulate-statements. What if Bob had such an access?”

    We first need to ask why iteration and loops are necessary.

    Typically the reason is that the data objects are related to one another. Somewhere in the data structure there is a one-to-many or many-to-many association, either explicit or implied. In the case of the Sales Promotions we had an order which consists of one or more items. Hence there is an implied loop if we want to apply some logic to all (or some) of the items in an order. We might even go further and imagine that we have customers who have one or more orders, which in turn have items, so that now we have nested loops.
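The implied iteration can be made concrete with a small sketch. The data shapes below are illustrative only: a one-to-many chain of customer → orders → items turns into nested loops as soon as some logic must visit every item, even though the business rule itself never mentions looping.

```python
# Illustrative data: one customer, many orders, each order with many items.
customer = {
    "name": "Acme",
    "orders": [
        {"id": 1, "items": [{"sku": "1001", "qty": 2}, {"sku": "1002", "qty": 1}]},
        {"id": 2, "items": [{"sku": "1001", "qty": 4}]},
    ],
}

# Applying logic to every item in every order implies nested loops –
# one loop per one-to-many association in the chain.
total_units = 0
for order in customer["orders"]:
    for item in order["items"]:
        total_units += item["qty"]

print(total_units)  # 7
```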

    This type of construct is so fundamental to all types of decision that it really needs to be an integral part of the decision table metaphor, not an after-thought.

    From its very beginnings back in 2000, Corticon has included the concept of associations and the implied loops needed for processing. In fact, it’s such a fundamental part of the decision table that a rule author doesn’t actually need to do any explicit looping in the decision table in order for iteration to occur. This is really powerful: it enables the author to focus on what, not how.

    The key is in the data model or vocabulary. By defining not only the entities and attributes but also their relationships (associations) Corticon is able to automatically figure out when iteration is necessary.

    In Corticon the decision table has, in addition to conditions and actions, two other sections that make this possible. (Take a look at any of the Corticon solutions to the Sales Promotion challenge problem for examples of this.)

    The scope section defines what structural parts of the data model are relevant to this particular decision table – sometimes it’s just a small slice of the data structure, sometimes it’s the whole thing. For example: the rules may deal with a single item or the set of items in an order or the set of orders and items for a customer. Based on this Corticon can automatically figure out the necessary iterations.

    The filter section defines what data values (at execution time) are relevant to this particular decision table. You can also specify any “joins” between objects that may not have an explicit association.

    And, because collections of objects are so fundamental to data processing, Corticon also provides built-in operators that implement many of the more common functions on sets:
    ->sum, ->size, ->max, ->min, ->first, ->last, ->avg, ->exists, ->forAll etc
    These functions can be used in conditions, actions and filters.
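Operators of this kind map directly onto ordinary collection functions. As a rough Python analogue (my own mapping for illustration, not Corticon's implementation):

```python
# Rough Python analogues of collection operators such as ->sum, ->size,
# ->max, ->min, ->exists and ->forAll, applied to item quantities.
# (Illustrative only – not Corticon's implementation.)
quantities = [2, 1, 4]

print(sum(quantities))                  # ->sum   : 7
print(len(quantities))                  # ->size  : 3
print(max(quantities))                  # ->max   : 4
print(min(quantities))                  # ->min   : 1
print(any(q > 3 for q in quantities))   # ->exists: True
print(all(q > 0 for q in quantities))   # ->forAll: True
```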

    Perhaps the DMN standard should also make associated objects and iteration an integral part of the decision table so that boxed expressions, FEEL and other programmer artifacts become unnecessary. I’m curious to know what others think.

    You might want to look back through all the various challenge problems to see which solutions were able to solve them all without going outside the decision table metaphor and without requiring explicit looping constructions.


    BTW, Corticon does allow you to explicitly mark a rule sheet or rule flow as “iterative”. In this case it keeps executing until there are no further changes in the data. So if you are more comfortable creating and controlling your own loops, you can still do it.
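That “execute until nothing changes” behaviour is a fixpoint loop, and can be sketched generically. The driver and the sample rule below are my own illustration of the idea, not Corticon's engine:

```python
# Generic sketch of "iterative" execution: re-apply every rule until a
# full pass changes nothing (a fixpoint). Illustrative only.
def run_to_fixpoint(data, rules):
    """Apply each rule repeatedly until a complete pass changes nothing."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule(data):  # a rule returns True if it modified the data
                changed = True
    return data

# Illustrative rule: top up qty until it reaches a multiple of pack size 6.
def top_up(data):
    if data["qty"] % 6 != 0:
        data["qty"] += 1
        return True
    return False

print(run_to_fixpoint({"qty": 8}, [top_up])["qty"])  # 12
```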
