MIT Technology Review published the article "Giving algorithms a sense of uncertainty could make them more ethical": "Algorithms are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like 'freedom' and 'well-being,' a satisfactory mathematical solution doesn't always exist."
“We make decisions as human beings in quite uncertain ways a lot of the time. But when we try to take the ethical behavior and apply it in AI, it tends to get concretized and made more precise. Why not explicitly design our algorithms to be uncertain about the right thing to do?”
“Algorithms are typically programmed with clear rules about human preferences. We’d have to tell it, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers—even if we weren’t actually sure or didn’t think that should always be the case. The algorithm’s design leaves little room for uncertainty.
The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn’t specify a preference between friendly soldiers and friendly civilians.
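The partial ordering described above can be sketched in a few lines. This is a minimal illustration, not code from the article: the outcome names and the `prefer` helper are hypothetical, and only the pairs the designer is actually sure about get encoded — everything else stays explicitly unknown.

```python
# Partial ordering sketch: encode only the preference pairs we are
# confident about. The friendly-soldier vs friendly-civilian comparison
# is deliberately left out, so the algorithm remains uncertain there.
PREFERENCES = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
}

def prefer(a, b):
    """Return the preferred outcome, or 'unknown' when no ordering
    between a and b was ever specified."""
    if (a, b) in PREFERENCES:
        return a
    if (b, a) in PREFERENCES:
        return b
    return "unknown"
```

Querying an unspecified pair returns `"unknown"` rather than forcing a precise answer, which is the whole point of the technique.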
In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers.
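Uncertain ordering can be sketched similarly, again with hypothetical names and the probabilities taken straight from the example above: several complete preference lists, each carrying a probability, queried for how likely one outcome is to rank above another.

```python
# Uncertain ordering sketch: each total ordering has a probability
# attached (0.75 and 0.25, as in the text). Probabilities sum to 1.
ORDERINGS = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def probability_preferred(a, b):
    """Probability that outcome a ranks above outcome b, summed over
    all the orderings in which it does."""
    return sum(p for p, order in ORDERINGS
               if order.index(a) < order.index(b))
```

Here `probability_preferred("friendly_soldier", "friendly_civilian")` is 0.75, while any friendly outcome beats `"enemy_soldier"` with probability 1.0, since both lists agree on that.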
The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. Have the system be explicitly unsure, and hand the dilemma back to the humans.”
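The "menu of options" idea in the medical example can be sketched as follows. The treatments, their scores, and the objective functions are all made-up illustrations, not data from the article: the system computes one best option per objective and hands the trade-off back to the humans instead of picking a winner.

```python
# Hypothetical treatments scored on the three objectives from the text
# (the numbers are invented for illustration only).
TREATMENTS = {
    "surgery":    {"life_span_years": 10, "suffering": 8, "cost": 90_000},
    "medication": {"life_span_years": 7,  "suffering": 3, "cost": 20_000},
    "palliative": {"life_span_years": 2,  "suffering": 1, "cost": 5_000},
}

def menu_of_options(treatments):
    """Return one best treatment per objective, rather than a single
    recommendation, leaving the final choice to humans."""
    objectives = {
        "maximize life span": lambda s: -s["life_span_years"],
        "minimize suffering": lambda s: s["suffering"],
        "minimize cost":      lambda s: s["cost"],
    }
    return {goal: min(treatments, key=lambda t: score(treatments[t]))
            for goal, score in objectives.items()}
```

With the invented scores above, the menu would pair "maximize life span" with surgery and both "minimize suffering" and "minimize cost" with palliative care — three explicit options with their trade-offs, not one answer.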
See how OpenRules uses Business Rules with Probabilities to deal with uncertainty: https://openrules.wordpress.com/2013/08/14/solving-rule-conflicts-part-2/