The ability to ask the right questions is the key to successful decision modeling

Prof. Warren Powell wrote: “What all of us do, and I think it is without exception, is look at problems through the lens of the modeling frameworks that we have been trained in. We are prototypical hammers looking for nails.

The solution: We need to educate people in how to ask the right questions. These people should *not* be trained in any analytical methodology to avoid the bias that this unavoidably introduces. Instead, they need to learn how to ask the right questions, without any bias toward a solution approach.” Link

Posted in Decision Modeling

Moving away from “vibe decisioning”

David Pidsley, a decision intelligence leader at Gartner, posted this warning today: “GenAI tools instantiate flaws across business decision networks with frightening efficiency when requirements are ambiguous. They will cause growing concerns about uneven decision quality, decision debt (inferred decision traces without explicit decision models), lucky but fragile decision outcomes disguising AI sycophancy as decision logic, and jeopardized economic viability from adding unspecified additional contextual data to enterprise decisions.

Enterprises will be moving away from experimental “vibe decisioning” toward decision architecture-first platforms with governance and quality controls.” That’s another reason for the demand for decision intelligence platforms. Link

Posted in Architecture, Artificial Intelligence, Decision Intelligence, Trends

More about Decision Reasoning Traces

Tony Seale: “Real decisions are never made in a single system. They are made by stitching together signals from CRM, finance, operations, support systems, policy documents, Slack threads – often with human judgement applied at the seams. The most valuable data for enterprise AI is not just what happened, but how and why a decision was reached. This mirrors exactly what foundation model companies discovered when they started building reasoning models. Performance didn’t improve just by scaling data – it improved when they began collecting reasoning traces.” Link

Posted in Decision Intelligence, Decision Making, Decision Tracing, Reasoning

2025 LLMs in Review by Peter Norvig

Peter Norvig wrote today: “I am now done comparing three LLMs to my own coding on the Advent of Code problems. The LLMs did great! They couldn’t have done it last year.” Here are his main conclusions after asking three different LLMs to solve 12 puzzles:

  • Overall, the LLMs did very well, producing code that gives the correct answer to every puzzle.
  • I’m beginning to think I should use an LLM as an assistant for all my coding, not just as an experiment like this.
  • This is a huge improvement over just one year ago, when LLMs could not perform anywhere near this level.
  • The three LLMs seemed to be roughly equal in quality.
  • The LLMs knew the things you would want an experienced software engineer to know.

See the full analysis at https://lnkd.in/gCc2iuPK

Posted in LLM

2025 LLM Year in Review by Andrej Karpathy

Andrej Karpathy published a not-too-technical review of technical developments in generative AI this year: “2025 was an exciting and mildly surprising year of LLMs. LLMs are emerging as a new kind of intelligence, simultaneously a lot smarter than I expected and a lot dumber than I expected.” Link

Posted in Gen AI, LLM

Causal Understanding

Today Pieter van Schalkwyk posted “Decision Traces for Agentic Operations: Why Agents Need Operational Memory”. Here are just a few quotes:

True agency requires causal understanding: Not just knowing what happened, but why it happened and what could happen next. This is what separates genuine AI agents from workflows with chatbot interfaces.

Rules vs. Decision Traces: Rules tell an agent what should happen in general. Decision traces capture what happened in each specific case and why it happened. Link
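The rule/trace distinction can be sketched in code. This is a hypothetical structure (the post does not prescribe any particular format): a rule that states what should happen in general, and a trace record that captures what happened in one specific case and why.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A rule: what should happen in general.
def discount_rule(order_total: float) -> float:
    """Orders over 100 get a 10% discount."""
    return 0.10 if order_total > 100 else 0.0

# A decision trace: what happened in this specific case, and why.
@dataclass
class DecisionTrace:
    decision: str      # what was decided
    inputs: dict       # the specific facts that were considered
    rationale: str     # why this outcome was reached
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide_discount(order_total: float) -> DecisionTrace:
    rate = discount_rule(order_total)
    return DecisionTrace(
        decision=f"discount={rate:.0%}",
        inputs={"order_total": order_total},
        rationale="order_total > 100 triggers the 10% discount rule"
                  if rate else "order_total <= 100, no discount rule fired",
    )

trace = decide_discount(120.0)
print(trace.decision)   # discount=10%
```

An agent that stores such traces can later answer “why did case X get this outcome?” from its own operational memory, rather than re-deriving it from the rules.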

Posted in Agents, Decision Intelligence, Reasoning

Optimization as a Decision Intelligence tool

“Does having a working optimization model guarantee business impact? The uncomfortable answer is no. You can follow every best practice, deploy on the latest technology, and satisfy every stakeholder requirement, and still fail to drive the outcomes your company needs.

Why? Because optimization is a means to an end, not the end itself.

Today, we’re stepping back to see the bigger picture: Decision Intelligence. This is the framework that places optimization within the broader context of engineering decisions that lead to desired outcomes and then operationalizing those decisions in systems that ensure the intended actions are actually taken.” Link See also

Posted in Decision Intelligence, Optimization

DecisionCAMP-2026

The year 2026 is swiftly approaching. We’ve just published a new website for DecisionCAMP-2026, a major annual event devoted to Decision Intelligence Technologies. It will take place online from August 26 to 28, 2026, concurrently with the Declarative AI 2026 conference, and is organized by the Decision Management Community. Registration is FREE. The Call for Presentations is open – you may submit your abstract via EasyChair. Contact us if you plan to present and have any questions.

Posted in Decision Intelligence, DecisionCAMP, Events

Aristotle & Dantzig: How a 2000-year-old ethos aligns with 20th century mathematics

This post is about a bridge between Aristotelian ethics and George Dantzig’s work. Here is where these two great thinkers align:
– Aristotle: Practical wisdom is the ability to deliberate well about what is possible.
– Dantzig: Optimization is the discipline of finding what is best, subject to what is possible (the constraints). Link
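Dantzig’s formulation can be shown in miniature with a toy linear program (the numbers are illustrative, not from the post): “what is best” is the objective, “what is possible” is the feasible region defined by the constraints. For a linear program an optimum always lies at a vertex of that region, so a tiny instance can be solved by checking the vertices.

```python
# "What is best": the objective to maximize.
def objective(x, y):
    return 3 * x + 2 * y

# "What is possible": the constraints defining the feasible region.
def feasible(x, y):
    return (x >= 0 and y >= 0
            and x + y <= 4
            and x <= 3)

# Vertices of the feasible polygon defined above.
vertices = [(0, 0), (3, 0), (3, 1), (0, 4)]
assert all(feasible(x, y) for x, y in vertices)

# Deliberate well about what is possible, then pick what is best.
best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))   # (3, 1) 11
```

Real solvers do exactly this at scale: the simplex method Dantzig invented walks from vertex to vertex of the feasible region, improving the objective at each step.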

Posted in Decision Intelligence, Scientists

Will LLMs replace optimization solvers?

“It’s a tempting story. After all, LLMs can write code, generate documentation, and even produce what looks like a mathematical model. But LLMs are pattern generators. They predict the next word, token, or code snippet based on what they’ve seen in their training. This makes them extraordinary for drafting, summarizing, or translating ideas into a different form. But they don’t prove anything. They don’t guarantee feasibility, optimality, or even correctness.

Optimization solvers live in a very different universe. A solver takes a clearly defined objective and constraints, then searches (often through billions of possibilities) using decades of algorithmic advances. When a solver returns an answer, you can test it, verify feasibility, and often prove that it is optimal. That rigor is the very reason we trust solvers to make billion-dollar decisions in supply chains, energy systems, finance, and beyond.

Rather than thinking of LLMs as replacements, the real power comes when we combine the two. An LLM can help a planner articulate a problem in natural language, suggest new constraints, or explain why a model is infeasible. The solver then does what it does best: provide mathematically rigorous solutions.” Link
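The division of labor described above can be sketched on a toy knapsack instance. The “LLM draft” here is a stand-in greedy heuristic, not a real model call, and the “solver” is an exhaustive search that actually proves feasibility and optimality for this small instance:

```python
from itertools import combinations

# Toy knapsack instance: item values, weights, and a capacity.
values   = [60, 100, 120]
weights  = [10, 20, 30]
capacity = 50

def llm_drafted_solution():
    # Hypothetical LLM output: a plausible-looking pick of items.
    # Nothing about it is guaranteed - it must be verified.
    return [1, 2]

def is_feasible(items):
    return sum(weights[i] for i in items) <= capacity

def exact_optimum():
    # The "solver": exhaustive search with a feasibility check, so the
    # returned answer is provably optimal for this instance.
    best_items, best_value = [], 0
    for r in range(len(values) + 1):
        for combo in combinations(range(len(values)), r):
            if is_feasible(combo):
                v = sum(values[i] for i in combo)
                if v > best_value:
                    best_items, best_value = list(combo), v
    return best_items, best_value

draft = llm_drafted_solution()
opt_items, opt_value = exact_optimum()
print(is_feasible(draft))                          # True: draft is feasible...
print(sum(values[i] for i in draft) == opt_value)  # ...and here also optimal
```

The point is the workflow, not the heuristic: the language model proposes, and the optimization machinery verifies and certifies - which is what makes the combined answer trustworthy.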

Posted in LLM, Optimization, solvers