DecisionCAMP-2025 Program has been published!

This year we received an overwhelming number of submissions from Decision Intelligence professionals, most of them strong and intriguing. The DecisionCAMP Organization Committee accepted 22 submissions, and to accommodate them we decided to add an extra day to our Camp. The event will now run for 4 days, on September 22, 23, 24, and 25. Here is the preliminary Program:

https://decisioncamp2025.wordpress.com/program/

Presentation slides will be published in September, before the start of DecisionCAMP. The event will run 100% online. Please register for FREE at https://decisioncamp2025.wordpress.com/registration/ to receive personal invitations and all updates. DecisionCAMP-2025 promises to be a very important event for everyone interested in the practical application of Decision Intelligence technologies.

Posted in Decision Modeling | Leave a comment

“You won’t lose your job…”

From Stéphane Dalbera‘s post:

Posted in Misc | Leave a comment

Anthropic Copyright Ruling May Spur More AI Licensing Deals

The first federal court decision on the fairness of using copyrighted material to train generative artificial intelligence is a mixed outcome for tech companies and content creators, one that could prompt both parties to seek coexistence, according to attorneys. The judge concluded that while the technology is “spectacularly” transformative, using pirated material is inexcusable. Link

Posted in Legal, LLM | 1 Comment

Rodney Brooks’s 1988 article “AI: great expectations”

He just posted on LinkedIn: “‘Every so often a new AI development comes along and great excitement ensues as people stumble over themselves convinced that the key to intelligence has been unlocked.’ Me writing about AI overhype 37+ years ago. Old dogs, old tricks. Yes, I did look like that.” https://lnkd.in/gXMr5ERn

Posted in Artificial Intelligence | Leave a comment

Gartner: “AI is not doing its job”

Speaking at the firm’s Data & Analytics Summit in Sydney, Australia, Gartner’s global chief of AI research Erick Brethenoux said: “AI is not doing its job today and should leave us alone.” Brethenoux said the current wave of AI hype is fueled in part by conflation of the terms “AI agent” and “generative AI” – and by fuzzy definitions for both. Link

Posted in Artificial Intelligence, Gen AI | Leave a comment

“Decision-making is not prediction. It is structure.”

Adam DeJans Jr. posted on LinkedIn: “We have never had more data, more compute, or more machine learning. But most systems still fail to make good decisions. Why? Most “AI” systems today forecast something and hand it off to a spreadsheet or a planner. That’s not intelligence. That’s a blind pass.
But decisions are made over time, not in isolation. And intelligence is not a static output; it is an evolving policy that learns and adapts.
Until we design systems that close the loop (information to decision to outcome to update), we will keep confusing modeling with thinking. The future of decision intelligence is not better AI. It is better structure.”
Link
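
The loop DeJans describes (information to decision to outcome to update) can be sketched in a few lines. The environment, the one-parameter policy, and the update rule below are our own illustrative assumptions, not his implementation:

```python
import random

def run_loop(steps=100, lr=0.1, seed=0):
    """Close the loop: observe, decide, see the outcome, update the policy."""
    rng = random.Random(seed)
    threshold = 0.5                      # the policy: a single learned parameter
    for _ in range(steps):
        signal = rng.random()            # information
        act = signal > threshold         # decision
        good = act == (signal > 0.6)     # outcome: the environment's true cutoff is 0.6
        if not good:                     # update: nudge the policy toward the signal
            threshold += lr * (signal - threshold)
    return threshold

print(round(run_loop(steps=2000), 2))    # approaches the true cutoff of 0.6
```

The point of the sketch is structural: a system that only forecast `signal` and handed it off would never learn the cutoff; the update step is what turns a static output into an evolving policy.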

Posted in Artificial Intelligence, Decision Intelligence, Decision Making | Leave a comment

Modeling Logic Isn’t for Everyone

An article with this title was posted today by Stefaan Lambrecht, a frequent presenter at DecisionCAMP: “Why can’t everyone on the business side just model decisions, cases, and processes using DMN, CMMN, and BPMN?”

Ah. If only it were that simple. You see, that’s like asking, “Why can’t everyone write a symphony?” Well… we all enjoy music. Some of us can play an instrument. Many of us hum along. But composing a full-blown symphony? That’s a métier. A craft. A skill honed over time. And so is modeling logic. Decision models, case management models, process models—this isn’t just drawing fancy diagrams. It’s engineering. With a splash of storytelling. Link

Posted in Art, Artificial Intelligence, Business Logic, Decision Intelligence | Leave a comment

Collapse of Reasoning Models?

Apple’s ML scientists put the latest “reasoning” models, such as Claude, DeepSeek-R1, and o3-mini, to the test. They made these models solve classic puzzles: Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. Their findings “reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds.” This calls into question the reasoning capabilities of these systems, suggesting that instead of “reasoning,” they simply memorize patterns very well. Check out Apple’s original research paper, “The Illusion of Thinking”.
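
For context on why complexity thresholds bite: Tower of Hanoi, one of the study’s puzzles, has an optimal solution of 2^n − 1 moves, so the length of a correct solution trace grows exponentially with the number of disks. A minimal sketch of the puzzle (our own illustration, not Apple’s test harness):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move sequence for n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # park n-1 disks on the spare peg
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)  # restack the n-1 disks on top
    return moves

print(len(hanoi(3)))   # 7 moves = 2**3 - 1
print(len(hanoi(10)))  # 1023 moves = 2**10 - 1
```

A model that has memorized short Hanoi transcripts can look fluent at n = 3 and still fall apart long before n = 10, which is consistent with the paper’s complexity-threshold finding.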

Posted in Artificial Intelligence, Challenges, Reasoning | Leave a comment

AI Yes-Men

Pieter van Schalkwyk posted on LinkedIn “When AI Agents Tell You What You Want to Hear: The Sycophancy Problem”. In particular, he says: “Modern AI models learn to maximize user satisfaction metrics. This training creates a fundamental bias toward telling people what they want to hear. When businesses deploy single AI systems, this presents manageable risks. The problem explodes when multiple AI agents collaborate. Each agent’s tendency to agree reinforces the others, creating false consensus. What looks like unanimous support often masks critical flaws that no agent dares to surface.
Consider a simple scenario: five AI agents evaluating a risky investment. If each agent has a 30% chance of providing agreeable rather than accurate analysis, the probability of getting genuine dissent drops to near zero. The math is unforgiving.”
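
The quoted scenario can be checked in a couple of lines. Assuming the five agents act independently (an assumption the quote leaves implicit), the interesting quantities are the chance that all five stay accurate and the chance that all five are agreeable:

```python
p_agreeable, n = 0.30, 5                 # figures from the quoted scenario

all_accurate = (1 - p_agreeable) ** n    # every agent gives an honest analysis
all_agreeable = p_agreeable ** n         # unanimous false consensus

print(round(all_accurate, 3))   # 0.168
print(round(all_agreeable, 5))  # 0.00243
```

How close “genuine dissent” gets to zero depends on how agreement compounds once agents can see each other’s answers; the independent-agent figures above are only a baseline.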

Posted in Artificial Intelligence, Decision Making, Human-Machine Interaction | Leave a comment

ML: Pros and Cons

A down-to-earth discussion about Machine Learning (ML) is happening on LinkedIn. After François Piednoel de Normandie explained why “ML can neither be safe nor secure”, Philippe Kahn wrote: “First off, you’re right: ML isn’t perfect (yet). But neither is my coffee maker, and yet I still trust it not to flood my kitchen—most days. What keeps both from going rogue? Guardrails! In ML, these aren’t just buzzwords; they’re built with everything from rule-based checks to AI agents that monitor, validate, and correct outputs before they reach the wild. You can think of them as the seatbelts and airbags of the AI world.

And about those “very large datasets”—there’s real wisdom there. Big data helps models generalize, spot edge cases, and avoid overfitting. It’s like training a chef with every recipe in the world, not just the ones from their mom’s cookbook. Sure, sometimes the soufflé still collapses, but with robust safeguards, you’re much less likely to serve raw eggs. There are frameworks and protocols designed to catch anomalies—like the ML-On-Rails protocol, which flags weird inputs before they cause trouble (so your robot doesn’t recommend yoga poses during a sensor meltdown). And let’s not forget, even human brains make mistakes—sometimes with coffee makers and sometimes with math.”
Link
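
The “rule-based checks” Kahn mentions are easy to picture. Here is a minimal sketch of a guardrail that validates a model’s output before it reaches the wild (our own illustration; ML-On-Rails itself is a full protocol with far more machinery):

```python
import math

def guardrail(prediction, lo=0.0, hi=1.0, fallback=None):
    """Release a prediction only if it passes simple sanity checks;
    otherwise return a safe fallback value."""
    if isinstance(prediction, bool) or not isinstance(prediction, (int, float)):
        return fallback          # wrong type: reject
    if isinstance(prediction, float) and math.isnan(prediction):
        return fallback          # NaN from a sensor meltdown: reject
    if not lo <= prediction <= hi:
        return fallback          # outside the expected range: reject
    return prediction            # looks sane: let it through

print(guardrail(0.42))           # 0.42  (a valid score passes)
print(guardrail(7.5))            # None  (out of range is caught)
print(guardrail(float("nan")))   # None  (NaN is caught)
```

The seatbelt analogy fits: the check does not make the model better, it just bounds the damage when the model is wrong.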

Posted in Machine Learning | Leave a comment