Gartner: “AI is not doing its job”

Speaking at the firm’s Data & Analytics Summit in Sydney, Australia, Gartner’s global chief of AI research Erick Brethenoux said: “AI is not doing its job today and should leave us alone.” Brethenoux said the current wave of AI hype is fueled in part by conflation of the terms “AI agent” and “generative AI” – and by fuzzy definitions of both. Link

Posted in Artificial Intelligence, Gen AI

“Decision-making is not prediction. It is structure.”

Adam DeJans Jr. posted on LinkedIn: We have never had more data, more compute, or more machine learning. But most systems still fail to make good decisions. Why? Most “AI” systems today forecast something and hand it off to a spreadsheet or a planner. That’s not intelligence. That’s a blind pass.
But decisions are made over time, not in isolation. And intelligence is not a static output; it is an evolving policy that learns and adapts.
Until we design systems that close the loop (information to decision to outcome to update), we will keep confusing modeling with thinking. The future of decision intelligence is not better AI. It is better structure.
Link
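
To make the closed loop concrete, here is a minimal Python sketch (our illustration with hypothetical numbers, not DeJans’s system): a one-parameter ordering policy that makes a decision, observes the outcome, and updates itself.

```python
import random

# Minimal closed loop: information -> decision -> outcome -> update.
# The "policy" is a single adaptive parameter, purely illustrative.
class OrderingPolicy:
    def __init__(self, adjustment: float = 0.0, step: float = 0.1):
        self.adjustment = adjustment   # learned correction to the forecast
        self.step = step               # learning rate

    def decide(self, forecast: float) -> float:
        # Decision: an order quantity, not a forecast handed off to a planner.
        return max(0.0, forecast + self.adjustment)

    def update(self, ordered: float, demand: float) -> None:
        # Close the loop: the observed outcome changes the future policy.
        self.adjustment += self.step * (demand - ordered)

policy = OrderingPolicy()
for _ in range(200):
    forecast = 100.0                      # information
    ordered = policy.decide(forecast)     # decision
    demand = random.gauss(120.0, 10.0)    # outcome: the forecast is biased low
    policy.update(ordered, demand)        # update
print(round(policy.adjustment, 1))        # settles near the +20 bias
```

The point is structural: the forecast never improves, yet the decisions do, because outcomes feed back into the policy instead of stopping at a spreadsheet.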

Posted in Artificial Intelligence, Decision Intelligence, Decision Making

Modeling Logic Isn’t for Everyone

An article under this title was posted today by Stefaan Lambrecht, a frequent presenter at DecisionCAMP: “Why can’t everyone on the business side just model decisions, cases, and processes using DMN, CMMN, and BPMN?”

Ah. If only it were that simple. You see, that’s like asking, “Why can’t everyone write a symphony?” Well… we all enjoy music. Some of us can play an instrument. Many of us hum along. But composing a full-blown symphony? That’s a métier. A craft. A skill honed over time. And so is modeling logic. Decision models, case management models, process models—this isn’t just drawing fancy diagrams. It’s engineering. With a splash of storytelling. Link

Posted in Art, Artificial Intelligence, Business Logic, Decision Intelligence

Collapse of Reasoning Models?

Apple’s ML scientists put the latest “reasoning” models, such as Claude, DeepSeek-R1, and o3-mini, to the test. They had these models solve classic puzzles: Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. Their findings “reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds.” This calls into question the reasoning capabilities of these systems, suggesting that instead of “reasoning,” they simply memorize patterns very well. Check out Apple’s original research paper, “The Illusion of Thinking“.
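
For context, each of these puzzles has a short exact algorithm. Here is the textbook Tower of Hanoi recursion in Python (standard code, not Apple’s), which is what the models were effectively asked to reproduce move by move as the disk count grows:

```python
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    # Textbook recursion: park n-1 disks, move the largest, unpark them.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)   # 7 moves; in general 2**n - 1
```

The optimal solution length grows as 2**n − 1, so adding disks scales the difficulty while the underlying rule stays trivial, which is what makes the models’ collapse beyond a complexity threshold so telling.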

Posted in Artificial Intelligence, Challenges, Reasoning

AI Yes-Men

Pieter van Schalkwyk posted on LinkedIn “When AI Agents Tell You What You Want to Hear: The Sycophancy Problem“. In particular, he says: “Modern AI models learn to maximize user satisfaction metrics. This training creates a fundamental bias toward telling people what they want to hear. When businesses deploy single AI systems, this presents manageable risks. The problem explodes when multiple AI agents collaborate. Each agent’s tendency to agree reinforces the others, creating false consensus. What looks like unanimous support often masks critical flaws that no agent dares to surface.
Consider a simple scenario: five AI agents evaluating a risky investment. If each agent has a 30% chance of providing agreeable rather than accurate analysis, the probability of getting genuine dissent drops to near zero. The math is unforgiving.”
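
To make that arithmetic concrete (our reading; the post does not show its calculation), here is a quick Python check of the five-agent scenario:

```python
p_agree = 0.30   # per-agent chance of agreeable rather than accurate analysis
n = 5

# If the agents were independent, genuine dissent would still be likely:
print((1 - p_agree) ** n)   # ~0.17: all five happen to give accurate analysis
print(1 - p_agree ** n)     # ~0.998: at least one agent dissents

# The "near zero" chance of dissent therefore rests on the post's key point:
# collaborating agents are not independent. Each agreeable answer raises the
# odds that the next agent echoes it, and correlated agreement kills dissent.
```

In other words, the danger is not the 30% figure by itself but the reinforcement loop among collaborating agents.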

Posted in Artificial Intelligence, Decision Making, Human-Machine Interaction

ML: Pros and Cons

A down-to-earth discussion about Machine Learning (ML) is happening on LinkedIn. After François Piednoel de Normandie explained why “ML can neither be safe nor secure“, Philippe Kahn wrote: “First off, you’re right: ML isn’t perfect (yet). But neither is my coffee maker, and yet I still trust it not to flood my kitchen—most days. What keeps both from going rogue? Guardrails! In ML, these aren’t just buzzwords; they’re built with everything from rule-based checks to AI agents that monitor, validate, and correct outputs before they reach the wild. You can think of them as the seatbelts and airbags of the AI world.

And about those “very large datasets”—there’s real wisdom there. Big data helps models generalize, spot edge cases, and avoid overfitting. It’s like training a chef with every recipe in the world, not just the ones from their mom’s cookbook. Sure, sometimes the soufflé still collapses, but with robust safeguards, you’re much less likely to serve raw eggs. There are frameworks and protocols designed to catch anomalies—like the ML-On-Rails protocol, which flags weird inputs before they cause trouble (so your robot doesn’t recommend yoga poses during a sensor meltdown). And let’s not forget, even human brains make mistakes—sometimes with coffee makers and sometimes with math.”
Link
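
As a sketch of what such guardrails can look like in code (our illustration with hypothetical names and thresholds, not the ML-On-Rails protocol itself), a rule-based wrapper can validate both inputs and outputs before a prediction “reaches the wild”:

```python
def predict_temperature(sensor_reading: float) -> float:
    """Stand-in for an ML model's prediction (hypothetical placeholder)."""
    return sensor_reading * 1.02

def guarded_predict(sensor_reading: float) -> float:
    # Input guardrail: flag weird inputs before they cause trouble.
    if not -50.0 <= sensor_reading <= 150.0:
        raise ValueError(f"input out of expected range: {sensor_reading}")
    prediction = predict_temperature(sensor_reading)
    # Output guardrail: rule-based sanity check before the result is used.
    if not -60.0 <= prediction <= 160.0:
        raise ValueError(f"prediction failed validation: {prediction}")
    return prediction

print(guarded_predict(25.0))   # passes both checks
```

Real deployments layer more of these (schema checks, drift monitors, human review), but the seatbelt idea is the same: the model’s answer is never trusted bare.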

Posted in Machine Learning

It’s better to be approximately right than precisely wrong

This statement is from a post by Adam DeJans Jr.: “I’ve worked with many companies (from logistics to manufacturing) and one pattern stands out. There’s often too much emphasis on improving forecast accuracy, and too little thought about what really matters: the decisions being made and the cost of being wrong. Instead of asking how far off the forecast was, I ask how much it cost me. I want a metric that reflects business realities. If I miss high on a forecast, what does it do to my bottom line? If I miss low, what opportunities did I lose? The right loss function captures that asymmetry.

And perhaps most importantly, I never assume my forecasts are correct. Uncertainty is always part of the problem. Ignoring it doesn’t make it go away. Every planning system should explicitly account for uncertainty, and every good decision process should be built to handle it.”
Link
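
One way to turn “how much did it cost me” into a metric is an asymmetric loss. Here is a minimal Python sketch with hypothetical per-unit costs (not from the post):

```python
def asymmetric_cost(forecast: float, actual: float,
                    over_cost: float = 1.0, under_cost: float = 4.0) -> float:
    # Missing high ties up inventory; missing low loses sales.
    # The 1:4 cost ratio is illustrative; the asymmetry is the point.
    error = forecast - actual
    if error > 0:
        return over_cost * error       # missed high: excess stock
    return under_cost * (-error)       # missed low: lost opportunity

print(asymmetric_cost(120, 100))   # 20.0: missed high by 20
print(asymmetric_cost(80, 100))    # 80.0: missing low by 20 costs 4x more
```

Evaluating forecasts with this kind of loss, rather than symmetric MAE or MAPE, ranks models by business impact instead of raw accuracy.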

Posted in Forecasting, Uncertainty

Building Business Capability conference: June 9-12

The Building Business Capability (BBC) conference will take place June 9-12, 2025, in Phoenix, AZ. The conference focuses on leadership skills, digital transformation, and business methodologies such as Business Analysis and Business Architecture. AI from the Business Analyst’s perspective will be among the most popular topics. Website

Posted in Events

The glorification of mediocrity

Stéphane Dalbera posted on LinkedIn: There’s a troubling narrative spreading across LinkedIn:
“To help beginners, we must dumb everything down.”

Strip down the language.
Avoid abstractions.
Stick to the bare minimum.
Pretend the standard library barely exists.

Let’s call this what it is: The glorification of mediocrity.
Link

Posted in Trends

Can AI Agents Replace Professional Engineering Intelligence?

Pieter van Schalkwyk: “Most current AI agents are sophisticated chatbots with language models. They excel at content creation but cannot make professional decisions. This is like asking a talented writer to perform brain surgery.” “When Microsoft Copilot writes a maintenance report, it creates text without understanding stress cycles or failure modes. When Salesforce agents handle customer complaints, they optimize conversations without knowing the difference between service issues and safety problems. These systems look competent but have no real substance.” Link

Posted in Artificial Intelligence, Human Intelligence