Fun Fact

Fermat’s Library shares interesting facts on LinkedIn. Here is one of them. In the 1780s a German schoolteacher gave his 8-year-olds a problem to keep them busy. He asked them to add up all the numbers from 1 to 100: 1 + 2 + 3 + … + 98 + 99 + 100 = ?

One student came up with the answer in just two minutes. The boy told the teacher that he had simply “folded” the numbers so that 1 joins with 100, 2 joins with 99… Each pair of numbers added up to 101. He counted 50 such pairs, so he multiplied 50 by 101, which gave him the answer: 5050.

That boy’s name was Carl Friedrich Gauss.
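
The same pairing argument gives the general closed form: the numbers 1 through n fold into n/2 pairs that each total n + 1, so the sum is n(n + 1)/2. A minimal Python check of that reasoning (the function name below is just for illustration):

```python
def gauss_sum(n: int) -> int:
    """Closed form for 1 + 2 + ... + n via the pairing argument."""
    return n * (n + 1) // 2

# Gauss's case: 50 pairs, each adding up to 101.
assert gauss_sum(100) == 50 * 101 == 5050

# The closed form agrees with brute-force addition for other n as well.
assert all(gauss_sum(n) == sum(range(1, n + 1)) for n in range(1, 1000))
```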

Posted in Human Intelligence

Marvin Minsky

Stéphane Dalbera posts brief biographies of the most influential figures in the development of AI and computer science in general. Here are quotes from his latest post about Marvin Minsky with comments by Philippe Kahn: “Minsky helped develop early AI systems for solving mathematical problems, playing chess, and simulating visual perception. He focused on creating machines that could replicate human thought but remained skeptical of the idea that machines could quickly achieve true intelligence… In The Society of Mind (1986), Minsky expanded his theory that intelligence emerges from the interactions of simple processes, or “agents,” each responsible for a different aspect of thought.

In the 1980s and 1990s, Minsky remained a leading figure in AI, advocating for interdisciplinary approaches to understanding intelligence.

Despite early successes, Minsky was critical of AI’s progress and skeptical of claims that machines were close to true intelligence. He recognized the challenges of creating machines capable of understanding and reasoning like humans.” Link

Posted in Artificial Intelligence, Most Influential

AI only sees …

Posted in Decision Modeling

Mike Gualtieri about AI Decisioning Agents

Mike Gualtieri wrote: “All AI Agents need a decisioning capability. Having said that, the breadth and depth of decision intelligence technologies is so great that specialized AI Decisioning Agents can be much more sophisticated and subject to more direct human governance. Why? 1) Take a decisioning technique like mixed integer programming. That is super valuable. Sure, an Agent can call a “tool” to do that, but it also requires deep integration and configuration. 2) AI Decisioning platforms also enable human decision logic, also known as rules. They provide pretty nice tools that business experts can use to combine these rules. And there is also decision optimization, champion/challenger, A/B testing, simulation and more.

It is true that general AI agents could gain these capabilities, but I think there is a strong case for AI Decision Agents that in turn would be used as a “tool” by other AI Agents because of the human tooling to configure, change, and monitor those decisions.”
Link
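
As a rough sketch of what an AI Decisioning Agent used as a “tool” could look like, the snippet below wraps hypothetical, human-maintained decision rules behind a single function that a general agent would call. The rule names and thresholds are made up for illustration and are not any vendor’s API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Applicant:
    credit_score: int
    debt_to_income: float

# Hypothetical business rules, maintained and governed by human experts
# rather than by the calling agent. Each rule returns a decision or None.
Rule = Callable[[Applicant], Optional[str]]

RULES: list[Rule] = [
    lambda a: "decline" if a.credit_score < 580 else None,
    lambda a: "decline" if a.debt_to_income > 0.45 else None,
    lambda a: "approve" if a.credit_score >= 700 else None,
]

def decisioning_tool(applicant: Applicant) -> str:
    """The narrow 'tool' interface a general AI Agent would call."""
    for rule in RULES:
        outcome = rule(applicant)
        if outcome is not None:
            return outcome
    return "refer"  # no rule fired: route to a human reviewer

print(decisioning_tool(Applicant(credit_score=720, debt_to_income=0.30)))  # approve
```

The same narrow interface could just as well hide a mixed integer programming solver or a champion/challenger setup; the calling agent only sees the decision, while the logic stays under human governance.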

Posted in Agents, Decision Intelligence

Gartner To Publish New Magic Quadrant for Decision Intelligence Platforms

David Pidsley from Gartner announced today that the firm is ready to publish a new Magic Quadrant for Decision Intelligence (DI) Platforms. Here is the list of featured vendors that offer a decision intelligence platform:

4Paradigm 第四范式, ACTICO, Aera Technology, Airin, Inc., Cogility Software, Corridor Platforms, CRIF, Decisions, Diwo, Cloverpop, Elemental Cognition, Faculty, FICO, FlexRule, IBM, InRule, Merlynn Intelligence Technologies, o9 Solutions, Inc., OpenRules, Inc., Palantir Technologies, paretos, Quantexa, Rainbird Technologies, Rulex, SAS, Sparkling Logic, Inc, Spindox, Trisotech, Bamboo Rose, XpertRule Software. Link

Posted in Decision Intelligence, Products, Vendors

“All Business Logic Will Go To AI Agents”

Posted in Agents, Artificial Intelligence, Business Logic

LLMs in the Research Space

Prof. Arvind Narayanan started an interesting discussion on LinkedIn: “AI is already accelerating scientific production… Producing papers, for the most part, is a game researchers must play for status and career progress. Its value is relative. It’s like thinking that AI is going to help traders make a lot more money. If everyone has access to the same capabilities, there is no alpha. In every scientific field I’m familiar with, the amount of published stuff exceeds the community’s collective bandwidth to absorb and build upon ideas by a factor of 100x or more. Inevitably, the vast majority of what’s published makes zero impact. Yet we pretend that publication itself has some value. It doesn’t.” Link. See also

Posted in LLM, Science

Is AI the new UI?

“Salesforce made its name by offering a great UI atop a database. But if AI really is the next generation of software interaction, we might see something far more radical—something that shifts focus away from the user meticulously clicking around, to the system doing most of the work on its own.” Link

Posted in Artificial Intelligence, Human-Machine Interaction

Continuing Education of Decision-Making Systems

It is interesting to look at the latest Decision Intelligence trends from a 2015 perspective: “You don’t program a system, you educate it. Rather than coding into the system, you merely provide a large set of training examples.”
Business people will continue to enhance and manage their decision model by doing the following:
– Adding more business concepts and decision variables
– Covering more complex relationships between decision variables inside business rules
– Defining and executing more complex test cases.

This way, decision management becomes the continuing education of an already working decisioning system!
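
A minimal sketch of that loop, assuming a deliberately simplified decision model (the variable names, rules, and test cases below are hypothetical): the model is data that business users keep extending, and every change is re-validated against the accumulated test cases.

```python
# Hypothetical decision model: business users "educate" it by adding
# decision variables, rules, and test cases; the engine itself never changes.
model = {
    "variables": ["age", "income"],
    "rules": [
        {"if": lambda c: c["age"] < 18, "then": {"eligible": False}},
        {"if": lambda c: c["income"] > 50_000, "then": {"eligible": True}},
    ],
    "tests": [
        ({"age": 17, "income": 60_000}, {"eligible": False}),
        ({"age": 40, "income": 80_000}, {"eligible": True}),
    ],
}

def decide(case: dict) -> dict:
    """Apply the rules in order; the first rule that matches wins."""
    for rule in model["rules"]:
        if rule["if"](case):
            return rule["then"]
    return {"eligible": None}  # no rule fired: the model needs more "education"

# Continuing education: add a variable, a rule, and a test case over time.
model["variables"].append("credit_score")
model["rules"].insert(0, {"if": lambda c: c.get("credit_score", 700) < 500,
                          "then": {"eligible": False}})
model["tests"].append(({"age": 30, "income": 20_000, "credit_score": 450},
                       {"eligible": False}))

# Every change is validated by re-running all accumulated test cases.
assert all(decide(case) == expected for case, expected in model["tests"])
```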

Posted in Decision Intelligence

Pavlov’s dog and LLM

Martin Milani posted: “Human intelligence created language to express thoughts—but language itself does not create the thoughts it expresses. Perceptual learning (as seen in neural networks and LLMs) represents a lower-order form of intelligence, rooted in simple pattern recognition, correlation, and basic classification. It’s how both humans and animals learn to detect patterns—like Pavlov’s dog correlating the sound of a bell with the expectation of food, a classic example of conditioning or “training” driven by simple perceptual cues. This is the foundation of how LLMs operate—they identify statistical patterns in data and predict likely outcomes based on past examples. While impressive in scope, this process is not true thinking, and certainly not reasoning. LLMs do not “reason” in the human sense; they simulate and mimic reasoning by retrieving and recombining memorized patterns that resemble logical processes.” Link
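
To make the “identify statistical patterns and predict likely outcomes” point concrete, here is a toy sketch in the spirit of that claim: a bigram counter that predicts the most frequently observed next word. It is vastly simpler than a real LLM, and the tiny corpus is invented for the Pavlov analogy, but the underlying move (correlating past patterns to predict a continuation) is the one Milani describes.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs learn far richer
# statistics, but the principle of pattern correlation and prediction is the same.
corpus = "the bell rings the dog salivates the bell rings the dog drools".split()

follower_counts: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("bell"))   # rings
print(predict_next("rings"))  # the
```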

Posted in LLM