Diagram-based Engineering

Vincent Lextrait started an interesting discussion on LinkedIn about why “diagram-based engineering”, the approach to which Low Code/No Code belongs, is tried every 20 years and always fails: “Just ask Grady Booch who co-invented UML. He’ll tell you that the failure is due to the fact that diagrams are inherently imprecise. Granted they are more precise than natural language, but they still fall short to capture the complexity of business applications. Nobody will try as hard as Grady. It’s just that the industry has forgotten now, it was 20 years ago (the same idea with flow charts 20 years before failed too). Oh you can deliver stuff, but it’ll be simple, not future-proof and you’ll enjoy short happiness. And bad performance. To reach the right level of finesse (and exceed it), you need text-based input: code. This is why math people invented their own language, because natural language was an obstacle to progress.” Link

Posted in Diagramming, LowCode/NoCode | Leave a comment

Decision-making under uncertainty

Making decisions under uncertainty is hard. The best course of action can be very counter-intuitive. Meinolf Sellmann provided a good example that illustrates this. His article “A Tale of Two Coffees” showcases a tool that cuts through the uncertainty, even when pursuing multiple objectives at the same time. Link

Posted in Decision Making, Uncertainty | Leave a comment

Smarter Decisions for a Better World

Operations Research and the Management Sciences (OR/MS) is probably the field with the most impressive real-world success stories in decision optimization. However, being perceived as “too scientific”, OR/MS has remained a well-kept secret for many years. The Institute for Operations Research and the Management Sciences, known as “INFORMS”, has decided to rebrand. While some try to take advantage of the booming “AI”, INFORMS introduced a new tagline: “Smarter decisions for a better world”. It presents Operations Research as the scientific process of transforming data into insights for making better decisions. The second half of the tagline, “for a better world”, alludes to the earlier slogan: “Saving lives. Saving money. Solving problems.” Read more

Continue reading
Posted in Decision Making, Decision Optimization | Leave a comment

GPT for Criminals

Criminals are known to be quick to take advantage of new technology. It’s only natural that hackers have already started to apply variations of Generative AI tools. For instance, WormGPT is a Generative AI tool used by cybercriminals to launch business email compromise attacks. It is described as “similar to ChatGPT but has no ethical boundaries or limitations.” Read more and more

Posted in Artificial Intelligence, ChatGPT, Security | Leave a comment

Challenge “Organ Transplants”: one more solution and ChatGPT

Our Mar-2019 Challenge “Organ Transplants” continues to generate interest among DM practitioners. Jack Jansonius just submitted a new solution based on the integrated use of decision tables and SQL. We wonder if somebody will try to produce a working solution for this challenge using a Generative AI tool (see below what ChatGPT has offered). As always, our challenges do not have expiration dates, and more solutions to old challenges are always welcome.
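To illustrate the general pattern the post mentions (this is a minimal sketch, not Jack Jansonius’s solution, and all table and column names here are hypothetical): decision logic can be stored as rows of a decision table in SQL and applied with a join.

```python
# Sketch: a compatibility decision table in SQL, applied via a join.
# All tables, columns, and data are hypothetical illustrations.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patients(name TEXT, blood_type TEXT, urgency INTEGER);
CREATE TABLE organs(organ_id INTEGER, blood_type TEXT);
-- Decision table: each row is one rule (donor type -> eligible recipient type)
CREATE TABLE compatibility(donor_type TEXT, recipient_type TEXT);

INSERT INTO patients VALUES ('Ann', 'A', 3), ('Bob', 'O', 5);
INSERT INTO organs VALUES (1, 'O');
INSERT INTO compatibility VALUES
    ('O','O'), ('O','A'), ('A','A'), ('B','B'), ('AB','AB');
""")

# Apply the rules: join organs to patients through the decision table,
# listing higher-urgency patients first.
rows = con.execute("""
    SELECT o.organ_id, p.name
    FROM organs o
    JOIN compatibility c ON c.donor_type = o.blood_type
    JOIN patients p      ON p.blood_type = c.recipient_type
    ORDER BY p.urgency DESC
""").fetchall()
print(rows)  # prints [(1, 'Bob'), (1, 'Ann')]
```

Keeping the rules in a table (rather than hard-coding them in queries) is what makes the decision-table approach maintainable: business users can change the rules by editing data.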

Continue reading
Posted in Challenges, Database, Decision Modeling | Leave a comment

Objective-Driven AI

Objective-Driven AI refers to the idea, proposed by Yann LeCun, of creating AI systems that are explicitly designed and constrained to optimize particular objectives. The key aspect is an architecture with distinct modules: perception, world model, action planning, and cost functions. Today LeCun tweeted: “Instead of scaling current systems 100x, which will go nowhere, we need to make these Objective-Driven AI architectures work.” Watch his MIT talk “Objective-Driven AI: towards AI systems that can learn, remember, plan, reason, have common sense, yet are steerable and safe”. Read also “The Future of AI is Goal-Oriented”.

Continue reading
Posted in Artificial Intelligence, Goal-Oriented | Leave a comment

On the Road to Universal Learners

Peter Norvig tweeted on Oct. 10: “Given example inputs and outputs of any function that can be computed by any computer, a neural net can learn to approximate that function.” He refers to the paper “Auto-Regressive Next-Token Predictors are Universal Learners”: “We demonstrate that even simple models such as linear next-token predictors, trained on Chain-of-Thought (CoT) data, can approximate any function efficiently computed by a Turing machine. We introduce a new complexity measure — length complexity — which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity. Our results demonstrate that the power of language models can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture.” Link
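Norvig’s claim about learning from input/output examples can be seen in miniature with a tiny network (this is an illustrative sketch, not from the paper; the target function, network size, and learning rate are arbitrary choices):

```python
# Sketch: fit a small one-hidden-layer network to examples of a target
# function, here f(x) = x^2 on [-1, 1], by full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Training data: example inputs and outputs of the target function
X = np.linspace(-1, 1, 64).reshape(-1, 1)
Y = X ** 2

# One hidden layer of 16 tanh units
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - Y
    # Backward pass: mean-squared-error gradients
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    # Gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(f"final MSE: {mse:.5f}")
```

The paper’s stronger point is that even linear next-token predictors reach universality when trained on Chain-of-Thought data, where intermediate tokens carry the computation; the sketch above only illustrates the basic function-approximation claim.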

Posted in LLM, Machine Learning | Leave a comment

Peter Norvig: AGI Is Already Here

Peter Norvig wrote today at LinkedIn: “AGI is not solved, but it has arrived. Like how ENIAC was the first general purpose computer in 1945 (but computing was not “solved”) or like how powered flight arrived at Kitty Hawk in 1903 (but aviation was not “solved”)“. Read his article “Artificial General Intelligence Is Already Here“.

Continue reading
Posted in Artificial Intelligence, LLM | Leave a comment

The importance of handwriting

The Economist published an interesting article “The importance of handwriting is becoming better understood”: “In modern life, writing means typing. Writing by hand has become an endangered species. But one series of studies has found a big advantage in note-taking by hand. The very inefficiency of the medium is its advantage: it seems to force writers to think and compress information as they jot, rather than mindlessly transcribing verbatim.” Link Read also “Is it possible to write using speech-to-text software?”

Posted in Human-Machine Interaction, Trends | Leave a comment

Explainable Constraint Solving

Explainable constraint solving is a sub-field of explainable AI (XAI) concerned with explaining constraint (optimization) problems. Although constraint models are explicit (they are written down as individual constraints that need to be satisfied), the solutions to such models can be non-trivial to understand. This hands-on tutorial demonstrates the types of questions a user can have about (non-)solutions and reviews the computational tools available today to answer them. It covers classical methods as well as more recent advances in the field, such as step-wise explanations, constraint relaxation methods, and counterfactual solutions. Link
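One classical technique in this area is explaining infeasibility by extracting a minimal unsatisfiable subset (MUS) of constraints. A minimal sketch of the deletion-based MUS algorithm, using brute-force satisfiability checking over a tiny two-variable finite domain (the toy model and constraint names are invented for illustration):

```python
# Sketch: deletion-based MUS extraction on a toy constraint model.
# Satisfiability is checked by exhaustive enumeration over a small domain.
from itertools import product

def satisfiable(constraints, domain=range(5)):
    """True if some assignment (x, y) satisfies every constraint."""
    return any(all(check(x, y) for _, check in constraints)
               for x, y in product(domain, repeat=2))

def deletion_mus(constraints):
    """Shrink an unsatisfiable constraint set to a minimal core:
    try dropping each constraint; keep it dropped only if the
    remainder is still unsatisfiable."""
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not satisfiable(rest):
            core = rest
    return core

# An over-constrained toy model: no (x, y) satisfies all three.
constraints = [
    ("x < y",      lambda x, y: x < y),
    ("y < x",      lambda x, y: y < x),   # conflicts with "x < y"
    ("x + y == 4", lambda x, y: x + y == 4),
]

assert not satisfiable(constraints)
mus = deletion_mus(constraints)
print("conflicting core:", [name for name, _ in mus])
# prints: conflicting core: ['x < y', 'y < x']
```

The MUS is the explanation: pointing the user at the two mutually exclusive constraints is far more useful than merely reporting “no solution”. Real solvers extract such cores far more efficiently than this enumeration-based sketch.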

Posted in Constraint Programming, Explanations | Leave a comment