AI Pollution

“The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I’ve been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it. Back then I called it the coming semantic apocalypse. Well, the semantic apocalypse is here, and you’re being affected by it, even if you don’t know it.” Link

Posted in Artificial Intelligence, Trends | Leave a comment

Are we solving the correct problem?

Deepak Mehta: “As problem-solvers, we have all been there. You find the perfect solution to a problem, only to realize that the problem you were trying to solve was different from the one you were presented with. Or worse yet, you discover that the solution you found can’t be used in production because of existing processes or system restrictions. What can we do to avoid these situations? Taking into account the perspectives of all stakeholders involved can help us identify relevant constraints and ensure that our proposed solutions are practical and feasible. So, before you rush to implement a solution, take the time to collaborate with all stakeholders and get a comprehensive understanding of the problem at hand.” Link

Posted in Decision Modeling | Leave a comment

Unlocking multimodal understanding across millions of tokens

Today Google announced Gemini 1.5 that supports “millions of tokens of multimodal input. The multimodal capabilities of the model means you can interact in sophisticated ways with entire books, very long document collections, codebases of hundreds of thousands of lines across hundreds of files, full movies, entire podcast series, and more”. Report Demo

Posted in LLM | Leave a comment

Learning Decision Rules with GPT

On Feb 26 Simon Vandevelde, a frequent presenter at DecisionCAMPs, will talk about how to combine learning and reasoning in AI. Register for this free webinar. Here is his abstract: “Operational decisions are an important part of knowledge-intensive organizations, as these are taken in a high volume on a daily basis. However, describing these decisions in a standardized format such as DMN is a time-consuming task, as various textual sources need to be analyzed. In this talk, we present the results of our experiments on an automated approach to generating decision tables from natural language based on the GPT-3 LLM. Through a total of 72 experiments over six problem descriptions, we evaluated GPT-3’s decision logic modeling and reasoning capabilities. While GPT-3 demonstrates promising abilities in extracting decision context and identifying relevant variables from natural language, further enhancements are needed to improve its decision table capabilities for efficient automation of DMN modeling.” Link

P.S. The Recording shows quite negative results of generating DMN tables using GPT.

Posted in Decision Intelligence, Decision Modeling, DMN, Gen AI, GPT-4, LLM, Machine Learning | Leave a comment

Removing Ambiguity in Business Rules

In his recent article “Being Unambiguous Beyond Reasonable Doubt in Expressing Rules” Ron Ross gave an example of the kind of ambiguity that policy interpreters, business analysts, and IT professionals deal with daily. It’s a sentence from the California 2014 Paid Sick Leave Policy: “Accrued paid sick leave shall carry over to the following year of employment and may be capped at 48 hours or 6 days.” Let’s look at the ambiguities:

Continue reading
Posted in Business Rules, Human-Machine Interaction, Natural Language Processing | Leave a comment

The seven levels of artificial intelligence by Warren B. Powell

Link

Posted in Artificial Intelligence, Business Analytics, Business Rules, Decision Optimization, Knowledge Representation, Machine Learning, Optimization | Leave a comment

Gartner’s Predicts 2024

David Pidsley, Decision Intelligence Advisor at Gartner: “We’ve just published our ‘Predicts 2024: How Artificial Intelligence Will Impact Analytics Users‘:

🔵 By 2025, 60% of ABI platforms will claim to enable decision intelligence, but only 10% will have a decision-centric UI to model and track decisions.

🔵 By 2025, 90% of current analytics content consumers will become content creators enabled by AI.

🔵 By 2025, 40% of ABI platform users will have circumvented governance processes by sharing analytic content created from spreadsheets loaded to a generative AI-enabled chatbot.

🔵 By 2027, 75% of new analytics content will be contextualized for intelligent applications through generative AI, enabling a composable connection between insights and actions.

🔵 By 2027, 50% of data analysts will be retrained as data scientists, and data scientists will shift to AI engineers. Link

Posted in Artificial Intelligence, Business Analytics | Leave a comment

How the AI Boom Went Bust in the late 1980s

This article in Communications of the ACM describes the fallout from an exploding bubble of hype triggered the real AI Winter in the late 1980s. It explains the Rise and Fall of Expert Systems in the context of the re-invention of the label AI. Link

Posted in Artificial Intelligence, Trends | Leave a comment

“There will be no programmers in 5 years”

Prof. Warren Powell: “Sorry, this is simply laughable. I would file this alongside Geoffrey Hinton’s prediction (circa 2016) that we will not need radiologists in 5 years (didn’t happen). Don’t these people ever learn? LLMs today certainly are useful to programmers, but only for filling in boilerplate code which can be learned from existing code. Software requires the creative guidance of a programmer to specify what task is being solved. Even with five more years of development, LLMs are never going to be able to guess what a piece of software needs to do, and my guess is that programming languages and tools will continue to evolve. My guess is that LLMs will make programmers more productive. You would think this might translate into fewer programmers, but I wouldn’t bet on this – I think we will use the same number of programmers to do more.” Link

Posted in LLM | Leave a comment

LLMs can have malicious “sleepers”

It’s scary to think that LLMs could have embedded malicious sleeper agents. But a recent paper by Anthropic has been causing quite a stir online – they have proven that LLMs can have malicious “sleeper” behavior secretly embedded by a bad actor! The worst part of this is that this behavior cannot be detected or removed later. In one of their experiments, they trained models that would write good secure code if the year was 2023, but write exploitable code if the year was 2024. Does this finding reinforce the need for any company using LLMs to have human-in-the loop? Link

Posted in LLM | Leave a comment