LLMs in the Research Space

An interesting discussion started on LinkedIn by Prof. Arvind Narayanan: “AI is already accelerating scientific production… Producing papers, for the most part, is a game researchers must play for status and career progress. Its value is relative. It’s like thinking that AI is going to help traders make a lot more money. If everyone has access to the same capabilities, there is no alpha. In every scientific field I’m familiar with, the amount of published stuff exceeds the community’s collective bandwidth to absorb and build upon ideas by a factor of 100x or more. Inevitably, the vast majority of what’s published makes zero impact. Yet we pretend that publication itself has some value. It doesn’t.” Link See also

Posted in LLM, Science | Leave a comment

Is AI the new UI?

“Salesforce made its name by offering a great UI atop a database. But if AI really is the next generation of software interaction, we might see something far more radical—something that shifts focus away from the user meticulously clicking around, to the system doing most of the work on its own.” Link

Posted in Artificial Intelligence, Human-Machine Interaction | Leave a comment

Continuing Education of Decision-Making Systems

It is interesting to look at the latest Decision Intelligence trends from a 2015 perspective: “You don’t program a system, you educate it. Rather than coding into the system, you merely provide a large set of training examples.”
Business people will continue to enhance and manage their decision models by doing the following:
– Adding more business concepts and decision variables
– Covering more complex relationships between decision variables inside business rules
– Defining and executing more complex test cases

This way, decision management becomes the continuing education of an already working decisioning system!
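The enhancement loop above can be sketched in code. The following is a minimal, hypothetical decision model (all names and thresholds are invented for illustration): business rules are data over decision variables, and test cases are executed against the model as it grows.

```python
# A hypothetical sketch of a decision model that is "educated" over time:
# decision variables, business rules, and test cases are all data.

def approve_loan(applicant: dict) -> str:
    """A toy loan-approval decision model (illustrative names only)."""
    rules = [
        # (condition, decision) - ordered business rules over decision variables
        (lambda a: a["credit_score"] < 600, "Decline"),
        (lambda a: a["debt_ratio"] > 0.45, "Refer to underwriter"),
        (lambda a: a["credit_score"] >= 700 and a["debt_ratio"] <= 0.35, "Approve"),
    ]
    for condition, decision in rules:
        if condition(applicant):
            return decision
    return "Refer to underwriter"  # default when no rule fires

# "Defining and executing more complex test cases":
test_cases = [
    ({"credit_score": 580, "debt_ratio": 0.30}, "Decline"),
    ({"credit_score": 720, "debt_ratio": 0.30}, "Approve"),
    ({"credit_score": 650, "debt_ratio": 0.50}, "Refer to underwriter"),
]
for applicant, expected in test_cases:
    assert approve_loan(applicant) == expected
```

Enhancing the model then means appending to `rules` and `test_cases` rather than rewriting the program, which is the sense in which the system is educated rather than reprogrammed.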

Posted in Decision Intelligence | Leave a comment

Pavlov’s dog and LLM

Martin Milani posted: “Human intelligence created language to express thoughts—but language itself does not create the thoughts it expresses. Perceptual learning (as seen in neural networks and LLMs) represents a lower-order form of intelligence, rooted in simple pattern recognition, correlation, and basic classification. It’s how both humans and animals learn to detect patterns—like Pavlov’s dog correlating the sound of a bell with the expectation of food, a classic example of conditioning or “training” driven by simple perceptual cues. This is the foundation of how LLMs operate—they identify statistical patterns in data and predict likely outcomes based on past examples. While impressive in scope, this process is not true thinking, and certainly not reasoning. LLMs do not “reason” in the human sense; they simulate and mimic reasoning by retrieving and recombining memorized patterns that resemble logical processes.” Link
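The mechanism Milani describes — predicting likely outcomes from statistical patterns in past examples — can be illustrated with a toy bigram model (the event names below are invented for illustration, not taken from his post):

```python
# A toy illustration of prediction from statistical patterns: like Pavlov's
# dog associating the bell with food, the model predicts whatever most often
# followed an event in its training examples.
from collections import Counter, defaultdict

events = ["bell", "food", "bell", "food", "bell", "food", "bell", "play"]

# Count which event follows which in the training sequence.
following = defaultdict(Counter)
for current, nxt in zip(events, events[1:]):
    following[current][nxt] += 1

def predict(event: str) -> str:
    """Return the most frequent follower seen in past examples."""
    return following[event].most_common(1)[0][0]

print(predict("bell"))  # 'food'
```

The model has no notion of why food follows the bell; it only reproduces the correlation, which is the point of the analogy.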

Posted in LLM | Leave a comment

The U.S. Copyright Office has spoken!

Cassie Kozyrkov shared:

Your Creative Edits = ✅ Copyrightable
Your Creative Edits + AI Output = ✅ Copyrightable
Unedited AI Output = ❌ Not Copyrightable
Your Prompts + Unedited AI Output = ❌ Not Copyrightable

Here’s the actual text from their published PDF: https://lnkd.in/ev2bv2Hv

Posted in Gen AI, Legal | Leave a comment

Warren Powell about DeepSeek

Warren Powell just posted “The emergence of DeepSeek”: “How could the Chinese do this so quickly? Because the technology is not that hard… I am not minimizing what OpenAI and the other LLM developers have created, but the core technology is a neural network, which is well understood by a large community. But let’s face it, compare this to inventing a breakthrough battery, creating fusion power, curing Alzheimer’s, or making good decisions over time for complex systems in transportation, energy, and health. Making good decisions over time, under uncertainty, is on an entirely different level in terms of analytics.” Link Comment

Posted in Decision Intelligence, Gen AI, LLM | Leave a comment

Creating a Simple Knowledge Graph (and a Pizza) with AI

Kurt Cagle just shared his experience of using an LLM tool for building a knowledge graph. He asked DeepSeek to “Generate a list of all of the object types that may be relevant to running a pizza shop” and after some prompts received a quite comprehensive ontology. He concluded: “After a few years of working with both LLMs and KGs, I’m still not convinced that an LLM can act as a broad-scale knowledge graph out of the box (there are still some big unresolved issues about the limitation of latent spaces and the mapping of narrative to conceptual models, for instance). However, as a tool for building knowledge graphs, an LLM can dramatically reduce both the complexity of constructing a knowledge graph and make it easier to test and visualize what that knowledge graph is capable of.” Link
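The kind of artifact Cagle describes can be pictured as a small set of triples. The object types below are invented for illustration — they are not DeepSeek’s actual output — but they show how an LLM-proposed ontology becomes a queryable knowledge graph:

```python
# A hypothetical fragment of a pizza-shop ontology, represented as plain
# subject-predicate-object triples (the basic shape of a knowledge graph).

triples = [
    ("Pizza", "is_a", "MenuItem"),
    ("Topping", "part_of", "Pizza"),
    ("Order", "contains", "MenuItem"),
    ("Customer", "places", "Order"),
    ("Oven", "is_a", "Equipment"),
    ("Employee", "operates", "Equipment"),
]

def objects(graph, subject, predicate):
    """Query the graph: all objects reachable from `subject` via `predicate`."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects(triples, "Customer", "places"))  # ['Order']
```

In practice such triples would be serialized in a standard format like Turtle and loaded into a graph store, with the LLM used, as Cagle suggests, to draft and test the schema rather than to be the graph itself.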

Posted in Gen AI, LLM, Semantic Web | Leave a comment

Don’t Centralize AI Agents under IT

David Pidsley from Gartner posted on LinkedIn: “Centralizing AI agent management under IT could stifle adaptive governance and innovation by focusing too much on operational “how” rather than strategic “why.” Instead, decision-making authority should remain distributed across business units, ensuring alignment with customer needs and strategic goals.

AI agents should not be viewed as “workers” or “interns” but rather as high-agency decision systems—designed to automate or augment specific decision models, executed and monitored for self-improvement.”

“A design principle for AI agents should be decision-centricity. AI agents are more akin to “the new apps” than human employees, and “managing” them requires a product management approach, not a talent management framework.” Link

Posted in Decision Intelligence | Leave a comment

Bob Kowalski on What is AI?

In 2017 Prof. Bob Kowalski, a renowned expert in logical AI and logic programming (including Prolog), presented “Logic and AI” at the joint session of DecisionCAMP and RuleML+RR. It is interesting to hear his recent thoughts about today’s symbolic and sub-symbolic AIs – here is the link.

Posted in Artificial Intelligence, Scientists | Leave a comment

Turning insurance contracts into code

Sam Burrett posted today: Insurance policies are a nightmare for most consumers. It’s hard to know what’s covered and what isn’t. Could AI help? The Stanford CodeX team tested GPT-4o’s ability to turn an insurance policy into code, which could be queried in a simple question/answer format. They found GPT-4o produced tangled code and misunderstood key provisions.

However, OpenAI’s new ‘reasoning’ model was much better. o1-preview correctly encoded an insurance policy and hit 83% accuracy on coverage questions. Is that good enough for the real world? Not yet.

But as the researchers say: “We are on the cusp of an exciting era where AI can make legal solutions more accessible by applying human-like thinking, including planning and reasoning.” That’s exciting.
Link
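What “policy as code” looks like can be sketched with one clause. The exclusion list, deductible, and function below are invented for illustration — they are not the encoding from the Stanford CodeX study — but they show how a coverage question becomes a query:

```python
# A hypothetical sketch of an insurance policy encoded as code: one coverage
# clause as a function, so "is this covered?" becomes a direct query.

WATER_DAMAGE_EXCLUSIONS = {"flood", "sewer backup"}  # illustrative exclusions

def is_covered(peril: str, dwelling_damage: float, deductible: float = 500.0):
    """Answer a coverage question: (covered?, payout amount)."""
    if peril in WATER_DAMAGE_EXCLUSIONS:
        return (False, 0.0)  # excluded peril: no payout
    payout = max(dwelling_damage - deductible, 0.0)
    return (True, payout)

print(is_covered("fire", 10_000.0))   # (True, 9500.0)
print(is_covered("flood", 10_000.0))  # (False, 0.0)
```

The hard part the researchers report is not writing such functions but getting a model to extract the right clauses, exclusions, and interactions from dense policy language in the first place.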

Posted in Insurance Industry, LLM | Leave a comment