A History of AI

Case-Based Reasoning: Intelligence Needs Memory

Library of Congress card catalog
Image: Ted Eytan, CC BY-SA 2.0, via Wikimedia Commons

“Artificial intelligence must be based on real human intelligence, which consists largely of applying old situations—and our narratives of them—to new situations.” — Roger Schank


Intelligence requires memory. You cannot expect machines to learn without remembering what worked. Everyone building enterprise AI is grappling with the same problem: how do we give AI systems persistent memory without the memory itself becoming bloated?

AI researchers worked on this problem for decades, building commercially successful systems that got better every time they were used. Have we forgotten those approaches?

The core insight that started it all came from Roger Schank: intelligence builds from lived experience. We learn from specific episodes: what happened, what worked, what failed. In the 1970s, this was a radical proposal. Why encode general rules when you could learn from what actually happened?

Schank’s dynamic memory idea spread and the field of Case-Based Reasoning was born. In 1980, Janet Kolodner built CYRUS, a system that stored episodic memories of diplomatic meetings. She showed that specific experiences beat general rules.

The field formalized around what became known as the 4 Rs: Retrieve similar past cases. Reuse what worked before. Revise the solution for the new situation. Retain the experience for future use. Sound familiar? It should. Every RAG system follows this exact pattern. Take a help desk system, for example. First, we Retrieve similar past support tickets from a vector database using semantic matching. Then we Reuse and Revise by adapting the previous fix through an LLM prompt. Finally, we Retain by embedding the case back into the vector database for next time.
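The help desk loop above can be sketched in a few dozen lines. This is a toy illustration, not a production pattern: the character-count `embed` function stands in for a real embedding model, the in-memory list stands in for a vector database, and the caller-supplied `revise` function stands in for an LLM prompt. All names here are hypothetical.

```python
import math

def embed(text):
    # Toy embedding: letter-frequency vector (stand-in for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class CaseBase:
    def __init__(self):
        self.cases = []  # each case: (embedding, problem, solution)

    def retrieve(self, problem):
        # Retrieve: nearest past case by embedding similarity.
        query = embed(problem)
        return max(self.cases, key=lambda c: cosine(query, c[0]), default=None)

    def solve(self, problem, revise):
        case = self.retrieve(problem)
        # Reuse + Revise: adapt the old solution via the caller-supplied
        # revise function (standing in for an LLM prompt).
        old_solution = case[2] if case else None
        solution = revise(problem, old_solution)
        # Retain: embed the new experience back into the case base.
        self.cases.append((embed(problem), problem, solution))
        return solution

# Demo: one past ticket, one new ticket.
cb = CaseBase()
cb.cases.append((embed("printer offline"), "printer offline",
                 "restart the print spooler"))
fix = cb.solve("printer shows offline",
               lambda problem, old: old or "escalate to a human")
```

The demo retrieves the earlier printer ticket, reuses its fix, and retains the new ticket, so the case base grows with every solved problem.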

But memory creates a problem. How do you prevent bloat? As the case base grows, retrieval slows. Worse, redundant cases add noise without adding capability. In 1995, Smyth and Keane published “Remembering To Forget,” which tackled exactly this: a competence-preserving deletion process that keeps useful memories and forgets the rest. The problem described in that paper reads like it was written in 2025, not 30 years earlier.
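The idea behind competence-preserving deletion can be sketched in a few lines. This is a much-simplified illustration of the principle, not Smyth and Keane's actual footprint algorithm: a case is dropped only if some other retained case still solves its problem, so the coverage of the case base is unchanged. The `solves` predicate here is a hypothetical stand-in for real similarity and adaptation checks.

```python
def prune(cases, solves):
    # Drop a case only if another kept case already covers it, so the
    # case base shrinks without losing competence. `solves(other, case)`
    # is a caller-supplied predicate (stand-in for similarity checks).
    kept = list(cases)
    for case in list(kept):
        others = [c for c in kept if c is not case]
        if any(solves(other, case) for other in others):
            kept.remove(case)
    return kept

# Demo: two redundant networking fixes collapse to one; the power fix
# has no substitute, so it survives.
cases = [("reset router", "net"),
         ("reboot router", "net"),
         ("replace fuse", "power")]
pruned = prune(cases, lambda a, b: a[1] == b[1])
```

In the demo, one of the two interchangeable networking cases is forgotten, while the only power case is kept: the pruned base still covers both problem categories.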

Case-Based Reasoning worked commercially. GE invested in a customer support system that preserved institutional knowledge which would otherwise have walked out the door when expert mechanics retired. Lockheed used the techniques to successfully configure aircraft manufacturing layouts from past projects. These systems were good long-term investments: they got better with use because they remembered what worked.

In July 2024, a Case-Based Reasoning conference highlighted how existing methods outperformed LLMs on structured problems and how hybrid approaches could substantially reduce hallucinations. Decades-old research suddenly became relevant again. Following the conference, Ian Watson, who has studied AI memory since the 1980s, issued a challenge: Case-Based Reasoning should step up and influence how modern AI handles memory.

Maybe the rest of us building with modern AI should also step up. Let’s Retrieve what has already been discovered, then Reuse and Revise it, Retaining what works.


Further Reading