In 1994, a young French designer named Olivier Duvelleroy walked into Business Objects as the company's first product designer. Over the next three decades, he would help shape enterprise software used by millions of people—from early BI dashboards to the sophisticated analytics platforms that SAP offers today.
But this isn't a story about enterprise software. It's about what happens when someone who has spent their career building tools for others finally builds something entirely for themselves.
The Problem Nobody Talks About
Like most knowledge workers in 2026, Olivier uses AI tools daily. ChatGPT, Claude, Copilot—the usual suspects. They're powerful, transformative even. But they all share a frustrating limitation that's easy to overlook until it stops you cold:
They can only see what you show them.
ChatGPT Plus lets you attach 10 files per prompt, 40 per project. Sounds generous until you're a product marketing manager (PMM) sitting on 1,000+ research documents—Gartner reports, IDC analyses, McKinsey studies, competitive briefs, internal strategy decks. The kind of institutional knowledge that takes years to accumulate.
"I had an AI survey with about 40 interview transcripts. I have 20 questions like that every day. File limits mean I will always miss the document that best answers the question."
Most people just work around it. They manually select which files to upload. They split questions across sessions. They accept that AI will never have the full picture.
Olivier decided to solve it.
One Rainy Saturday
The goal was simple: build a system that could search his entire knowledge base—every research document, every internal brief, every analyst note—and pull exactly the right context for any question. No more file limits. No more guessing which documents to include.
The approach: Retrieval-Augmented Generation (RAG)—a technique that retrieves relevant information from a knowledge base before sending it to an AI model. Instead of hoping the AI knows something, you give it exactly the context it needs.
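The retrieval step can be sketched in a few lines of plain Python. This is a toy illustration, not Olivier's actual stack: it stands in for a real embedding model and vector database with a bag-of-words vector and cosine similarity, and the document texts and query are made up for the example.

```python
# Toy RAG retrieval: embed documents and a query, rank by similarity,
# and keep the top-k documents as context for the AI prompt.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a term-frequency vector over lowercased words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "Gartner report on analytics platform adoption in the enterprise",
    "Interview transcript: customer feedback on pricing and packaging",
    "Internal strategy deck for the product launch",
]

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# The retrieved context is then prepended to the question before it
# goes to the model, so the answer is grounded in your own documents.
context = retrieve("what did customers say about pricing?", documents)
```

A production setup replaces `embed` with a real embedding model and the sorted list with a vector database index, but the shape of the technique is exactly this: retrieve first, then generate.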
The timeline: one Saturday.
"Can I do it completely for free? Yes, I did."
With ChatGPT as his coding assistant, Olivier installed Ollama (a local AI runtime), pulled an open-source model, created a Python environment, set up a vector database, and ran his first RAG queries—all before dinner.
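Once Ollama is running locally, querying it from Python is a short script. A minimal sketch, assuming Ollama's default local REST endpoint (`http://localhost:11434/api/generate`) and an open-source model already pulled (the model name `llama3` and the `chunks` variable here are illustrative):

```python
# Send a RAG-style prompt (retrieved context + question) to a local
# Ollama server and return the model's answer.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, question, context_chunks):
    """Assemble the prompt: retrieved context first, question last."""
    prompt = (
        "Use only the context below to answer.\n\n"
        + "\n---\n".join(context_chunks)
        + f"\n\nQuestion: {question}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, question, context_chunks):
    """POST the payload to the local Ollama server; return the response text."""
    data = json.dumps(build_payload(model, question, context_chunks)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running locally:
# answer = ask("llama3", "What did customers say about pricing?", chunks)
```

Because the model runs on your own machine, nothing in the prompt or the documents ever leaves your laptop—which is the whole point of a private Second Brain.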
He called it NEXUS: a local, private Second Brain for context-aware intelligence.
What Changed
Before NEXUS, finding the right supporting quotes for a book chapter meant multiple prompts, manual file selection, and a lot of context-switching across 40 interview transcripts and a 20-chapter structure.
After NEXUS: one 30-second prompt returns the three best grounded quotes for each chapter, pulled from his entire corpus.
The biggest win is speed: quickly identifying and prioritizing the right documents and snippets to ground an AI task. It's not just faster—it's the confidence that you're not missing the document that matters most.
For his book versioning and reviews, market trend analysis, and internal document synthesis, the system acts as what Olivier calls a "personal context accelerator"—not replacing his judgment, but making sure he's always working with the best available information.
The Unexpected Lessons
Building NEXUS taught Olivier something important about the current moment in AI:
"At no time should we be afraid of diving into the details these days. AI tools can guide you through implementation; you are mostly limited by your curiosity, creativity, and intent."
Things that seemed too hard a year ago—running CLI commands, setting up GitHub, creating Python environments from scratch—are now achievable in an afternoon with AI guidance. The barrier isn't technical skill. It's having a clear intent and the willingness to try.
Local inference also makes the tradeoffs tangible. Running a model on your laptop means experiencing the latency, the CPU cost, the fan spinning up. It demystifies what "AI" actually means and surfaces real engineering decisions about when local processing makes sense versus when to call an API.
What This Means for PMMs
Olivier's experiment points to something bigger than one person's productivity hack. Product marketers sit on enormous amounts of institutional knowledge—customer research, competitive intelligence, win/loss analyses, positioning documents, sales call transcripts. That knowledge currently lives in scattered drives, folders, and the memories of individuals.
RAG systems offer a way to make that knowledge queryable. Not just searchable (we've had that for years) but actually usable by AI in the context of specific questions.
Imagine asking:
- "What did our customers say about pricing in the last 20 interviews?"
- "How does our positioning compare to Competitor X's latest messaging?"
- "What patterns do we see in lost deals from Q4?"
And getting answers grounded in your actual documents—with citations, with context, with the ability to verify.
That's the future Olivier built for himself in a weekend. The question is: why aren't more of us building it?
Want to Build Your Own?
Olivier documented his entire process. We've turned it into a step-by-step guide.
Read the How-To Guide →