Mike Cunningham had a vision that was ten years early: in the end, exactly right about the problem and completely wrong about the solution.
In 2015, Mike was Chief Digital Officer at Keurig Green Mountain, and he walked into our offices with an idea that still lives in my head rent-free. Put a $5 chip in every Keurig coffee maker. Optical scanner to identify which K-Cups are being used. Bluetooth beacon to detect nearby mobile devices. An LCD screen for targeted messages. The whole apparatus would turn a countertop appliance into a first-party data collection machine.
"Can you tell me what it's worth?" he asked.
We ran the numbers. The math was extraordinary. Keurig sold millions of brewers, but their actual customer intelligence was almost entirely mediated by retail partners. They knew Walmart ordered 50,000 boxes of Green Mountain Breakfast Blend. They did not know that Dad in Westchester drinks dark roast Sumatra at 6am, that Mom prefers French Hazelnut at 7:30, and that their twelve-year-old sneaks a Donut Shop K-Cup on Saturday mornings when nobody's watching.
That gap — between knowing your distribution partner's order volume and knowing your actual customer's behavior — was the defining problem of consumer packaged goods in the 2010s. Mike understood it viscerally. The chip was his answer.
Keurig decided not to build it. The reasons were sensible: hardware cost at scale, privacy concerns, uncertain ROI timeline, the organizational complexity of turning a beverage company into a data company. All valid. All reasonable. All, in retrospect, beside the point.
The Problem Was Real. The Solution Was Inverted.
Mike's fundamental insight was correct: the brands that understand individual consumption patterns win. Where he was wrong — where everyone in the data industry was wrong in 2015 — was in assuming that the brand needed to build the sensing infrastructure.
The customer's agent already has all of it.
Think about what a personal AI agent knows about a household's coffee consumption in 2026. It has the purchase history from every grocery delivery, every Amazon subscribe-and-save order, every one-off Instacart run. It has the calendar data that reveals the morning routine. It has the health app data that correlates caffeine intake with sleep patterns. It probably has conversational data — "I'm trying to cut back to one cup a day" or "I want to try something new, maybe a light roast."
The $5 chip would have given Keurig a partial, permission-gated, hardware-dependent view of consumption. The customer's agent has a comprehensive, continuously updated, contextually rich understanding of the entire household's relationship with coffee. And it doesn't need Keurig's help to get it.
The question isn't "how do we discover the customer?" anymore. It's "how do we become the brand the customer's agent chooses by default?"
From Surveillance to Service
There's a deeper shift here that goes beyond the tactical. The $5 chip model — and frankly, most of the first-party data strategies of the 2010s — was fundamentally a surveillance model. The brand instruments the customer's environment. The brand captures signals. The brand builds the profile. The brand decides what to do with the information. The customer is, at best, a willing participant. At worst, they're unaware of how much is being collected.
The agentic model inverts this entirely. The customer — through their agent — controls the data. The agent decides which brands get access to preferences. The agent evaluates competing offers. The agent negotiates terms. The power dynamic has flipped from brand-as-observer to customer-as-broadcaster.
For a company like Keurig, this means the competitive battleground has moved. In 2015, the advantage would have gone to whoever built the best sensing infrastructure — the most chips, the richest data, the smartest analytics. In 2026, the advantage goes to whoever builds the best agent-facing interface. Whoever makes it easiest for a customer's agent to understand their product catalog, evaluate freshness and quality signals, compare pricing, and execute a frictionless reorder.
That's a fundamentally different capability. It's not a data problem. It's an accessibility problem.
What Keurig Should Build Now
If Mike Cunningham walked into my office today with the same question — "Can you tell me what it's worth?" — the conversation would go in a completely different direction.
First: make your entire product catalog machine-readable. Not a PDF spec sheet. Not a marketing website optimized for humans. A structured, API-accessible data layer that an agent can query in milliseconds. Every SKU, every flavor profile, every origin, every roast level, every compatibility specification. If a customer's agent asks "find me a medium-roast, single-origin, fair-trade K-Cup under $0.75 per pod," your product data needs to answer that query instantly and accurately.
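The query in that paragraph can be sketched as a tiny structured data layer. Everything below is illustrative, not real Keurig data: the SKUs, field names, and prices are invented for the example.

```python
# Minimal sketch of an agent-queryable catalog layer.
# All SKUs, attributes, and prices are hypothetical.
from dataclasses import dataclass

@dataclass
class Pod:
    sku: str
    name: str
    roast: str            # "light" | "medium" | "dark"
    single_origin: bool
    fair_trade: bool
    price_per_pod: float  # USD

CATALOG = [
    Pod("GM-001", "Breakfast Blend", "medium", False, True, 0.62),
    Pod("GM-014", "Sumatra Reserve", "dark", True, True, 0.81),
    Pod("GM-022", "Colombia Select", "medium", True, True, 0.68),
]

def query(roast=None, single_origin=None, fair_trade=None, max_price=None):
    """Answer a structured agent query with simple attribute filters."""
    results = CATALOG
    if roast is not None:
        results = [p for p in results if p.roast == roast]
    if single_origin is not None:
        results = [p for p in results if p.single_origin == single_origin]
    if fair_trade is not None:
        results = [p for p in results if p.fair_trade == fair_trade]
    if max_price is not None:
        results = [p for p in results if p.price_per_pod <= max_price]
    return results

# The query from the text: medium roast, single origin, fair trade, under $0.75
matches = query(roast="medium", single_origin=True, fair_trade=True, max_price=0.75)
```

In practice this layer would sit behind an API rather than in memory, but the point is the shape of the contract: every attribute an agent might filter on is a typed field, not a phrase buried in marketing copy.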
Second: build for preference matching, not persuasion. The old model was about convincing humans to choose your brand. The new model is about matching your products to agent-expressed preferences with surgical precision. The agent doesn't care about your brand story. It cares about whether your dark roast Sumatra has the flavor profile its principal prefers, at the price point they've set, with the delivery timeline they need.
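Matching of this kind is ranking, not persuasion. A minimal weighted-match score, with entirely hypothetical preference fields and weights, might look like:

```python
# Hedged sketch: score a product against an agent-expressed preference
# profile. The fields and weights are assumptions, not a real schema.
def score(pod_attrs: dict, prefs: dict) -> float:
    """Weighted match score: 1.0 means every stated preference is met."""
    total = sum(p["weight"] for p in prefs.values())
    hit = sum(p["weight"] for key, p in prefs.items()
              if pod_attrs.get(key) == p["value"])
    return hit / total if total else 0.0

# An agent's profile for its principal: roast matters most.
prefs = {
    "roast": {"value": "dark", "weight": 3},
    "origin": {"value": "sumatra", "weight": 2},
    "fair_trade": {"value": True, "weight": 1},
}

pod = {"roast": "dark", "origin": "sumatra", "fair_trade": False}
result = score(pod, prefs)  # 5/6, roughly 0.83
```

The brand's job is to make sure every attribute the agent might weight is present and accurate in the product data; the scoring happens on the agent's side.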
Third: create feedback loops that agents value. When a customer's agent reorders the same K-Cups for the fourteenth consecutive time, that's a signal worth reinforcing. When the agent tries a competitor and comes back, that's worth understanding. When the agent stops ordering coffee entirely — maybe the customer switched to tea, or bought an espresso machine — that's worth detecting early. These feedback loops aren't surveillance. They're service infrastructure.
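Two of those signals, the reorder streak and the early lapse, reduce to simple computations over order history. This sketch assumes a toy order record; the lapse threshold (twice the customer's typical cadence) is an arbitrary illustrative choice.

```python
# Sketch of two feedback-loop signals over order history.
# Order records and thresholds are illustrative assumptions.
from datetime import date

def reorder_streak(orders, sku):
    """Count consecutive most-recent orders of the same SKU (newest last)."""
    streak = 0
    for o in reversed(orders):
        if o["sku"] != sku:
            break
        streak += 1
    return streak

def lapsed(orders, today, typical_gap_days, factor=2):
    """Flag a lapse when the gap since the last order exceeds the
    customer's typical cadence by `factor`."""
    last = max(o["date"] for o in orders)
    return (today - last).days > typical_gap_days * factor

orders = [
    {"sku": "GM-001", "date": date(2026, 1, 2)},
    {"sku": "GM-001", "date": date(2026, 1, 16)},
    {"sku": "GM-001", "date": date(2026, 1, 30)},
]

streak = reorder_streak(orders, "GM-001")                    # 3
flag = lapsed(orders, date(2026, 3, 15), typical_gap_days=14)
```

The streak tells you which relationships to reinforce; the lapse flag fires weeks before a quarterly revenue report would reveal the same churn.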
Fourth — and this is the part nobody in the industry is talking about yet — build for agent-to-agent negotiation. The customer's agent and Keurig's supply agent should be able to have a conversation: "My principal drinks three K-Cups per day. What's the best subscription price you can offer for Green Mountain Breakfast Blend with automatic delivery every two weeks?" That negotiation used to happen in a human's head while scanning the grocery aisle. Now it happens programmatically, and the brand that makes it easiest wins.
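That exchange can be sketched as a quoting function on the supply side. The volume tiers and discount rates below are invented for illustration; a real pricing model would live behind an authenticated API with terms the customer's agent can compare across brands.

```python
# Illustrative sketch of an agent-to-agent quote. The tiering rule and
# discounts are assumptions, not a real Keurig pricing model.
def supply_agent_quote(pods_per_day: float, list_price: float) -> dict:
    """Return a biweekly subscription offer; discount scales with volume."""
    biweekly_qty = round(pods_per_day * 14)
    if biweekly_qty >= 48:
        discount = 0.12
    elif biweekly_qty >= 24:
        discount = 0.05
    else:
        discount = 0.0
    return {
        "qty": biweekly_qty,
        "cadence_days": 14,
        "price_per_pod": round(list_price * (1 - discount), 2),
    }

# Customer's agent: "my principal drinks three K-Cups per day."
offer = supply_agent_quote(pods_per_day=3, list_price=0.62)
```

The customer's agent can run this same request against every compatible pod maker in seconds, which is exactly why the brand that answers fastest and most transparently tends to win the default slot.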
The Lesson That Compounds
I've told the $5 chip story dozens of times over the past decade — in conference keynotes, in board presentations, in both of my previous books. It's always been a story about the gap between ambition and execution, about a vision that was ahead of its time.
But writing about it now, I realize it's actually a story about the obsolescence of the brand-as-sensor model. For twenty years, the data industry told brands they needed to build their own intelligence infrastructure — DMPs, CDPs, identity graphs, data clean rooms, the whole apparatus. Invest in sensing. Invest in collection. Invest in resolution. Build your own picture of the customer.
The customer just built a better picture of themselves. And they hired an agent to manage it.
Mike's chip would have cost $5 per unit and generated incomplete data that decayed the moment the customer left the kitchen. The customer's agent costs nothing per unit, generates comprehensive data that updates continuously, and operates across every brand interaction simultaneously. The economics aren't close.
The companies that win the next decade won't be the ones with the most customer data. They'll be the ones that are most useful to the customer's agent. That's a sentence I couldn't have written in 2015. It might be the most important sentence in this entire series.
Mike was right about one thing, though: it all comes back to knowing that Dad drinks dark roast Sumatra at 6am. The difference is who's doing the knowing — and whose interests they serve.