Leaving Nvidia GTC26, this is my reflection on where AI is heading in 2027:
The next model class.
The most valuable data in business has never been linguistic. It is transactional: purchases, logistics decisions, clicks, cancellations. Real choices, made by real people, in real time. Large language models cannot learn from it. A different class of AI model is now emerging that can.
Train a model not on text but on sequences of human action and it begins to predict what a customer, a supply chain, or a market will do next. An insurer can flag fraud before a claim is filed. A retailer can reorder stock before a shelf empties. A lender can reprice risk between one heartbeat and the next. Call them large behavioral models. The term is new; the commercial logic behind it is not subtle.
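The idea of learning from action sequences rather than text can be illustrated with a deliberately tiny sketch: treat each customer action as a discrete token and count transitions between them, then predict the most likely next action. The event names and histories below are purely illustrative, and a real behavioral model would be a far larger sequence model, not a bigram counter — this only shows the shape of the data and the prediction task.

```python
from collections import Counter, defaultdict

# Toy sketch: a bigram "behavioral model" over event tokens.
# Event names and histories are illustrative, not real enterprise data.

def train(sequences):
    """Count event-to-event transitions across customer histories."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, event):
    """Return the most frequently observed successor of `event`, if any."""
    counts = transitions.get(event)
    return counts.most_common(1)[0][0] if counts else None

histories = [
    ["browse", "add_to_cart", "purchase"],
    ["browse", "add_to_cart", "abandon"],
    ["browse", "add_to_cart", "purchase"],
]
model = train(histories)
print(predict_next(model, "add_to_cart"))  # prints "purchase"
```

The point of the sketch is the input format: sequences of choices, not sentences. Swap the counter for a transformer over the same token stream and you have the basic recipe being described.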
Language captures what people say. Behavior captures what they do. For any enterprise trying to make better decisions about pricing, inventory, advertising, or personalization, the gap between those two signals is the gap between opinion and action. Models trained on the latter sit closer to revenue than anything an LLM can offer.
Nvidia appears to agree. Behavioral models run continuous inference over live data streams: every transaction a token, processed in real time. Compute demand scales not with the number of prompts a user types but with the velocity of the real economy, a far larger and more persistent surface. At GTC this week, Nvidia positioned itself as the provider of entire AI computing systems built for always-on, economy-scale inference. That is not a bet on chatbots. It is a bet on the kind of workload behavioral models generate. When the company with the clearest view of where compute demand is heading builds for this future, the signal is hard to ignore.
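The workload shape described here — one inference per event, forever — can be sketched as a loop over a live stream, where each incoming transaction is a token and the model re-scores the next likely action as it arrives. This is my own toy illustration of the "always-on" pattern, not anything Nvidia presented; the event names and transition counts are invented.

```python
from collections import Counter, defaultdict

# Toy sketch: always-on inference over a live event stream.
# Each transaction is one token; the model re-predicts after every event.

def score_stream(events, transitions):
    """Yield (event, predicted_next_event) for each event as it arrives."""
    for event in events:
        counts = transitions.get(event)
        nxt = counts.most_common(1)[0][0] if counts else None
        yield event, nxt

# Hypothetical transition counts a trained model might have accumulated.
transitions = defaultdict(Counter)
transitions["payment_declined"]["retry"] = 5
transitions["retry"]["chargeback"] = 3

live = ["payment_declined", "retry"]
for event, nxt in score_stream(live, transitions):
    print(event, "->", nxt)
```

Note where the compute bill lands: inference runs once per event in the stream, so demand scales with transaction velocity rather than with how often a user types a prompt.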
Yet most enterprises remain absorbed by the current wave: LLM agents, copilots, application-layer tooling. That is understandable. It may also prove expensive. The advantage in behavioral models compounds unusually fast. The underlying data is generated inside enterprise systems, not scraped from the open web. It is proprietary, high-frequency, and difficult to replicate. Each month of training improves the model while deepening the moat. By the time the wider market recognizes the value, the leaders may already be unreachable.
The question, then, is who builds the horizontal platform. History suggests an answer. In the early years of LLMs, every sizable company tried to train its own. Most quietly retreated once the economics of scale became clear. Anthropic, OpenAI, and Google DeepMind won because training across many domains reveals patterns that no single firm's data can. There is little reason to think behavioral models will be different.
The enterprises that move first will not merely adopt a new tool. They will accumulate a form of institutional knowledge that their competitors cannot easily buy later. In AI, as in most things, the gap between seeing clearly and acting early is where fortunes are made.