Context Before Chunks
Retrieval answers “what is similar?” Field models help answer “what is happening?”
Most LLM context systems retrieve relevant documents. Dynamic reasoning also requires a model of the current field.
Retrieval-augmented generation works well for a specific task: find the documents most relevant to this query, then generate an answer. It works less well when the task is to understand what is actually happening in a complex, fast-moving domain. A dynamic field — markets, geopolitics, competitive technology — doesn't just contain relevant documents. It has structure: narratives strengthening or collapsing, claims being confirmed or contradicted, actors in relation to each other. The field has state.
This piece describes a different framing. What if context assembly started not with retrieval, but with a model of the field? And what would that model actually contain?
Why chunk-first context breaks down
Semantic retrieval surfaces topically similar chunks. This works for recall. It works less well for situational awareness — four structural limitations recur in practice:
The problem isn't that retrieval is wrong. It's that retrieval answers “what is similar?” not “what is happening?” For dynamic domains, those are different questions.
What a field model represents
A field model is a structured representation of what is currently happening in a domain — not just what happened. It captures the live state of competing narratives, which actors are driving them, how claims are being validated or contradicted, and where the boundaries of any current state are.
Rather than a single structure, this is better understood as a family of primitives — each representing a different type of field property:
Together, these primitives describe a domain that has shape — not just content. The field has leaders and followers, acceleration and resistance, coherent structures and fragile ones. The specific families a system needs will vary by domain; what matters is that the model exists before retrieval.
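As a rough illustration of what such primitives might look like as data structures, here is a minimal sketch in Python. All names (`Actor`, `Claim`, `Narrative`, `FieldModel`, `Phase`) and fields are hypothetical, chosen to mirror the properties described above — narrative phase, driving actors, claim confirmation, and field-level shape — not an actual TopicSpace schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    # narrative life-cycle states; the specific set is an assumption
    FORMING = "forming"
    STRENGTHENING = "strengthening"
    COLLAPSING = "collapsing"

@dataclass
class Actor:
    name: str
    pressure: float = 0.0  # narrative pressure, assumed normalized to [0, 1]

@dataclass
class Claim:
    text: str
    confirmations: int = 0
    contradictions: int = 0

    def net_support(self) -> int:
        # claims are validated or contradicted over time
        return self.confirmations - self.contradictions

@dataclass
class Narrative:
    label: str
    phase: Phase
    drivers: list[Actor] = field(default_factory=list)
    claims: list[Claim] = field(default_factory=list)

@dataclass
class FieldModel:
    narratives: list[Narrative]

    def leaders(self) -> set[str]:
        # actors currently driving strengthening narratives:
        # one example of a field-level property no single chunk contains
        return {a.name
                for n in self.narratives
                if n.phase is Phase.STRENGTHENING
                for a in n.drivers}
```

The point of the sketch is not the particular classes but that field properties (who is leading, which claims are net-confirmed) become queryable objects rather than patterns implicit across retrieved chunks.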
What a field model enables
Each capability follows from having a representation of field state — not from better retrieval. They are transformations of the reasoning process.
There is a second point, less obvious: a field model is valuable not only because it introduces higher-order primitives, but because it separates different interpretive functions. Dynamic reasoning gets harder when historical tendency, current interpretation, recent change, and the conditions for resolution are all rendered by the same surface. A system that conflates them can appear informative while making each individual claim difficult to evaluate. Keeping them distinct is not a presentation preference — it is a representational constraint. Each layer answers a different question; together, they constitute a situation rather than a summary.
The specific layers a system needs will vary by domain. What matters is that they are defined separately and held separately — so that a reasoning layer can use them individually, not only in combination.
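The separation constraint can be made concrete with a small sketch: the four layers named above held as distinct fields that a reasoning layer can request individually. The class and field names are illustrative assumptions, not an actual schema.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class ActorContext:
    # Each layer answers a different question and is stored separately,
    # never pre-merged into a single summary string.
    historical_base_rate: str    # what has this state historically meant
    current_interpretation: str  # what kind of situation is this now
    recent_change: str           # what just changed
    resolution_path: str         # what would resolve it

    def layer(self, name: str) -> str:
        # a reasoning layer can use one layer in isolation,
        # not only all of them in combination
        if name not in {f.name for f in fields(self)}:
            raise KeyError(name)
        return getattr(self, name)
```

The design choice is the frozen dataclass itself: layers are defined separately and cannot be silently collapsed into each other downstream.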
Retrieval policy as one downstream use
Retrieval still matters. But in a field-model architecture, it becomes a downstream policy conditioned on what the field model has already determined. Instead of “retrieve what is relevant to this query,” the policy becomes: “retrieve what is relevant to this query, given that the field is in state X, under narrative Y, with actor Z showing elevated pressure.”
The field model sits between evidence ingestion and retrieval — not after it.
The field model doesn't replace retrieval. It gives retrieval a better question to answer.
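A conditioned retrieval policy of this kind might be sketched as follows. The `field_state` keys (`state`, `narrative`, `pressured_actors`) are assumptions standing in for whatever the field model exposes; the function simply shows the query being rewritten in light of field state before retrieval runs.

```python
def retrieval_query(user_query: str, field_state: dict) -> str:
    # Condition the retrieval request on what the field model has
    # already determined, rather than on the raw query alone.
    parts = [user_query]
    parts.append(f"field state: {field_state['state']}")
    parts.append(f"narrative: {field_state['narrative']}")
    for actor in field_state.get("pressured_actors", []):
        parts.append(f"actor under elevated pressure: {actor}")
    return " | ".join(parts)
```

Usage, with hypothetical values: `retrieval_query("why is X moving?", {"state": "dislocated", "narrative": "Y", "pressured_actors": ["Z"]})` yields a query that carries field state into the retriever instead of leaving it implicit.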
Worked example: the AI ecosystem
TopicSpace tracks narratives across ~36 actors in the AI and adjacent technology ecosystem. The system evolved iteratively — each stage adding field structure that changed what could be reasoned about.
The progression wasn't planned in advance. Each stage was added to answer a question the previous stage couldn't — which is a reasonable description of how field models get built in practice. But there was a second lesson embedded in the process: adding more layers wasn't sufficient on its own. Each layer had to be given a distinct interpretive function. Early representations tended to let multiple surfaces handle the same kind of meaning — historical tendency, current situation, recent change, and forward path could all appear in the same context block, restating each other. The improvement came from enforcing separation: what has this state historically meant; what kind of situation is this now; what just changed; what would resolve it. When these are kept distinct, each one becomes independently evaluable — and the reasoning that builds on them becomes more precise.
Actor states, dislocation scores, and setup rankings across the tracked field.
Benchmarked state, setup type, narrative pressure, and validation path for a single actor.
Named cross-actor narrative threads — higher-order field objects with coherence and phase tracking.
Why pricing helped
One useful discovery in building TopicSpace: adding price data made the narrative model better. Not because price is the target — but because price is an independent validation signal with no retrieval bias. In this domain, price served as an external confirmation surface; other fields may use different layers — survey data, policy text, network activity.
When narrative and price diverge, one of them is wrong about the current state. That divergence — the Narrative Dislocation Score (NDS) — became a first-class field property. It surfaced states the system hadn't been able to label before, and those states turned out to carry different implications for retrieval, synthesis, and claim evaluation.
“Story not being paid” and “price ahead of story” are structurally different situations — and they require different reasoning, not just different documents. The divergence is information; encoding it explicitly, rather than letting it average out in retrieval, is what makes it useful.
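A minimal sketch of such a dislocation score, under stated assumptions: both inputs normalized to [-1, 1], a simple difference as the divergence measure, and an arbitrary threshold. The sign convention and labels here are illustrative, not TopicSpace's actual NDS definition.

```python
def narrative_dislocation(narrative_score: float,
                          price_score: float,
                          threshold: float = 0.3) -> tuple[float, str]:
    # Divergence between the narrative signal and the independent
    # price signal becomes a first-class, labeled field property.
    nds = narrative_score - price_score
    if nds > threshold:
        label = "story not being paid"   # narrative ahead of price
    elif nds < -threshold:
        label = "price ahead of story"   # price ahead of narrative
    else:
        label = "aligned"
    return nds, label
```

Encoding the divergence explicitly, as here, is what lets the two dislocated states route to different reasoning rather than averaging out in retrieval.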
Limits and open problems
Field models have their own failure modes:
Closing
Retrieval-augmented generation is a memory architecture — it gives a model access to relevant information it wouldn't otherwise have. Field models are a situational awareness architecture — they give a model a representation of what is currently happening, not just what has been written about. Both are useful. They solve different problems.
A related point: the value of a field model is not only in what it adds, but in what it separates. Historical base rates, current interpretation, recent change, and the conditions that would resolve a state are different kinds of meaning. Systems that conflate them produce context that is dense but difficult to evaluate. Systems that keep them distinct give a reasoning layer something it can actually work with — each piece answering a different question, together constituting a situation rather than a summary.
Better retrieval improves what you can recall. A field model may improve what you can reason about. For dynamic domains, the second problem is at least as hard as the first.
TopicSpace tracks narrative structure in the AI and adjacent technology ecosystem. This piece reflects ongoing research — not a completed system.