five model-implied expectations for the next 30 days
Not stock picks. Not recommendations. Not claims of certainty. Five forward tests of what the current topicspace field model implies should happen next, each tied to a specific pattern in the data and stated in a way that can be falsified.
One of the risks in market research is pretending every forward view is stronger than it really is. That is not what this is.
These are not stock picks or recommendations. They are model-implied expectations: forward tests of what the current topicspace field model implies should happen next if the patterns we are measuring remain intact.
The current model has been pointing to a specific split inside the AI market. Hardware, infrastructure, memory, and power have shown cleaner follow-through than software, platforms, and other names where the story is loud but price does not confirm. The latest research note, "the hardware-software gap," made that pattern much harder to ignore.
The next step is not to admire the finding. It is to test it.
Five expectations follow.
The hardware-software gap holds or widens
Over the next 30 days, the hardware cluster continues to outperform the software/platform cluster on a forward excess-return basis.
The hardware-software gap was not just present in the full sample. It widened in the more recent window. That matters because it suggests the split is not fading as the market evolves. If the current field model is capturing something real, the market should continue rewarding physical bottlenecks more cleanly than the interpretation layer built on top of them.
Full-period gap +4.92pp; last-90-day gap +6.07pp (Welch t = 13.5). Today’s hardware cluster shows multiple clean confirmations and early follow-through — SMCI (rel +25%), NBIS (+9%), AMD (+11%), SNDK (+18%), DELL (+18%) — while the software cluster shows nine names stuck in “story not being paid” divergence.
If the software/platform cluster outperforms the hardware cluster over the next month, the most important current finding in the model weakens immediately.
This is the flagship test. If this expectation fails, a lot of the current framing around AI market structure needs to be revisited.
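As a sketch of how a gap like this can be tested, here is a minimal Welch's t-statistic computed on two samples of forward excess returns. The return values are hypothetical stand-ins; the real clusters, windows, and sample sizes come from the model.

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

# Hypothetical 10-day forward excess returns, in percentage points.
hardware = [5.1, 6.3, 4.8, 7.0, 5.5, 6.1]
software = [-0.4, 0.9, -1.2, 0.3, -0.8, 0.1]

gap = mean(hardware) - mean(software)  # the "hardware-software gap" in pp
t = welch_t(hardware, software)
print(f"gap = {gap:+.2f}pp, Welch t = {t:.1f}")
```

A large t on a widening gap is what "holds or widens" looks like in the data; a shrinking gap shows up directly as a falling t.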
The loud divergence cluster resolves downward, not upward
The current cluster of names showing strong story but weak price follow-through posts negative net forward excess over the next month.
The field has repeatedly shown a broad group of names where the story is active, the narrative is loud, and the market still refuses to pay for it. In the current model, that pattern has been much less constructive outside the hardware side of the field.
Today’s nine-name cluster: ANET (NDS +140, rel −23%), VST (NDS +70, rel −10%), ZETA (NDS +54, rel −13%), TTD (NDS +48, rel −10%), MSFT, META, PLTR, NFLX, CEG. Spans software, AI infrastructure, and energy. ANET is the most extreme “story not being paid” setup on the entire board. PLTR’s storm cohesion just flipped from 100% “growing” to 100% “fading” across 21 storms — the narrative engine itself is starting to roll over.
If the cluster posts clearly positive forward excess, that would suggest the market has started paying for these loud stories after all.
A lot of commentary treats “strong story, weak stock” as obviously attractive. The current model does not. This expectation is a direct test of that disagreement.
MELI does not snap back simply because the quarter looked strong
MELI continues to lag rather than recover quickly — the “strong fundamentals, market overreaction” framing does not produce clean follow-through.
The intuitive read on MELI — strong growth, stock down, market overreaction — is exactly the setup the model has been most cautious about outside the hardware cluster. A strong quarter alone has not been enough.
MELI just entered coverage. DIVERGENCE state, NDS +84.3 (#2 on the board behind ANET), rel −17.3% over 5d, validity "weak here". 13 storms, all volatile, no consensus forming. Parallel: SOFI has been in DISAGREEMENT for 34 straight days with the same intuitive framing in commentary — and has continued to bleed throughout.
If MELI materially outperforms over the next month, or if its narrative consolidates into a coherent field-level frame, this expectation fails.
This is the narrowest of the five tests — one ticker, one quarter, just-entered coverage. It earns its slot because it asks whether the model’s caution travels beyond the AI-core names where most of the other tests live.
Contested hardware continues to lead
Names like INTC and MU continue to outperform broader benchmarks over the next month.
One of the more interesting findings in the current work is that the alpha inside the hardware cluster does not appear to live in the most obvious names. It appears more often in names where the narrative is contested, price is already pushing against a bearish story, and the market has not settled into full agreement.
INTC: NDS −131.5, rel +19.9% — the most bearish narrative in the universe being aggressively rejected by price. Historical mean 10D forward excess: +10.3pp. MU: NDS −164.1, rel +32.2% — even more extreme contradiction. Historical mean: +9.0pp. Both setups are live today; the foundry-pivot narrative on INTC and the HBM-glut-skepticism narrative on MU are exactly the contested-narrative archetype this expectation is built on.
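A minimal sketch of the screen this archetype implies, assuming NDS is a signed narrative-direction score and rel is relative price performance in percent. The INTC, MU, and ANET values are the figures quoted in this note; the model's actual state machine is richer than a two-condition filter.

```python
# "Contested" archetype: a bearish narrative score (NDS < 0) being
# actively rejected by positive relative price action (rel > 0).
def contested(nds: float, rel_pct: float) -> bool:
    return nds < 0 and rel_pct > 0

# Snapshot values as quoted in this note: (NDS, rel %).
board = {
    "INTC": (-131.5, 19.9),
    "MU":   (-164.1, 32.2),
    "ANET": (140.0, -23.0),  # the opposite setup: loud story, price refusing
}
picks = [t for t, (nds, rel) in board.items() if contested(nds, rel)]
print(picks)
```

The same filter run with the signs flipped recovers the "story not being paid" cluster from the second expectation, which is one reason the two tests are worth running side by side.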
If these names underperform, or if they move into cleaner confirmed states and then stop behaving like contested opportunities, the expectation weakens.
This is one of the sharpest internal claims in the model: the market may reward disagreement more than consensus in the right part of the field.
Celebrity hardware continues to lag the more contested hardware names
The most narrated and already-loved hardware names continue to trail the more contested names in the same broader cluster.
This is one of the most interesting sub-findings in the recent research. The market’s favorite AI hardware names may already be too fully understood, too fully narrated, and too fully priced. In other words: the story is real, but the surprise is gone.
The cluster laggards (historical 10D forward excess): NVDA +0.20pp, SMCI −1.38pp, VST −1.87pp, CEG −1.82pp. These are exactly the “AI infrastructure poster child” names. The cluster leaders, by contrast: INTC +10.31pp, MU +8.96pp, MRVL +6.08pp, NBIS +5.88pp, VRT +5.69pp — names where some part of the story is still being argued. NVDA today shows narr=77 (very loud) but rel only −2.94% on 5d — loud doesn’t convert.
If the most obvious hardware leaders start outperforming the more contested names again, the “most-narrated equals most-priced” reading becomes less useful.
This is not just a return prediction. It is a test of where the market still finds room to change its mind.
What would make these expectations wrong
Three honest possibilities:
- The hardware-software gap is more sample-specific than structural; the current window is real but limited.
- Macro conditions shift in a way that changes what the market is willing to reward, and the current partition stops being the right one.
- The model is better at describing the last several months than the next one.
That happens. A good field model still needs forward tests.
This is why the most useful expectation is often the one that gets falsified.
What comes next
The next step is to grade these expectations against realized outcomes over the next 30 days. That is where this becomes more than a good story — it becomes a loop:
- observe the field
- state the expectation
- measure what actually happened
- update the model
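The loop above can be sketched as a grading pass over the book of expectations. The schema and the sample outcomes here are illustrative, not the model's actual bookkeeping; all that matters is that each expectation carries a direction that realized returns can contradict.

```python
from dataclasses import dataclass

@dataclass
class Expectation:
    name: str
    predicted_sign: int  # +1: positive forward excess expected, -1: negative

def grade(exp: Expectation, realized_excess_pp: float) -> str:
    """Grade one expectation against realized 30-day excess return (pp)."""
    hit = (realized_excess_pp > 0) == (exp.predicted_sign > 0)
    return "held" if hit else "falsified"

# Illustrative realized outcomes, not real data.
book = [
    (Expectation("hardware > software", +1), 3.2),
    (Expectation("loud-divergence cluster negative", -1), 1.1),
]
for exp, realized in book:
    print(f"{exp.name}: {grade(exp, realized)}")
```

Grading against a hard sign, rather than a narrative about why the month went the way it did, is what keeps the "update the model" step honest.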
A model that never risks being wrong is not doing much. The most useful research is not the kind that sounds confident.
It is the kind that can be checked.