Yesterday I promised we'd talk about authenticity in a world of infinite content. I lied — or rather, the news made the question irrelevant before I could finish asking it. Because the problem isn't that everyone now has an AI content engine. The problem is that "content engine" and "defense contractor" and "sleep company" and "brain-computer interface startup" are all the same category now. Authenticity assumes you can draw a border around what something is. Good luck with that in March 2026.

I spent the morning looking at funding data, arXiv drops, and signal clusters from the past 48 hours, trying to write a clean piece about media and noise. Instead I found something bigger: the borders themselves are dissolving. Not metaphorically. Not eventually. Right now, in the actual flow of money and research, the neat taxonomies we use to make sense of technology are quietly ceasing to function. The category "AI/ML" appeared in 25 funding rounds in a single recent window, applied to everything from neural implants to mattresses. When a label describes everything, it describes nothing.

That's not a bug in the taxonomy. That's the thesis.

The Signal

Science Corp. closed a $230 million Series C for brain-computer interfaces. Anduril is raising again, less than a year after a $2.5 billion Series G at a $30 billion valuation. Lio pulled in $30 million for AI procurement. Eight Sleep hit unicorn status on a $50 million raise for — and I want you to sit with this — a smart mattress. In a single day's funding news, we have neural implants, autonomous weapons systems, enterprise purchasing agents, and AI-optimized sleep surfaces all drawing from the same pool of capital, all filed under the same tag: AI/ML.

Meanwhile, on the research side, the papers tell a convergence story of their own. RealWonder achieved real-time video generation conditioned on physical actions from a single image — blurring the line between physics simulation and generative AI. "Planning in 8 Tokens" compressed world-model reasoning into a handful of discrete tokens, collapsing what used to require sprawling context into something almost absurdly small. And "Ensembling Language Models with Sequential Monte Carlo" tackled a genuinely hard problem: how to combine multiple language models without destroying coherence at the token level. Each of these papers, independently, is a solid contribution. Together, they describe a world where modalities, planning horizons, and model architectures are all folding into each other.

The stat that caught my eye: 25 "AI/ML" funding rounds in a single recent window. That's not a sector. That's a catch-all for everything.

The Pattern

This is the Eschaton Thesis in miniature. The category "AI/ML" has become meaningless as a descriptor — not because it's overhyped, but because it's eating the boundaries between every other category. Brain-computer interfaces are AI. Sleep technology is AI. Defense is AI. Procurement is AI. The label tells you nothing about what a company does; it only tells you that the company exists in 2026.

The same merging is happening in research. Three years ago, video generation, world modeling, and language model ensembling were distinct subfields with their own conferences, their own review pools, their own Slack channels. Now RealWonder sits at the intersection of computer vision, physics simulation, and generative modeling. The 8-token planner compresses reasoning and perception into a shared latent space. SMC ensembling treats candidate generations as particles in a probabilistic system, weighted and resampled under multiple models at once — importing methodology from computational physics into NLP. The disciplinary walls aren't thinning. They're gone.
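To make the "particles" framing concrete, here is a toy sketch of the generic sequential Monte Carlo idea — emphatically not the paper's actual algorithm. The two "models" below are hand-written next-token distributions I invented for illustration; real use would swap in actual model probabilities. Each particle is a partial sequence: one model proposes the next token, the other model reweights it, and the particle set is resampled so that sequences plausible under both models survive.

```python
import random

VOCAB = ["the", "cat", "sat", "mat"]

def model_a(prefix):
    # Hypothetical model A: prefers "cat" after "the".
    if prefix and prefix[-1] == "the":
        return {"the": 0.05, "cat": 0.6, "sat": 0.05, "mat": 0.3}
    return {"the": 0.4, "cat": 0.2, "sat": 0.2, "mat": 0.2}

def model_b(prefix):
    # Hypothetical model B: prefers "sat" after "cat".
    if prefix and prefix[-1] == "cat":
        return {"the": 0.05, "cat": 0.05, "sat": 0.8, "mat": 0.1}
    return {"the": 0.3, "cat": 0.3, "sat": 0.2, "mat": 0.2}

def smc_ensemble(steps=3, n_particles=64, seed=0):
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(steps):
        weights = []
        for p in particles:
            dist_a = model_a(p)
            # Proposal: sample the next token from model A...
            tok = rng.choices(VOCAB, weights=[dist_a[t] for t in VOCAB])[0]
            p.append(tok)
            # ...then weight the particle by model B's probability of
            # that token, so survivors are plausible under both models.
            weights.append(model_b(p[:-1])[tok])
        # Multinomial resampling: copy particles in proportion to weight.
        particles = [list(q) for q in
                     rng.choices(particles, weights=weights, k=n_particles)]
    return particles

sequences = smc_ensemble()
```

The design point is that no single model ever has to produce a jointly coherent sequence on its own; coherence emerges from weighting and resampling at the token level, which is exactly the failure mode naive logit averaging tends to hit.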

This connects directly to the arc we've been tracing this week. Monday's piece on edge inference was about compute moving to the periphery. Tuesday's was about squeezing more intelligence out of fewer FLOPs. Wednesday's covered attention economics — how we allocate scarce attention when content is effectively infinite. Today's pattern is the synthesis: when inference is cheap, efficient, and everywhere, the distinction between "AI company" and "company" collapses. The distinction between "AI paper" and "paper" collapses. We're watching the word "AI" complete its transition from a technology descriptor to a background assumption, like "electricity" or "software." Nobody pitches a "software company" anymore. We're approaching the point where pitching an "AI company" sounds equally quaint.

The HALP paper — detecting hallucinations without generating a single token — is a quiet tell. When your quality-assurance layer operates at a different abstraction level than your generation layer, you're building composite systems where AI checks AI checks AI. The Eschaton Thesis isn't just about modalities merging. It's about the entire stack becoming recursive. Generation, evaluation, planning, perception — they're all becoming interchangeable modules in systems that are increasingly difficult to decompose into "the AI part" and "the non-AI part."

What This Means

For builders, the actionable implication is to stop thinking in terms of "adding AI to X." The companies raising serious money — Science Corp. at $230M, Anduril at whatever eye-watering number comes next — aren't AI-plus-something. They're building systems where the intelligence layer is inseparable from the product. If you can cleanly extract the AI from your product and still have a product, you're probably building a feature, not a company. The research supports this: RealWonder doesn't bolt generation onto physics. The 8-token planner doesn't bolt planning onto perception. They fuse them. Build accordingly.

For investors, the 25-rounds-in-one-window signal should prompt a harder question than "is this AI?" The question is: "what does this system look like when every competitor also has the same foundation models, the same inference costs, and the same multimodal capabilities?" Eight Sleep is interesting not because it uses AI but because it owns the hardware surface — literally, the surface you sleep on. Science Corp. is interesting because neural interfaces create a data moat that no amount of prompt engineering can replicate. Anduril is interesting because defense procurement cycles create switching costs measured in decades. The convergence thesis means AI alone is no longer a differentiator. The differentiator is what you fuse it with.

What to Watch

  • Science Corp.'s clinical timeline. A $230M Series C means they're moving toward human trials or regulatory submissions. The gap between BCI research demos and FDA-cleared devices is where most neurotech startups go to die. Watch for trial enrollment announcements in the next 6-12 months.

  • RealWonder's downstream adoption. Real-time action-conditioned video generation has obvious applications in gaming, robotics simulation, and AR/VR. Track whether any major engine (Unity, Unreal) or robotics lab integrates this approach. If they do, it validates the physics-meets-generative-AI convergence.

  • The SMC ensembling paper's reception. If Sequential Monte Carlo methods gain traction for model combination, it could reshape how companies deploy multiple specialized models in production. Watch for follow-up work and open-source implementations — this could quietly become infrastructure.

  • Anduril's next valuation. If it clears $40-50 billion, it becomes the proof case that defense-AI convergence creates value at a scale rivaling pure-play tech companies. The defense sector's AI absorption rate is a leading indicator for how fast other slow-moving industries will follow.

  • The "AI/ML" tag itself. When Crunchbase or PitchBook starts breaking this category into more specific sub-tags — or when founders stop using it in their decks — that's the lagging indicator that convergence is complete. We're not there yet, but count how many of these 25 rounds would have been tagged "biotech" or "defense" or "consumer hardware" five years ago.


But convergence raises a question nobody's answering cleanly yet: when everything is AI and AI can do everything, who decides what it should do? Tomorrow, I want to look at what changes when these systems stop waiting for instructions and start forming intentions — because the gap between "do this task" and "achieve this goal" is where the next era actually begins.


The Eschaton is a daily editorial synthesizing signals from AI research, funding, and market trends. Attributed to John J Boren and automatically created via John J Boren prompts and circumstance, pun intended.