State of the machines

A look at where capital is flowing

The real YTD robotics story is the stack beneath the robot

One of the clearest capital clusters is around embodied intelligence and foundation model efforts.

Some of this capital still sits inside companies primarily classified as humanoid builders. But the underlying signal is the same: money is being directed toward the intelligence layer beneath the body.

Most robotics coverage fixates on the body. It is the easiest part of the story to show, especially when it looks human.

The more interesting YTD signal sits underneath it. So far this year, more than $2 billion has gone into embodied intelligence and robot model efforts, even before counting several undisclosed rounds.

That does not mean the category is solved, but it does show where serious conviction is building.

Capital is clustering around the intelligence layer

The clearest signal is funding.

Skild AI’s $1.4 billion Series C in January was one of the defining robotics financings of the year. It also did not sit alone for long.

Since then, a wider group of companies focused on embodied intelligence, robot foundation models and adjacent control layers has also raised significant capital. That includes X Square Robot, RLWRLD, AI² Robotics, Xiaoyu Bot and Rhoda AI.

Some are building robot models. Some are focused on world models, VLA systems or broader robot intelligence. Some still overlap with humanoid hardware.

Capital is also backing the layer that helps robots perceive, reason, adapt and operate in less structured environments.

This is not only a US story. China-based players including X Square Robot, AI² Robotics and Xiaoyu Bot are part of the same pattern. The intelligence layer is turning into a global race.

Some are building the brain and little else. Others are building the brain and the body together. That split is one of the central questions in robotics right now. Does the intelligence layer become its own durable part of the stack, or does it get pulled into full stack robotics companies that want tighter control over data, hardware, deployment and feedback loops?

Right now, the market is funding both paths.

One bet is that the robot brain becomes a valuable standalone layer, usable across many systems and environments. The other is that robotics intelligence cannot really be separated from the hardware, the data loop, the operating constraints and the deployment context it is trained against.

That tension is unresolved. Either way, the market is spending serious money on the intelligence problem itself.

Capital is also moving into adjacent machine-intelligence categories, including world models and AV systems.

Set against the wider market, the robotics signal is clearer. This is not just spillover from a broader physical-AI cycle. A distinct cluster is forming around robot intelligence itself.

Rhoda AI is one of the clearest recent examples. The company emerged from stealth in March with a $450 million Series A and introduced FutureVision, a robot intelligence platform aimed at dynamic industrial settings. The significance is simple. Large rounds are still being written for the intelligence layer itself.

Even where the company story still sits close to humanoids, the same pattern appears. Capital is also backing teams working on the layer beneath the machine.

Capability work is advancing too

Funding is only part of the story.

Physical Intelligence is one of the better examples here. In late February, it explicitly framed general-purpose physical intelligence as a platform layer for robotics applications. Then in early March, it published work on Multi-Scale Embodied Memory (MEM), a system designed to give vision-language-action models both short-term and long-term memory for longer-horizon tasks.

Robotics still struggles with continuity. Short demos are one thing. Maintaining context, recovering from mistakes and carrying a task across longer windows are another. Better memory, world understanding and action planning do not solve deployment, but they do move the stack forward.

Early this year, NVIDIA expanded its robotics stack with new GR00T and Cosmos releases aimed at full-body control, reasoning and synthetic-data-driven development. Microsoft also introduced Rho alpha, its latest robotics model from Microsoft Research.

There are also important examples coming from companies building both the model layer and the robot itself. In January, Figure introduced Helix 02, extending end-to-end control across the full body of its humanoid system. Around the same time, 1X outlined its video-pretrained world model for NEO, showing another route into robot intelligence, one being built in direct contact with a hardware program.

Those examples sharpen the open question running through this category. Will the intelligence layer become a standalone platform that sits across many robotic systems, or will the strongest capabilities come from companies that build the brain and the body together?

The deployment stack is filling in

The third signal is that this is not just about model companies raising money and publishing research.

The deployment and validation layer is moving too.

One of the strongest examples came from ABB Robotics and NVIDIA, which said they had narrowed the sim-to-real gap through RobotStudio HyperReality. This goes beyond any single marketing claim. Industrial players are putting real effort into simulation, testing and production-line tooling before robots reach the floor.

The bottleneck in robotics is rarely just whether a model can produce an impressive clip. It is whether systems can be designed, tested, validated and operated at acceptable cost and reliability in real environments.

That is why NVIDIA’s broader push matters beyond the model layer. Its work across simulation, synthetic data, edge compute and developer tooling points to something larger than a single product cycle. It points to an attempt to become foundational infrastructure for physical AI.

Other moves fit the same pattern. NVIDIA’s integration with Hugging Face / LeRobot, more accessible simulation environments, and dedicated edge hardware like the Jetson T4000 all help narrow the gap between research progress and operational use.

The platform versus full stack question

Broadly, the market is funding both a platform layer bet and a full stack bet.

The platform layer view is that the robot brain becomes a horizontal layer that can sit across many machines and environments. The appeal is obvious. Faster distribution, less manufacturing complexity and the chance to become foundational infrastructure for a much wider market.

The full stack view is that the best robotics intelligence may only emerge through tight integration with hardware, deployment environments and proprietary data loops. The appeal there is different. Better feedback, tighter optimization and potentially stronger defensibility.

Neither path has won. That is what makes this moment interesting. The market is not converging on one architecture yet. It is funding multiple ways of trying to solve the same core problem.

What this actually means

The easiest mistake here is to act as if robotics has cracked the hard part.

We have seen robotics hype before. What looks different this time is not just the money, but the convergence of foundation model progress, synthetic data tooling, simulation infrastructure and early deployment efforts around the same stack.

Reliability, safety, certification, economics and integration still sit between technical progress and widespread deployment. The body is still hard. The data is still hard. The environments are still hard.

That is why this shift is worth watching.

The real shift is that investors, researchers and infrastructure providers are putting more effort into the layers that make useful robots more plausible.

In other words, the market is filling in the missing layers beneath the machine.

What to watch next

What matters now is not just who can raise, or who can publish the best demo. It is who can turn model, tooling and infrastructure gains into real deployment.

That means watching which intelligence-layer companies land production contracts, whether open-model strategies accelerate adoption or compress margins, and whether full-stack builders can keep the most valuable feedback loops inside their own systems.

If you only watch the robot body, you miss part of the story.

The more important YTD signal is that robotics is increasingly being funded and built as a stack.

The visible layer still gets the headlines. Underneath it, capital is clustering around embodied intelligence, capability work is advancing, and the deployment stack is getting stronger.

The body gets the attention. The stack underneath it is where the story is moving.
