Sudo AI reveals sudo R1, a manipulation system trained entirely in simulation
Sudo AI's first public system uses no real-world demonstrations; a closed-loop policy running at 15 to 25 Hz picks unseen objects including glass and fabric across changing conditions, with the company reporting near 98% first-attempt success in its own evaluation

In April 2026, Sudo AI publicly revealed sudo R1, a fully integrated robot system built around object picking and powered by a manipulation-centric foundation model. The system runs on self-developed hardware and software and is trained entirely in simulation, with no real-world demonstrations used at any point in training.
The system
Sudo AI frames picking as the gateway primitive of physical manipulation: nearly every useful downstream manipulation task begins with a pick, and if picking cannot be made reliable across the long tail of real-world objects, broader manipulation workflows remain out of reach regardless of how capable the planning layer becomes. sudo R1 is positioned as the foundation for that primitive before the company expands scope.
The policy runs closed-loop at 15 to 25 Hz, conditioned on the robot's latest observation at every control step. This is a deliberate departure from action-chunking architectures, where a model observes the environment, predicts a sequence of future actions, and executes the full sequence before taking the next observation. A system chunking at 20 Hz with 20-step chunks effectively observes once per second. Sudo AI's approach observes continuously, enabling mid-grasp recovery, dynamic target tracking, and trajectory adaptation when the scene changes during execution.
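The difference is easiest to see side by side. The sketch below is illustrative only, not Sudo AI's implementation; the policy and robot interfaces are hypothetical, and the rates come from the article's own once-per-second example.

```python
import time

CONTROL_HZ = 20    # within sudo R1's reported 15 to 25 Hz band
CHUNK_STEPS = 20   # chunk length from the once-per-second example above

def run_chunked(policy, robot):
    """Action chunking: observe once, then execute the whole chunk open-loop."""
    while not robot.done():
        obs = robot.observe()                       # one observation...
        actions = policy.predict(obs, CHUNK_STEPS)  # ...drives the next 20 steps
        for a in actions:
            robot.step(a)                           # scene changes here go unseen
            time.sleep(1 / CONTROL_HZ)              # effective observation rate: 1 Hz

def run_closed_loop(policy, robot):
    """Closed-loop control: re-observe and re-predict at every control step."""
    while not robot.done():
        obs = robot.observe()                       # fresh observation each step
        a = policy.predict(obs, 1)                  # single action conditioned on it
        robot.step(a)                               # permits mid-grasp recovery
        time.sleep(1 / CONTROL_HZ)                  # observes at the full 20 Hz
```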
The training path
Sudo AI says zero real-world demonstrations were used in training. The company describes the sim-to-real transfer challenge as requiring every gap in the simulation chain to be closed simultaneously: physics fidelity, contact modelling, domain randomisation, and sensor simulation. Achieving zero-real-data transfer for contact-rich manipulation at the reported reliability took years of dedicated engineering, the company says, and is presented as a core differentiator, both for performance and for the compounding cost and iteration-speed advantages it creates.
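Of the techniques the company names, domain randomisation is the best documented in the sim-to-real literature: simulator parameters are resampled every episode so the policy cannot overfit any single physics or sensing configuration. A minimal sketch, with hypothetical parameter ranges and a hypothetical simulator interface (Sudo AI has not published its own):

```python
import random

# Hypothetical ranges; Sudo AI has not disclosed its actual randomisation scheme.
PARAM_RANGES = {
    "friction_coeff":   (0.3, 1.2),    # contact modelling
    "object_mass_kg":   (0.05, 2.0),   # physics fidelity
    "light_intensity":  (0.2, 1.5),    # sensor simulation
    "camera_noise_std": (0.0, 0.03),   # sensor simulation
}

def randomise_episode(sim):
    """Resample simulator parameters at the start of each training episode."""
    params = {name: random.uniform(lo, hi)
              for name, (lo, hi) in PARAM_RANGES.items()}
    sim.set_params(**params)           # hypothetical simulator call
    return params
```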
The value of the claim, if it holds at scale, is economic as much as technical. Real-world robot data collection is slow, expensive, and difficult to generalise. A training paradigm that runs entirely in simulation and scales by generating more synthetic data rather than deploying more robots to collect demonstrations would substantially lower the cost of capability expansion.
The evaluation
Sudo AI evaluated sudo R1 on objects never encountered during training, spanning rigid and deformable, opaque and transparent, matte and reflective, including transparent glass and soft fabric, across changing lighting conditions, cluttered scenes, and physical interference from obstacles. Dine, Sudo AI's design partner, reported first-attempt success rates near 98 percent across a wide range of real-world objects. Sudo AI presents the system's performance through a 60-minute uncut continuous run, a format that provides a concrete test window, but this is a company-published evaluation, not an independent benchmark.
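Sudo AI has not published its scoring methodology, so the sketch below is an assumption about how "first-attempt success" is conventionally tallied: only the first try at each object counts, and retries after a failed grasp do not.

```python
from dataclasses import dataclass

@dataclass
class PickAttempt:
    object_id: str
    attempt_index: int   # 1 = first try at this object
    success: bool

def first_attempt_success_rate(log: list[PickAttempt]) -> float:
    """Fraction of objects grasped successfully on attempt 1."""
    firsts = [a for a in log if a.attempt_index == 1]
    return sum(a.success for a in firsts) / len(firsts)

# e.g. 98 first-try successes over 100 objects in a continuous run -> 0.98
```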
Maturity
Sudo AI acknowledges directly that true production-grade performance remains ahead, and that achieving generalisability, agility, robustness, and spatial intelligence simultaneously in a single policy is a fundamentally different challenge from achieving any one in isolation. No customer deployments, pricing, fleet use, technical report, open weights, or independent benchmarks have been disclosed. sudo R1 is a public reveal with company-run evaluation; the simulation-to-real transfer thesis has been demonstrated on picking across a documented object range, not validated across production environments at scale.
Have a robotics update Korthos should review? Send news, deployments, product releases, funding rounds, research, or media to tips@korthos.xyz or reach out on X at @agkorthos.