At GTC 2026 in Silicon Valley, Universal Robots presented the UR AI Trainer solution developed in cooperation with Scale AI. The system is intended to accelerate the development of artificial intelligence models for robotics and to reduce the gap between research and industrial deployments. High-quality data generated in training environments, where robots learn by imitating humans, plays a key role. Combined with the UR AI Accelerator platform and Scale AI software, it creates a consistent robotic data pipeline covering both real-world applications and simulation environments based on Nvidia Omniverse and Isaac Sim. Demonstrations with partners Scale AI and Generalist AI illustrate how these solutions can enable the development of Physical AI systems capable of performing complex manipulation tasks outside the laboratory environment.
Imitation learning and direct torque control
Two of the main challenges in training AI-enabled robots are fragmented hardware infrastructure and the low quality of collected data. Many training datasets are generated on research robots that are not suited to production conditions. In addition, many systems rely solely on image analysis, which makes it difficult to perform precise operations that require physical contact with objects.
Anders Beck, VP of AI Robotics Products at Universal Robots, highlights users' changing expectations of AI-based solutions. He notes that customers, from large enterprises to AI research labs, now need effective ways to collect high-quality, synchronized data from robots and vision systems in order to train models on the same robots that will later be deployed. According to Beck, the UR AI Trainer is intended to be the first tool in the industry to directly connect the laboratory to the factory floor in the AI model training process.
The UR AI Trainer system uses direct torque control and force feedback, which are intended to give developers precise control over the robot's physical interaction with its environment. AI models can be trained on the same hardware that is already used in more than 100,000 industrial applications of Universal Robots, which simplifies the transition from prototype to production application.
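As a loose illustration of what torque-level control enables, the sketch below runs a generic impedance controller on a simulated one-joint arm: instead of rigidly tracking positions, the joint behaves like a spring-damper around a target, which is the kind of compliant contact behaviour torque control makes possible. All classes, gains and the joint model are illustrative assumptions, not the actual UR interface.

```python
import math

class SimulatedJoint:
    """Minimal 1-DOF joint model: torque in, position/velocity out.
    This stands in for a real robot joint purely for illustration."""
    def __init__(self, inertia=0.1, damping=0.05):
        self.inertia = inertia
        self.damping = damping
        self.q = 0.0      # position (rad)
        self.qd = 0.0     # velocity (rad/s)

    def step(self, torque, dt=0.001):
        # Forward Euler integration of the joint dynamics
        qdd = (torque - self.damping * self.qd) / self.inertia
        self.qd += qdd * dt
        self.q += self.qd * dt

def impedance_torque(q, qd, q_target, kp=50.0, kd=5.0):
    """Spring-damper torque law: the joint yields under external
    contact forces instead of fighting them at full stiffness."""
    return kp * (q_target - q) - kd * qd

joint = SimulatedJoint()
target = math.pi / 4  # 45 degrees
for _ in range(5000):  # 5 s of control at 1 kHz
    tau = impedance_torque(joint.q, joint.qd, target)
    joint.step(tau)

print(round(joint.q, 3))  # settles near the target
```

The design point is that the gains kp and kd set an apparent stiffness and damping, so contact-rich tasks can be tuned to be gentle rather than position-perfect.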
Leader-follower and integrated robotic data pipeline
UR AI Trainer enables guiding Universal Robots through tasks in a leader-follower configuration. Operators physically guide the robot acting as the “leader” through subsequent stages of the process, while the synchronized “follower” robot reproduces the motion in real time. During each demonstration the system records synchronized motion, force and vision data, creating structured datasets required to train Vision-Language-Action models.
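The recording side of such a leader-follower setup can be pictured as a loop that mirrors each leader state to the follower and logs one time-aligned sample of motion, force and vision per step. The sketch below is a self-contained toy with stand-in callables; none of the function names reflect the real UR AI Trainer API.

```python
# Hypothetical sketch of a leader-follower demonstration recorder.
# `mirror`, `sense_force` and `capture_image` are illustrative stand-ins.

def record_demonstration(leader_states, mirror, sense_force, capture_image):
    """Mirror each leader state to a follower and log a synchronized sample."""
    dataset = []
    for t, q_leader in enumerate(leader_states):
        q_follower = mirror(q_leader)          # follower reproduces the motion
        dataset.append({
            "t": t,                            # shared timestamp index
            "leader_q": q_leader,              # leader joint positions
            "follower_q": q_follower,          # follower joint positions
            "wrench": sense_force(q_follower), # force/torque at the follower
            "image": capture_image(t),         # synchronized camera frame
        })
    return dataset

# Stand-in signals for a 3-step demonstration on a 2-joint arm
leader_trace = [[0.0, 0.0], [0.1, 0.2], [0.2, 0.4]]
demo = record_demonstration(
    leader_trace,
    mirror=lambda q: list(q),               # ideal 1:1 reproduction
    sense_force=lambda q: [0.0] * 6,        # dummy 6-axis wrench
    capture_image=lambda t: f"frame_{t}",   # placeholder for an image
)
print(len(demo), demo[1]["follower_q"])
```

The key property for imitation learning is that every modality in a sample shares one timestamp, so a Vision-Language-Action model can learn the mapping from observations to actions without post-hoc alignment.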
The system runs on the UR AI Accelerator platform, which connects Universal Robots with Scale AI software. This makes it possible to collect data on UR robots in production environments and at scale. In this way, a continuous feedback loop is created that feeds the process of ongoing optimization of Physical AI systems, covering both data collection and model training and refinement.
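The continuous feedback loop described above reduces to a simple collect–train–deploy cycle in which the dataset grows each round and the refined model is pushed back to the robots. The sketch below shows that control flow with toy stand-ins; every callable is a hypothetical placeholder, not a Scale AI or Universal Robots API.

```python
# Illustrative collect -> train -> deploy loop; all callables are stand-ins.

def improvement_loop(collect, train, deploy, rounds):
    """Run `rounds` iterations of the continuous improvement cycle."""
    dataset, model = [], None
    for _ in range(rounds):
        dataset.extend(collect(model))  # gather data (model is None in round 1)
        model = train(dataset)          # retrain on the growing dataset
        deploy(model)                   # push the refined model back out
    return model, dataset

# Toy stand-ins: each round yields two new samples; "training" counts them.
deployed = []
model, data = improvement_loop(
    collect=lambda m: [("demo", len(deployed)), ("demo", len(deployed))],
    train=lambda d: {"trained_on": len(d)},
    deploy=deployed.append,
    rounds=3,
)
print(model, len(data))
```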
Ben Levin, General Manager of Physical AI at Scale AI, points out that Universal Robots' global presence provides a convenient platform for data collection and AI deployment. He says the cooperation is producing an integrated robotic data pipeline intended to let customers train, deploy and improve AI models faster than before. The companies plan to publish an extensive industrial dataset recorded on Universal Robots later this year.
UR AI Trainer and simulation environment demonstrations at GTC
At GTC, the UR AI Trainer system had its official premiere in the form of demonstrations at the Universal Robots booth. Conference participants could control two UR3e robots acting as “leaders” and providing haptic signals to two UR7e robots acting as “followers”. With haptic feedback, visitors carried out a complex smartphone packaging task, while the system simultaneously collected data needed for imitation learning and training of Vision-Language-Action models. Demonstration data was recorded in real time in the Scale AI environment and could be replayed directly in the UR AI Trainer tool.
The process of acquiring training data was also presented in a virtual environment. Nvidia Omniverse and Isaac Sim were used to create a virtual smartphone packaging workstation. Participants could control a simulated dual-arm UR3e system with real-time haptic feedback using two Haply Inverse3 devices acting as “leaders”. The solution was intended to provide physics-accurate simulation, enabling the generation of synthetic data that reflects real mechanical interactions.
Universal Robots is also analysing the potential use of the Nvidia Physical AI Data Factory Blueprint to automate and scale synthetic data generation. The goal is to turn global computing resources into an efficient production system for high-quality training data for robots, which can then be used to train Physical AI systems in various applications.
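A common way to scale synthetic data generation of this kind is domain randomization: each simulated episode perturbs physical parameters such as object pose and friction so the resulting dataset covers a spread of mechanical conditions. The sketch below shows the idea in plain Python; the parameter names and ranges are invented for illustration and are not Isaac Sim or Data Factory Blueprint APIs.

```python
import random

def randomize_scene(rng):
    """Sample one scene variation: the phone's pose and the surface
    friction are perturbed so each synthetic episode differs mechanically.
    All names and ranges here are illustrative assumptions."""
    return {
        "phone_x": 0.30 + rng.uniform(-0.05, 0.05),  # metres
        "phone_y": 0.10 + rng.uniform(-0.05, 0.05),  # metres
        "friction": rng.uniform(0.4, 0.9),           # coefficient
    }

def generate_synthetic_episodes(n, seed=0):
    """Produce n randomized scene configurations, reproducibly seeded."""
    rng = random.Random(seed)
    return [randomize_scene(rng) for _ in range(n)]

episodes = generate_synthetic_episodes(1000)
print(len(episodes))
```

Seeding the generator keeps the dataset reproducible, which matters when synthetic episodes are regenerated at scale across many machines.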
Embodied foundation models in industrial applications
The data collection demonstrations were complemented by a showcase prepared by Generalist AI, the preferred partner of Universal Robots in the area of advanced foundation models for robotics. The company presented the first public demonstration of embodied foundation models, that is, artificial intelligence models acting in the physical world. Two UR7e robots autonomously performed a complex smartphone packaging task, demonstrating high motion precision, coordination and advanced object manipulation in real working conditions.
The demonstration was intended to illustrate how high-quality training data collected at scale, combined with state-of-the-art model architectures, can create Physical AI systems capable of operating outside the laboratory environment. Pete Florence, co-founder and CEO of Generalist AI, notes that the embodied foundation models developed by the company are intended to set standards in terms of precision and reliability. He emphasises that the demonstration on the industrial Universal Robots platform shows how the physical “common sense” of AI models can be translated into practical applications and provide a basis for large-scale deployments across sectors.
Anders Beck of Universal Robots points out that the use of UR technology by leaders in AI model training and data acquisition is one of the factors that has helped establish Universal Robots as a preferred platform for Physical AI solutions. He also took part in the GTC panel "Beyond the Workcell: Scaling Robotics Workflows Across the Factory Floor", devoted to extending robotic workflows from individual workcells to entire production plants.
Relevance for industrial robotics and automation
The solutions presented at GTC 2026 by Universal Robots, Scale AI and Generalist AI indicate the direction of development for robotic systems in which the data pipeline and the ability of robots to learn from demonstrations and synthetically generated scenarios play a key role. From an industrial perspective, this means a shift from rigidly programmed tasks to robots capable of perception, reasoning and adaptation in changing production environments.
According to the assessment presented by Amit Goel, Head of Robotics and Edge AI Ecosystem at Nvidia, the development of Physical AI requires moving away from traditional, pre-programmed automation towards more versatile systems. The use of Nvidia Isaac simulation environments and synthetic data generation tools is expected to enable the creation of a scalable infrastructure for acquiring and generating high-quality data, which is necessary to train a new generation of autonomous systems at scale.
In the context of manufacturing plants, including plastics processing and packaging operations, such solutions may, in the longer term, support the automation of complex manipulation tasks such as product packaging, component handling or assembly work. The integration of collaborative robots with Physical AI systems opens the way for applications in which mechanical, sensing and artificial intelligence components are tightly integrated within a single production environment.
