At the MFG Meeting session, “AI-Enabled Robotics: Unlocking High-Mix Manufacturing Today,” Simon Lapeyry, head of revenue at Intrinsic (an AI robotics company within Alphabet), outlined how advances in artificial intelligence are transforming automation for high-mix production environments.
Due to increasingly agile development cycles and evolving demand for personalized products, modern production systems require greater flexibility. While the number of machine tools that ship equipped with robots and robot-integrated manufacturing cells has been on the rise for years, a key bottleneck is the availability of skilled robot programmers and the rate at which they can reconfigure integrated robotics to accommodate rapidly changing products. As a result, the proportion of robot-integrated machine tools has stagnated despite immense potential for operational efficiency gains through industrial automation.
To address these workforce gaps and accelerate time-to-value in high-mix environments, workflow advances across four key building blocks are needed:
Perception
Automated grasp planning
Automated motion planning
Sensor-guided insertion
Eliminating Fixtures with Intelligent Perception
A traditional method for reducing programming complexity and duration is to use infeed fixturing to corral parts into a repeatable, known location. In a high-mix environment, however, this incurs additional cost and time to develop specialized fixtures for each distinct product and to sort parts into them.
By leveraging computer vision and computer-aided design (CAD) models, AI-driven perception enables robots to identify distinct parts within a batch and determine their position and orientation. As a result, fixture development costs, cell reconfiguration effort, and cycle times can be significantly reduced.
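At the heart of this perception step is estimating a part's position and orientation (its six-degree-of-freedom pose) by matching sensor data against the CAD model. Below is a minimal, self-contained sketch of that idea rather than a description of Intrinsic's implementation: assuming correspondences between CAD model points and observed points are already known, the Kabsch algorithm recovers the rotation and translation that align them.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate the rigid transform (R, t) mapping CAD model points onto
    observed sensor points, given matched correspondences (Kabsch algorithm)."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection being returned instead of a rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Toy check: a part rotated 30 degrees about Z and shifted along X
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
model = np.random.rand(100, 3)
observed = model @ R_true.T + np.array([0.25, 0.0, 0.0])
R_est, t_est = estimate_pose(model, observed)
print(np.allclose(R_est, R_true))  # True
```

In practice the correspondences are not given; they are established by feature matching or iterative-closest-point refinement, with the alignment step above serving as the inner solver.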
Smarter Grasping for Variable Parts
However, without infeed fixtures to sort and corral components into a repeatable, known location for a robot to grasp, a new challenge arises: How can a robot pick up a uniquely shaped product from an arbitrary location? While traditional programming methods rely on user-defined grasp-point annotations created during robot programming, automated grasp planning uses product and robot CAD models to determine how the robot can grasp an arbitrary workpiece without requiring user input or programming. When coupled with AI-driven perception, this curtails development, reconfiguration, and cycle times.
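To make this concrete, the sketch below shows one simple way grasp candidates can be derived from part geometry alone, with no hand annotation: sample surface points and normals from the CAD model, then keep antipodal pairs that fit within a parallel-jaw gripper. It is an illustrative toy rather than Intrinsic's planner, and the gripper width and angular tolerance are arbitrary assumptions.

```python
import numpy as np

def antipodal_grasp_candidates(points, normals, max_width=0.08, angle_tol_deg=15):
    """Score point pairs as parallel-jaw grasp candidates.

    A pair qualifies when the two surface normals are roughly antiparallel
    (antipodal contact) and the points fit inside the gripper opening.
    points, normals: (N, 3) arrays sampled from the part's CAD surface.
    Returns (i, j, score) tuples, best first.
    """
    antiparallel = np.cos(np.radians(180 - angle_tol_deg))  # about -0.97
    candidates = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            axis = points[j] - points[i]
            width = np.linalg.norm(axis)
            if width == 0 or width > max_width:
                continue                      # does not fit in the gripper
            axis /= width
            if np.dot(normals[i], normals[j]) > antiparallel:
                continue                      # contacts are not opposing
            # Prefer grasps where the normals line up with the closing axis
            score = (abs(np.dot(normals[i], axis)) + abs(np.dot(normals[j], axis))) / 2.0
            candidates.append((i, j, float(score)))
    return sorted(candidates, key=lambda c: -c[2])

# Toy usage: two opposite faces 50 mm apart form a valid antipodal pair
pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.0, 0.2, 0.0]])
nrm = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
print(antipodal_grasp_candidates(pts, nrm))  # [(0, 1, 1.0)]
```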
Adaptive Motion Planning in Dynamic Environments
Once a robot has grasped a part, the journey to its destination can begin. That journey can be disrupted by obstacles within the work cell, however, and traditional programming methods, which rely on hand-taught waypoints to avoid obstacles, are cumbersome for developers and cannot react in real time to dynamically changing environments.
Much like a navigation app, automated motion planning charts a collision-free course and reroutes mid-journey so that the part and robot do not collide with themselves or other equipment within the cell. Consequently, product defects and equipment damage caused by collisions can be avoided while cumbersome manual path planning is minimized or eliminated.
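The plan-and-reroute behavior can be illustrated with a deliberately simplified sketch: A* search over a small 2-D occupancy grid, replanned when a new obstacle appears on the original route. A production planner works in the robot's higher-dimensional configuration space with continuous collision checking; this toy only demonstrates the principle.

```python
import heapq
from itertools import count

def plan(grid, start, goal):
    """A* search on a 2-D occupancy grid (1 = obstacle).
    Returns a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()                        # tiebreaker so the heap never compares cells
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                  # rebuild the path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None

# Plan once, then an obstacle appears on the route and the planner reroutes
grid = [[0] * 4 for _ in range(3)]
path = plan(grid, (0, 0), (2, 3))
grid[1][1] = 1                           # e.g. a fixture or another robot enters the cell
if path and (1, 1) in path:
    path = plan(grid, (0, 0), (2, 3))    # replan around the new obstacle
print(path)
```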
Precision at the Point of Insertion
At the end of its journey, the part arrives at its destination. To thread the needle, sensor-guided insertion automatically adjusts to a dynamic landscape, much as a pilot fine-tunes an approach onto a runway despite varying conditions such as crosswinds.
By dynamically adjusting to changing conditions during placement or insertion, challenges arising from part variation, tolerances, and misalignment can be mitigated in real time.
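One common way to realize this behavior is force-feedback (admittance-style) control: measured contact forces are converted into small corrective motions until the part is aligned. The sketch below simulates that loop for a peg-in-hole case; the contact model, gains, and tolerances are illustrative assumptions, and a real cell would read forces from a force/torque sensor rather than computing them.

```python
import numpy as np

def insert_with_force_feedback(peg_xy, hole_xy, gain=0.3, stiffness=2.0,
                               force_tol=0.2, max_steps=50):
    """Simulated compliant insertion: lateral contact forces are fed back as
    small corrective motions until the peg is centered over the hole.

    peg_xy, hole_xy: lateral positions in mm. gain: admittance gain (mm per N).
    stiffness: simulated contact stiffness (N per mm). force_tol: threshold (N).
    """
    peg_xy = np.asarray(peg_xy, dtype=float)
    for step in range(max_steps):
        # Stand-in for a force/torque sensor reading: contact pushes the peg
        # back toward the hole in proportion to the misalignment.
        lateral_force = stiffness * (np.asarray(hole_xy) - peg_xy)
        if np.linalg.norm(lateral_force) < force_tol:
            return step, peg_xy                  # aligned: proceed with insertion
        peg_xy = peg_xy + gain * lateral_force   # admittance-style correction
    raise RuntimeError("insertion did not converge")

steps, final = insert_with_force_feedback(peg_xy=[1.5, -0.8], hole_xy=[0.0, 0.0])
print(steps, np.round(final, 3))  # converges after a handful of corrective moves
```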
Scaling Productivity with Physical AI
By combining the building blocks of perception, automated grasp planning, automated motion planning, and sensor-guided insertion, and enhancing them with physical AI, operational efficiency gains can be realized while the skilled-workforce bottleneck is eased through improved productivity. As a result, time-to-value is accelerated and the ability to handle increasingly diverse products in high-mix environments is improved.
Leveraging physical AI, Intrinsic is developing a robot-agnostic development environment to operationalize these technologies. With it, Intrinsic aims to reduce development and reconfiguration costs by providing a one-touch solution for high-mix environments that identifies, grasps, moves, and places parts. To learn more about the company's technology and vision for high-mix manufacturing, visit Intrinsic’s booth 236233 at IMTS 2026, Sept. 14-19, in Chicago, Ill.