D1.1 - Autonomous vision-guided grasping using any arm/gripper combination
Brief description of functionality/utility:
Fully autonomous grasping of arbitrary objects from random, self-occluding heaps. These are currently the world's most robust, generalisable and computationally fast autonomous grasping algorithms.
Advanced robotics technologies for handling hazardous nuclear waste are essential for cleaning up legacy nuclear waste. The ERL has developed fully autonomous machines that can grasp arbitrary objects from random heaps, in which objects partially occlude one another. For this technology to work:
- No prior knowledge of the object’s appearance or shape is needed.
- No machine learning or training data is needed.
The object's shape is reconstructed from the partial data captured by the vision system, and is matched to the geometry of the robotic gripper and its fingers.
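To make this geometry-matching idea concrete, the sketch below checks partial point-cloud data for antipodal point pairs that fit within a parallel-jaw gripper's maximum opening. This is a simplified, hypothetical illustration of geometry-only grasp reasoning (the function name, thresholds and parameters are our own), not the ERL's actual algorithm:

```python
import numpy as np

def antipodal_grasp_candidates(points, normals, max_opening, angle_tol_deg=15.0):
    """Find point pairs a parallel-jaw gripper could pinch.

    A pair (i, j) is a candidate when the two surface points fit inside
    the jaw opening and their outward surface normals point roughly away
    from each other along the grasp axis (the antipodal condition), so
    the closing jaws can hold the object by friction.
    """
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    candidates = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            axis = points[j] - points[i]
            width = np.linalg.norm(axis)
            if width == 0 or width > max_opening:
                continue  # pair is degenerate or wider than the jaws
            axis = axis / width
            # Antipodal check: outward normal at i opposes the grasp
            # axis, outward normal at j aligns with it.
            if (np.dot(normals[i], -axis) > cos_tol
                    and np.dot(normals[j], axis) > cos_tol):
                candidates.append((i, j, width))
    return candidates

# Toy example: two opposite faces of a 5 cm box, plus a top-face point.
points = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.0, 0.1, 0.0]])
normals = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cands = antipodal_grasp_candidates(points, normals, max_opening=0.08)
# One candidate: points 0 and 1 pinched at a jaw width of 0.05 m.
```

A real planner would of course work on dense, noisy sensor data and also score candidates for collision-free approach, but the core decision is the same kind of explicit 3D geometry test, with no learned model involved.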
From a performance point of view (both the speed of computing new grasps and reliability), this technology is second to none. The research team has ergonomically combined autonomous grasping with a haptic input device, enabling a human operator to control the remote manipulator while being assisted by the AI.
These algorithms and control methods work with all types of robot arm and gripper. TRL 6+ (technology demonstrated in a relevant environment) has been demonstrated using large heavy-duty industrial manipulators on nuclear industry sites, under full nuclear safety and national security regulations.
Arbitrary objects that are unknown to the robot, including deformable materials such as rubber gloves, hoses or cables, can be grasped from random, cluttered, self-occluding heaps.
- No machine learning is used: we solve this purely as a 3D geometry problem with fully explainable, mathematically efficient methods. This may be more palatable to nuclear site operators than black-box, learning-based approaches.
- No training data is needed.
- The method thus extends to new objects never seen before by the robot. In contrast, learning-based methods may struggle to extend to objects that differ greatly from those in the training dataset.