
ROBOTICS REVIEW


New System Enables Precise Pick-and-Place Applications

Austin Weber // Senior Editor // webera@bnpmedia.com

SimPLE is a learning-based tool to pick, regrasp and place objects precisely. It doesn’t require any prior real-world experience with the objects. Photo courtesy Massachusetts Institute of Technology

Picking and placing parts is one of the most common robotic applications. However, there is often a trade-off between high accuracy for a repetitive motion and reliability in an unstructured environment.

To teach robots how to move objects into an organized arrangement, engineers at the Massachusetts Institute of Technology (MIT) have developed a system called SimPLE (Simulation to Pick, Localize, and placE). Given only a model of the object, it generates training data by sampling grasps in simulation.

“SimPLE is a learning-based [tool] to pick, regrasp and place objects precisely that doesn’t require any prior real-world experience with the objects,” says Maria Bauza, Ph.D., a recent MIT grad who worked on the project at MIT’s Manipulation and Mechanisms Laboratory (MCube) along with lab director Alberto Rodriguez, Ph.D., and other students.

“SimPLE relies on three main components, which are developed in simulation,” explains Bauza. “First, a task-aware grasping module selects an object that is stable, observable and favorable to placing. Then, a visuotactile perception module fuses vision and touch to localize the object with high precision. Finally, a planning module computes the best path to the goal position.”
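The three stages Bauza describes can be sketched as a simple pipeline. The sketch below is purely illustrative: every function name, score, and data structure is hypothetical and not taken from the MIT codebase, and each real module (learned grasp scoring, visuotactile localization, motion planning with regrasps) is replaced by a toy stand-in.

```python
# Illustrative sketch of the three SimPLE stages; all names and scores
# here are hypothetical stand-ins, not from the MIT implementation.

def select_grasp(candidate_grasps):
    """Task-aware grasping: choose the simulated grasp that is most
    stable, observable, and favorable to placing."""
    return max(candidate_grasps,
               key=lambda g: g["stability"] + g["observability"] + g["placeability"])

def localize(vision_estimate, tactile_estimate):
    """Visuotactile perception: fuse vision and touch pose estimates
    (here, simply averaging two (x, y) guesses)."""
    return tuple((v + t) / 2 for v, t in zip(vision_estimate, tactile_estimate))

def plan_path(start_pose, goal_pose, steps=5):
    """Motion planning: interpolate a straight-line path to the goal
    (a real planner would also handle regrasps and collisions)."""
    return [tuple(s + (g - s) * i / steps for s, g in zip(start_pose, goal_pose))
            for i in range(steps + 1)]

grasps = [{"stability": 0.9, "observability": 0.2, "placeability": 0.5},
          {"stability": 0.7, "observability": 0.8, "placeability": 0.9}]
grasp = select_grasp(grasps)                  # second grasp scores highest
pose = localize((0.10, 0.20), (0.12, 0.18))   # fused estimate near (0.11, 0.19)
path = plan_path(pose, (0.50, 0.50))          # six waypoints ending at the goal
```

The point of the sketch is the division of labor: grasp selection happens before any real-world contact, perception refines the pose after the grasp, and planning only then commits to a placement motion.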

ROBOTICS REVIEW PRODUCTS

With their innovative design, COVAL's CVGC carbon vacuum grippers are perfectly suited to the demands of collaborative robot applications. The CVGC series stands out with its compact, light, strong carbon structure, collaborative safety measures, and a choice of gripping interfaces, with or without an integrated vacuum generator. It is easily mounted on your robot, guaranteeing a fast setup.

COVAL Vacuum Technology, Inc.
919-233-4855
contact-us@coval.com
www.coval.com

New Generation Collaborative CVGC Carbon Vacuum Gripper

The combination of these three modules allows the robot to compute robust and efficient plans to manipulate a wide range of objects with high precision, while requiring no previous interaction with them in the real world.

According to Bauza, SimPLE is one of the first studies that utilizes tactile feedback for a complex manipulation task. “During our experiments, we found that both tactile [feedback] and vision are necessary to achieve the best performance, which suggests the importance of considering visuotactile perception in robotics applications,” she explains.

“Visuotactile perception is the ability to combine vision and touch sensing to perceive objects,” says Bauza. “In our case, this means combining vision and touch to precisely estimate the pose (position and orientation) of objects in order to manipulate them.”

Bauza and her colleagues used a dual-arm YuMi machine from ABB Robotics that was equipped with a pair of WSG-32 grippers from Weiss Robotics GmbH.

“Traditionally, robotics research has focused on single-arm setups with parallel-jaw grippers,” explains Bauza. “While this setting is usually sufficient to perform grasping on objects, it is often insufficient to efficiently perform precise pick-and-place.

When you customize ASG SmartBenches to include the X-PAQ™ DC transducerized brushless screwdriving system, you get the traceability and quality control that is vital to your industry. Whether you need a multi-function workstation or a high-mix, low-volume production environment with two steps requiring semi-automated driving, ASG will build your SmartBench.

ASG, a Division of Jergens, Inc.
asginfo@asg-jergens.com
asg-jergens.com/automation/asg-smartbench

ASG Automation Solutions Increase Throughput and Quality

“Dual-arm robots, while more challenging to control, offer a wider task space enabling the machine to easily regrasp objects and reorient them as required by the final placing configuration,” Bauza points out.

“On each gripper, we attached two fingers,” says Bauza. “For the arm that produces the first grasp, these fingers are [equipped] with tactile sensors called GelSlim V3, which are the final evolution of a family of tactile sensors developed at MCube. On the other arm, we use nonsensorized fingers made with the same shape and materials.”

Bauza and her colleagues also used machine vision technology to process information from a depth camera, as well as the image-based signals produced by the two tactile sensors on the arm that performs the first grasp on an object. “The algorithm we devised for perception works equally for both sensing modalities,” she explains.

“The main idea behind it is to leverage simulation to learn an embedding space where two images encoded in this embedding space are close if the object poses that generated them are close,” says Bauza. “As a result, to obtain the distribution of possible object positions given a new image from the depth camera or from a tactile sensor, we just need to encode that image and compare its embedding against a dense set of possible images previously generated in simulation.”
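The retrieval step Bauza describes can be illustrated with a toy example: compare the embedding of a new sensor image against pre-computed embeddings of simulated images at known poses, and turn the distances into a probability distribution over poses. Everything below is synthetic; the hand-built embeddings stand in for the learned encoder from the research.

```python
import numpy as np

# Toy illustration of embedding-based pose retrieval; the data and
# "embeddings" here are synthetic stand-ins for a learned encoder.

def pose_distribution(query_embedding, sim_embeddings, temperature=0.1):
    """Turn distances between the query embedding and the pre-computed
    simulated embeddings into a probability distribution over poses."""
    d = np.linalg.norm(sim_embeddings - query_embedding, axis=1)
    logits = -d / temperature
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Pre-computed in simulation: one embedding per candidate object pose
# (here, a 1D pose such as an object's x-offset).
sim_poses = np.linspace(0.0, 1.0, 11)
sim_embeddings = np.stack([np.array([x, x**2]) for x in sim_poses])

# A new camera or tactile image whose true pose is near x = 0.3.
query = np.array([0.31, 0.31**2])
probs = pose_distribution(query, sim_embeddings)
best = sim_poses[np.argmax(probs)]  # pose candidate closest in embedding space
```

Because the comparison is just a nearest-neighbor lookup in embedding space, the same routine serves both sensing modalities: only the encoder's input (depth image or tactile image) differs.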

ZiMo enables easy and mobile positioning at various locations without the need for complex integration into existing systems. An intuitive setup allows for operation without programming knowledge, and its flexible configuration with quick adaptation enables profitable automation solutions, even for small-batch sizes. Thanks to its compact size and adaptable setup, ZiMo offers versatility and flexibility.

Zimmer Group US, Inc.
828-855-9722
info.us@zimmer-group.com
www.zimmer-group.com

Mobile Robotic Cell Provides Flexible Automation Solution

The MIT engineers tested 3D-printed objects with a range of sizes and shapes, including a mix of household objects and industrial parts. SimPLE picked and placed 15 diverse objects, achieving successful placements into structured arrangements with 1-millimeter clearance more than 90 percent of the time for six of the objects and more than 80 percent of the time for 11 of them.

SimPLE relies on three main components, including visuotactile perception (middle) and motion planning (right). Illustration courtesy Massachusetts Institute of Technology

Bauza believes SimPLE could eventually be applied in many real-world manufacturing settings. “By performing precise pick-and-place, it is able to take an unstructured set of objects into a structured arrangement, enabling any downstream task,” she claims. “As a result, SimPLE could fit well in [environments] where automation is already standard, such as the automotive industry.

“SimPLE could also enable automation in many semistructured environments, such as medium-sized factories, hospitals or medical laboratories, where automation is less commonplace,” says Bauza. “Semistructured environments are at the same time structured (job requirements don’t change drastically) but flexible, as the specific objects and tasks might change from time to time. SimPLE relies on machine learning and simulated experience with objects, which is a benefit for settings where flexibility is a requirement.

“We are currently working to increase the dexterity and robustness of systems like SimPLE,” notes Bauza. “Two directions of future work include enhancing the dexterity of the robot to solve even more complex tasks, and providing a closed-loop [system] that instead of computing a plan, computes a policy to adapt its actions continuously based on the sensors’ observations. We…plan to continue pushing dexterity and robustness for high-precision manipulation in our ongoing research.”


September 2024 | Vol. 67, No. 9
