Computer vision technology helps robots grasp transparent objects better

To see and pick up objects, robots often rely on depth-sensing cameras such as the Microsoft Kinect, media reported. These cameras can be confused by transparent or shiny objects, but scientists at Carnegie Mellon University have developed a solution. A depth-sensing camera works by firing an infrared laser beam at an object and measuring how long the light takes to bounce off the object's surface and return to the sensor on the camera.
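The arithmetic behind that measurement is simple: distance is half the round-trip time multiplied by the speed of light. A minimal sketch in Python (the function name and sample timing are illustrative, not the camera's actual interface):

```python
# Time-of-flight principle: distance is half the laser pulse's
# round-trip time multiplied by the speed of light.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to a surface, from the pulse's measured round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A return after about 6.67 nanoseconds corresponds to roughly one metre.
print(tof_distance_m(6.67e-9))  # ~1.0
```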

Although this system works well on matte, opaque objects, it runs into trouble with transparent objects, which let most of the light pass straight through, and with shiny objects, which scatter the reflected light. That is where the Carnegie Mellon system comes in: it pairs the depth-sensing camera with an ordinary color camera.

The system uses a machine learning-based algorithm trained on paired depth and color images of the same opaque objects. By comparing the two kinds of images, the algorithm learns to infer the three-dimensional shape of objects from color images alone, even when those objects are transparent or shiny.
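The article does not describe how such a model is trained; the sketch below is only a hypothetical illustration of the idea (the toy network, random placeholder data, and validity threshold are all invented for the example). It shows a network learning to predict a depth map from a color image, supervised by the depth camera's readings where they are valid:

```python
# Hypothetical sketch (not CMU's actual code): learn depth from color,
# supervised by depth-camera measurements of opaque objects.
import torch
import torch.nn as nn

class RGBToDepth(nn.Module):
    """Toy network mapping a 3-channel color image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, rgb):
        return self.net(rgb)

model = RGBToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: paired color images and depth maps of opaque objects.
rgb = torch.rand(4, 3, 64, 64)       # color images
depth_gt = torch.rand(4, 1, 64, 64)  # depth-camera measurements
valid = depth_gt > 0.05              # ignore pixels with no depth return

optimizer.zero_grad()
pred = model(rgb)
loss = nn.functional.l1_loss(pred[valid], depth_gt[valid])
loss.backward()
optimizer.step()
```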

In addition, although a direct laser scan of such objects yields only a small amount of valid depth data, the measurements that are collected can still be used to improve the system's accuracy.
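One plausible way to use those sparse readings, again only an assumption for illustration since the article does not describe the fusion step, is to keep the few depth pixels the laser did return and fill in everything else with the network's prediction:

```python
# Assumed fusion scheme: trust measured depth where the laser returned
# a reading; fall back to the predicted depth everywhere else.
import numpy as np

def fuse_depth(sparse_depth: np.ndarray, predicted_depth: np.ndarray) -> np.ndarray:
    """Prefer measured depth where available; fall back to the prediction."""
    measured = sparse_depth > 0  # zero means no laser return at that pixel
    return np.where(measured, sparse_depth, predicted_depth)

sparse = np.zeros((4, 4)); sparse[0, 0] = 1.2  # one valid reading
pred = np.full((4, 4), 0.9)                    # network's depth estimate
print(fuse_depth(sparse, pred))
```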

In tests so far, robots using the new technique performed much better at grasping transparent and shiny objects than robots relying on a standard depth-sensing camera alone.

Professor David Held said: “Although we sometimes miss, it does well for the most part, better than any previous system for grasping transparent or reflective objects.”