Worldwide Engineering Magazine


Intelligent vision sensor simplifies image-guided robotics

SensoPart’s latest software update for its VISOR® Robotic vision sensor makes the setup of robotics applications even simpler and more flexible. Thanks to a considerably expanded scope of functions, all common 2D applications can be solved with minimal effort in the robot control system. In addition, a new 3D detector determines the position of objects in 3D coordinates. All the new functions are independent of the robot system used – this makes VISOR® Robotic unique on the market.

Two years ago, SensoPart launched VISOR® Robotic, the first vision sensor designed specifically for robotics, which considerably simplifies the setup of applications with robots. Now the next evolutionary step is here: software release 2.2 for VISOR® Robotic offers an enhanced scope of functions, transferring key configuration steps from the robot control system to the sensor. New calibration methods give users greater freedom when setting up their application. Assisted by smart, user-friendly functions, they can synchronise the position of the object detected by the sensor with the robot’s gripper point in just a few steps.

Hand-eye calibration: calibrate here, work there

Hand-eye calibration is used when the sensor is mounted on a robotic arm – for example when picking up objects from a tray or during automatic screw tightening. The advantage: in contrast to standard calibration methods, the sensor’s field of view during calibration does not have to coincide with the robot’s subsequent operating range – in other words, you can “calibrate here and work there”. Via a trigger, the user can send the robot’s current position to the VISOR® Robotic; this works for any position within the cell and offers additional advantages when spatial restrictions do not allow a calibration plate to be inserted.
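For readers curious about the principle behind hand-eye calibration, the sketch below shows how the fixed camera-to-flange transform can be recovered from pairs of robot flange poses and observed calibration-plate poses. It uses OpenCV’s general-purpose cv2.calibrateHandEye solver on synthetic data – an illustration of the underlying mathematics only, not SensoPart’s implementation or the VISOR® interface.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def random_rotation():
    # Random rotation matrix from a random axis-angle (Rodrigues) vector.
    return cv2.Rodrigues(rng.uniform(-1.0, 1.0, (3, 1)))[0]

def invert(R, t):
    # Inverse of a rigid transform given as rotation matrix and translation.
    return R.T, -R.T @ t

# Ground-truth camera-to-flange ("hand-eye") transform to be recovered.
R_cam2grip_true = random_rotation()
t_cam2grip_true = rng.uniform(-0.1, 0.1, (3, 1))

# Fixed pose of the calibration plate in the robot base (unknown to the solver).
R_plate2base = random_rotation()
t_plate2base = rng.uniform(-0.5, 0.5, (3, 1))

# Simulate ten stations: a robot flange pose plus the plate pose seen by the camera.
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    Rg, tg = random_rotation(), rng.uniform(-0.5, 0.5, (3, 1))
    Rb2g, tb2g = invert(Rg, tg)
    Rg2c, tg2c = invert(R_cam2grip_true, t_cam2grip_true)
    # Chain base -> gripper -> camera applied to the fixed plate pose.
    R_t2c.append(Rg2c @ Rb2g @ R_plate2base)
    t_t2c.append(Rg2c @ (Rb2g @ t_plate2base + tb2g) + tg2c)
    R_g2b.append(Rg)
    t_g2b.append(tg)

# Solve the classic AX = XB problem for the fixed camera-to-flange transform.
R_cam2grip, t_cam2grip = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print(np.ravel(t_cam2grip), np.ravel(t_cam2grip_true))  # recovered vs. true translation
```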

The second new calibration method, “base eye”, is suited to applications in which the vision sensor is mounted on a stationary element – for example for the precise positioning of components in the gripper. Here, each part collected from the tray by the gripper is held briefly in front of the sensor, which determines its exact position; the gripper position can then be corrected if necessary during the next work step, for example when depositing the part.
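As a rough illustration of this correction idea, the planar sketch below works out how a part actually sits in the gripper from a stationary camera’s measurement and adjusts the deposit pose accordingly. All frame names and numerical values are invented for the example; they are not part of the VISOR® interface.

```python
import numpy as np

def pose(x, y, theta_deg):
    """Planar pose as a 3x3 homogeneous transform (mm, degrees)."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def unpose(T):
    return T[0, 2], T[1, 2], np.degrees(np.arctan2(T[1, 0], T[0, 0]))

inv = np.linalg.inv

# Known from base-eye calibration: camera pose in robot-base coordinates.
T_base_cam = pose(500.0, 0.0, 180.0)

# Snapshot while the gripped part is held in front of the camera:
T_base_grip = pose(450.0, 60.0, 90.0)   # gripper pose reported by the robot
T_cam_part  = pose(48.0, -58.5, 88.0)   # part pose measured by the sensor

# How the part actually sits in the gripper.
T_grip_part = inv(T_base_grip) @ T_base_cam @ T_cam_part

# Where the part must be deposited, in base coordinates.
T_base_part_target = pose(300.0, 150.0, 0.0)

# Gripper pose that puts the *part* exactly on target despite the grip error.
T_base_grip_deposit = T_base_part_target @ inv(T_grip_part)
print(unpose(T_base_grip_deposit))
```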

With these two new calibration methods, it is no longer necessary to direct the gripper arm manually to real points or to have it grip components: to map the complete coordinate system in relation to the robot’s tool centre point, it is enough to take at least ten images, in different positions, of a calibration plate placed in the sensor’s field of view or fitted on the robotic arm. If required, a “result offset” is also possible, i.e. a calculated shift of the vision sensor’s result points to the desired robot operating points. In this way it is possible, for example, to teach the sensor the contour of a cup while the robot’s gripper point is at the handle.
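Conceptually, such a result offset is a constant transform between the detected object pose and the desired gripper point: taught once, then applied to every new detection. The short planar sketch below illustrates that idea with made-up coordinates; it is not the sensor’s internal representation.

```python
import numpy as np

def pose(x, y, theta_deg):
    """Planar pose as a 3x3 homogeneous transform (mm, degrees)."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def unpose(T):
    return T[0, 2], T[1, 2], np.degrees(np.arctan2(T[1, 0], T[0, 0]))

# Teach phase: detected cup pose and desired gripper point at the handle,
# both in robot coordinates. The offset is the constant transform between them.
T_robot_cup_teach  = pose(200.0, 100.0, 0.0)
T_robot_grip_teach = pose(235.0, 100.0, 90.0)   # handle sits 35 mm beside the cup centre
T_offset = np.linalg.inv(T_robot_cup_teach) @ T_robot_grip_teach

# Run time: a new detection in a different position and orientation.
T_robot_cup_now  = pose(250.0, 140.0, 30.0)
T_robot_grip_now = T_robot_cup_now @ T_offset   # the offset follows the object
print(unpose(T_robot_grip_now))
```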

From 2D to 3D: identify height and tilt

Thanks to the function update, all common 2D robotics applications can be executed easily and elegantly with the VISOR® Robotic – for example in material feeding or the machining of parts. And even more is possible if required: with the new “3D contour” detector, SensoPart turns the 2D camera into a 3D version, supplying additional 3D information from the vision sensor that allows it, for example, to identify a height difference or localise tilted components. Together with the “result offset” function, it is also possible to teach gripper points that are pivoted in relation to the object. In addition, the 3D detector aids the high-precision docking of a mobile robot with a workstation.
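To give a feel for what such 3D information amounts to, the brief sketch below derives a height and a tilt angle from three points measured on a part’s surface. The point values are invented and the calculation is a generic plane fit – an illustration only, not the “3D contour” detector’s algorithm.

```python
import numpy as np

# Three 3D points measured on the top surface of a part
# (illustrative values in mm, expressed in robot coordinates).
p1 = np.array([0.0, 0.0, 50.0])
p2 = np.array([80.0, 0.0, 52.1])
p3 = np.array([0.0, 60.0, 49.2])

# Surface normal from two in-plane vectors.
n = np.cross(p2 - p1, p3 - p1)
n /= np.linalg.norm(n)

# Tilt of the part: angle between its surface normal and the vertical axis.
tilt_deg = np.degrees(np.arccos(abs(n @ np.array([0.0, 0.0, 1.0]))))
height = np.mean([p1[2], p2[2], p3[2]])
print(f"height ≈ {height:.1f} mm, tilt ≈ {tilt_deg:.1f}°")
```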

In all these cases, the imaging positions or gripper points are output by the VISOR® Robotic directly in robot coordinates, so no further variables or calculations are needed in the robot control system – the simple standard command “Move to point” is sufficient to control the robot. Because the task is solved directly in the vision sensor, all settings are independent of the robot system in use and can easily be transferred. This proves its worth whenever a camera or gripper has to be changed: the application is ready to go again after just a few minutes, without a new teaching procedure or the intervention of robot programmers.
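On the robot side, the integration can then stay as simple as reading a result and moving to it. The sketch below is purely hypothetical: the IP address, port and semicolon-separated telegram format are assumptions made for illustration, not the documented VISOR® protocol, and move_to_point merely stands in for the robot vendor’s own motion command.

```python
import socket

# Placeholder connection details – not the documented VISOR® interface.
HOST, PORT = "192.168.100.100", 2006

def fetch_pick_pose():
    """Read one result telegram, assumed here as 'x;y;z;rx;ry;rz' in robot coordinates."""
    with socket.create_connection((HOST, PORT), timeout=2.0) as s:
        telegram = s.recv(1024).decode("ascii").strip()
    return [float(v) for v in telegram.split(";")]

def move_to_point(pose):
    # Stand-in for the robot controller's own "Move to point" command;
    # a real program would call the vendor-specific equivalent here.
    print("Move to point:", pose)

if __name__ == "__main__":
    # The values arrive already in robot coordinates, so no further
    # transformation is needed before commanding the motion.
    move_to_point(fetch_pick_pose())
```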

The software update is available now as a free download for all users of the V10, V20 and V50 series of the VISOR® Robotic.