State-of-the-art production robots are extremely capable: every step of the process is performed at high speed and with consistently high precision – sometimes for hours or days on end. The key to this mechanical performance lies in image processing systems that transmit essential product data to packaging robots, ensuring the high quality of the packaged products.
Robotics and vision systems
Work literally rolls towards the pick-and-place robots: conveyor belts often transport thousands of products per minute into production and packaging lines. Which biscuit or bottle a pick-and-place robot then picks up and places into a tray, for example, is decided within milliseconds – and right at the beginning of the process. Scanners, most often mounted above the conveyor belts, capture key product characteristics ranging from color and shape to height. Depending on an operator’s image recognition requirements, both advanced 2D and 3D scanners are available today.
Both types have interfaces to the robot control system. They exchange the captured product data with the control system via real-time bus systems, effectively lending the agile machines a helping hand. Communication networks such as these prove their worth in applications where fast, reliable data transmission is essential. In the food industry in particular, large quantities of sensitive products require rapid processing. Thanks to bus systems, robots continuously receive coordinates at two-millisecond intervals. This allows data from up to 10,000 products to be transferred per minute – a must for reliably processing large quantities in the shortest possible time.
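To put these figures in perspective: a two-millisecond cycle corresponds to 500 bus frames per second, or 30,000 per minute – comfortably above the 10,000 products per minute mentioned above, even if each frame carries only a single product’s coordinates. The sketch below illustrates what decoding such a cyclic coordinate stream could look like; the record layout and field names are invented for illustration and do not describe Schubert’s actual protocol.

```python
import struct

# Hypothetical fixed-size record a scanner might publish on a real-time bus:
# product id, x/y position (mm), rotation (deg) and height (mm).
RECORD = struct.Struct("<Iffff")   # 20 bytes per product

CYCLE_S = 0.002  # one bus cycle every 2 ms, as described above

def decode_frame(frame: bytes):
    """Unpack every product record contained in one bus frame."""
    return [RECORD.unpack_from(frame, offset)
            for offset in range(0, len(frame), RECORD.size)]

# Sanity check on the throughput claim: 500 cycles/s * 60 s = 30,000 frames
# per minute, so even one product per frame exceeds 10,000 products/min.
assert (1 / CYCLE_S) * 60 >= 10_000

# Example: one frame carrying the coordinates of two products.
frame = (RECORD.pack(1, 120.5, 40.2, 90.0, 8.1)
         + RECORD.pack(2, 130.0, 41.7, 0.0, 8.0))
for pid, x, y, rot, z in decode_frame(frame):
    print(pid, x, y, rot, z)
```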
Flexible data transfer
Bus systems based on common industry standards are predominantly used to ensure smooth communication, often supplemented by manufacturer-specific components. Gerhard Schubert GmbH, which has been developing scanner technology for the past 40 years, uses its own software to transfer data to the robots as smoothly as possible via the bus system. “We want to ensure that our image processing systems don’t have to convert the data into robot coordinates before it reaches the robots – and that they remain flexible when modifications are made to the line,” explains Daniel Greb, Head of Image Processing at Gerhard Schubert. For example, a manufacturer may want to integrate a robot with an additional axis into its line so that it can not only pick up products but also swivel them. In this case, the in-house software enables seamless communication between scanner and robot because it can be reprogrammed quickly.
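Whichever component performs it, “converting the data into robot coordinates” boils down to a calibration transform from the scanner’s image frame into the robot’s world frame. The following minimal sketch shows a generic 2D version of such a mapping; the calibration values are invented and stand in for a real hand-eye calibration.

```python
import numpy as np

# Hypothetical calibration between camera and robot (values invented).
SCALE = 0.25                        # mm per pixel
THETA = np.deg2rad(1.5)             # small rotation between the two frames
OFFSET = np.array([120.0, -45.0])   # translation of the image origin (mm)

# Combined rotation-and-scale matrix from pixel axes to robot axes.
R = SCALE * np.array([[np.cos(THETA), -np.sin(THETA)],
                      [np.sin(THETA),  np.cos(THETA)]])

def pixel_to_robot(u: float, v: float) -> np.ndarray:
    """Transform a detection at pixel (u, v) into robot world coordinates (mm)."""
    return R @ np.array([u, v]) + OFFSET

print(pixel_to_robot(640, 480))
```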
Put simply, what scanners communicate to robots can be compared to product profiles that manufacturers feed into the system before production begins. Shape, color, height and rotational position are among the most common product coordinates. They specify what the scanners should look for in each instance. When the packaging line starts up, the image processing systems load this information and search specifically for articles that meet these criteria. Products that do not match the criteria are not even registered by the camera-equipped devices. “These data packets can be dynamically re-parameterized at any time via the machine’s user interface,” explains Daniel Greb. This enables manufacturers to change the scanners’ inspection scope flexibly without having to interrupt production for long.
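Such a product profile can be pictured as a set of target values with tolerances against which every detection is checked. The following is a minimal sketch of the idea; the fields, tolerances and method names are hypothetical and do not reflect Schubert’s data format.

```python
from dataclasses import dataclass

@dataclass
class ProductProfile:
    """Hypothetical profile fed into the line before production starts."""
    shape: str
    color_rgb: tuple[int, int, int]
    height_mm: float
    color_tol: int = 20          # max per-channel color deviation
    height_tol_mm: float = 1.5   # max height deviation

    def matches(self, shape: str, color: tuple[int, int, int], height: float) -> bool:
        """Return True if a detection meets the profile's criteria."""
        return (shape == self.shape
                and all(abs(c - t) <= self.color_tol
                        for c, t in zip(color, self.color_rgb))
                and abs(height - self.height_mm) <= self.height_tol_mm)

biscuit = ProductProfile("round", (185, 140, 90), height_mm=8.0)
print(biscuit.matches("round", (180, 150, 95), 8.4))  # True: within tolerances
```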
Smart and smarter
What are the advantages of the synergy between vision systems and robotics?
Daniel Greb: The combination of the two systems – precise product detection and dynamic handling – ensures that products reach the consumer intact and in the highest quality. Uniform protocols for machine-to-machine communication are fundamental to successful machine collaboration. Despite the drive for specialization, manufacturers should always make sure they use established interfaces that enable stable data transfer. No one benefits from an incomplete data transfer between image processing and robotics.
What role does AI play in machine collaboration?
Greb: An increasingly important one. Especially under demanding production conditions, such as changing lighting or numerous color nuances, it is easier to train an AI than to program a separate rule for each case – there are simply too many variables to cover. The tog.519 cobot from Schubert, for example, achieves remarkable speeds thanks to AI-based image processing, picking products from an unsorted pile accurately and reliably at up to 90 cycles per minute.
Increasingly powerful graphics processors are making the use of AI in industry an attractive proposition. Schubert has been working on this for some time and is integrating such graphics processing units, or GPUs for short, into new systems. One advantage is that even potential defects can be described in advance and communicated to the systems, preparing them as comprehensively as possible for real conditions. For clearly specifiable products such as biscuits, however, where unexpected changes are unlikely to occur, there is still no way around rule-based algorithms.
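To make the contrast concrete: a rule-based acceptance check for a clearly specifiable product can be as simple as the sketch below, with every criterion written out as an explicit rule (all thresholds are illustrative). Once lighting, color nuances and product variation multiply, such hand-written rules stop scaling – which is where the trained models described above take over.

```python
def biscuit_ok(diameter_mm: float, height_mm: float, mean_brightness: float) -> bool:
    """Illustrative rule-based check for a clearly specifiable product.

    Every criterion is an explicit, hand-written rule. That is feasible here,
    but becomes impractical once lighting and product variation explode.
    """
    return (48.0 <= diameter_mm <= 52.0              # size within specification
            and 7.0 <= height_mm <= 9.0              # not broken, not doubled up
            and 0.35 <= mean_brightness <= 0.65)     # neither burnt nor underbaked
```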
“We want to ensure that our image processing systems don’t have to convert the data into robot coordinates before it reaches the robots – and that they remain flexible when modifications are made to the line.”
Daniel Greb, Head of Image Processing, Gerhard Schubert
Where do you see potential for optimization – and what would it look like in practice?
Greb: Image processing systems provide valuable data on production quality, which could be made even more easily accessible to manufacturers in the food and other industries. I’m thinking of cloud solutions that store this data so that it can be accessed from anywhere – and no one has to stand next to the machine to view it. Machine manufacturers such as Schubert could, in turn, use this information to train AI or readjust the settings of individual systems. However, this requires the consent of the manufacturers who use these image processing solutions. The potential of this approach hinges on the willingness to achieve the greatest possible data transparency.
Intelligent tools
The scope of the data packets can differ significantly even before any dynamic adjustments are made, because different scanners need different types of information. Unlike 2D scanners, 3D variants capture the height of products, which greatly facilitates quality control of multi-layered objects such as sandwich biscuits. A 2D scanner would always evaluate these in the same way: it views objects telecentrically – effectively from directly above, without perspective cues – so height cannot be determined from a single top-down image.
The 3D scanner determines height automatically: it captures the products from different observation points, so that the fields of view of several cameras overlap. The multiple images provide the basis for a stereoscopic overall picture, and the scanner calculates each segment of the resulting height image from two images taken from different perspectives. 2D scanners, on the other hand, achieve higher image resolution and greater color accuracy. This makes both types indispensable for the precise quality assessment of complex products.
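The height calculation described here rests on standard stereo triangulation: the farther a point shifts between the two camera images (its disparity), the closer it is to the cameras. A minimal sketch of the pinhole-stereo relation, with invented camera parameters:

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 1200.0,   # focal length in pixels (invented)
                         baseline_mm: float = 80.0   # camera spacing in mm (invented)
                         ) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("point not visible in both images")
    return focal_px * baseline_mm / disparity_px

# A feature shifted 160 px between the two views lies ~600 mm from the cameras;
# subtracting this from the known belt distance yields the product height.
print(depth_from_disparity(160.0))
```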
Scanners are intelligent tools that examine specific objects for predefined characteristics. The same applies to robots. The information provided by image processing enables them to select only those products on the conveyor belts that are allowed to continue their journey along the packaging line – and ultimately to the consumer. “Whereas the scanner checks, the robot selects. Together, both technologies provide the basis for efficient, quality-oriented packaging processes,” highlights Daniel Greb.
Mastering complex tasks
Scanners also provide valuable support when handling unsorted objects: thanks to the additional height information provided by 3D scanners, robots can grasp objects at a specific height or angle – ideal when products within a pile may be lying flat, standing upright, or touching each other. “Provided that the robot’s kinematics support this, the information from the 3D scanners can be used to perform extremely complex movements,” explains Greb.
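One simple way to exploit such height data is to pick from the top of a pile: locate the highest point in the scanner’s height image and grasp there. The toy sketch below illustrates the principle with a synthetic height map; a real system would also derive grasp angles and check reachability.

```python
import numpy as np

def top_grasp_point(height_map: np.ndarray, px_to_mm: float = 0.5):
    """Return (x_mm, y_mm, z_mm) of the highest point in a height image.

    Only illustrates how 3D data turns into a pick coordinate; grasp angle
    and collision checks are deliberately omitted.
    """
    iy, ix = np.unravel_index(np.argmax(height_map), height_map.shape)
    return ix * px_to_mm, iy * px_to_mm, float(height_map[iy, ix])

pile = np.random.default_rng(0).random((240, 320)) * 30.0  # synthetic heights (mm)
print(top_grasp_point(pile))
```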
If a product does not meet the requirements, manufacturers can choose between two scenarios, depending on their specifications. If a scanner evaluates products as defective because their color, surface or size does not correspond to the predefined values, this information is not sent to the robots; instead, the defective products leave the process via reject conveyors. Alternatively, the products remain on the conveyors and are only ejected at the end of the line – for instance, if they cannot be sorted out immediately due to their design or have to remain in the process for other reasons. In this case, the robots do receive the relevant information. For Daniel Greb, the advantages of this approach are evident: “Defective products provide important insight for statistical evaluations. Recording these deviations and transmitting them to higher-level systems, such as cloud solutions for statistical recording of product properties, can be the first step toward process optimization for manufacturers – and ultimately, towards more efficient production in the long term.”
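The two scenarios map naturally onto a small routing policy, including the statistics hand-off Greb mentions. The structure and names in the following sketch are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Illustrative scanner verdict for a single product."""
    product_id: int
    ok: bool
    defect: str | None = None  # e.g. "color", "surface", "size"

def route(d: Detection, immediate_reject: bool,
          pick_queue: list, stats: list) -> None:
    """Route a scanner result according to one of the two scenarios above."""
    if d.ok:
        pick_queue.append(d)                    # good product: coordinates reach the robots
        return
    stats.append((d.product_id, d.defect))      # deviation recorded for statistical evaluation
    if not immediate_reject:
        pick_queue.append(d)                    # scenario 2: robots are informed; the item
                                                # is only ejected at the end of the line
    # scenario 1 (immediate_reject): nothing is sent to the robots and the
    # product leaves the process via the reject conveyor
```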


