We take it for granted that if you want to capture a scene to show off to your friends and family, you can snap an image with your smartphone, then share it by text or social media. Behind the scenes, cloud-based computing analyzes the image, perhaps suggesting tags for the people in it. This kind of electronic image processing didn't just appear out of nowhere; it's the product of over 50 years of work. And beyond recording who was at an event for future generations, the same sort of image processing technology can guide robots and ensure that quality standards are met in a manufacturing environment.
Machine vision was born roughly 15 years after early experiments at MIT in the late 1960s first allowed a computer to describe its environment. This sub-discipline of computer vision, concerned with augmenting physical machinery with vision capabilities, then began working its way slowly into industry for use in robot guidance and quality assurance. Pioneers included Automatix, which in 1980 became the first company to market industrial robots with built-in vision systems, and Cognex, which in 1982 marketed its Dataman optical character recognition (OCR) system, a prototype of which took 90 seconds to read the character “6.”
Fast-forward to today and computer vision is a multi-billion-dollar industry, with Cognex worth close to $12 billion and competitors such as Keyence, Omron, Sick, and many others providing alternatives. Automatix, on the other hand, has merged and changed hands several times; its technology is still available via RPC Machine Vision Systems, a reseller of Microscan.
Inspection speeds have increased astronomically, going from 90 seconds to read a single character to performing a full vision inspection in under 6 milliseconds (according to vision integrator Integro). While the types of inspections and their processing times vary, this represents a speed difference of over 15,000 times, a statistic that would still be staggering if it were off by an order of magnitude. Along with this speed difference, new ways to use these systems have arisen, such as enhanced lighting systems and line-scan setups that build an image line by line from a moving object rather than acquiring it in a single frame.
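If you want to check that figure yourself, the arithmetic is simple (values taken as quoted above; a quick sanity check, not a benchmark):

```python
# Rough sanity check of the speed-up cited above, using the quoted figures.
dataman_read_time_s = 90.0    # early Dataman prototype: ~90 s to read one character
modern_inspection_s = 0.006   # modern full inspection cycle: ~6 ms, per Integro

print(f"Speed-up: {dataman_read_time_s / modern_inspection_s:,.0f}x")  # Speed-up: 15,000x
```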
Robot guidance applications, where a camera tells the robot where to move to pick up or assemble parts, have also advanced. In this setup, the camera isn't a passive component passing along inspection information; it is actually telling the robot where to move, even in three dimensions in some machine setups. While outside the scope of this article, one might also point to self-driving cars as an area where vision systems have advanced in incredible ways. Driving via vision, along with other sensors, once seemed inconceivable, but now seems poised to help that industry explode.
Read on for several examples of where vision systems are taking industrial productivity and quality assurance to the next level:
360-Degree Surface Inspection: As seen in this video at 1:00, the versatile Sawyer robot is able to use vision guidance to pick a variety of scattered rectangular parts off a tray and place them accurately onto a pallet. The same robot/camera integration even illustrates, at 4:15, how a robot can be used to manipulate the vision sensor itself, moving the camera around the exterior of an engine to verify proper assembly. To establish a frame of reference, it first takes a picture of the top of the assembly, then moves around the engine to inspect it and make sure that all components are in place. At 6:10, it's even able to sense that a switch is in the wrong position and uses an end effector to flip it back to where it should be.
3D Robot Guidance: As shown in the previous example, orienting a robot to parts in a 2D plane is impressive, but it's something most of us can understand, at least in concept. The robot featured here by integrator Kinemetrix is not only able to adjust its position along the X, Y, and Z axes with the correct angular orientation, but can also tilt its end-of-arm tooling to the proper angle with respect to the ground. This allows it to pick up a variety of parts in multiple orientations with no tooling changeover, and it requires very little support to keep running.
Pancake Stacking: While stacking round parts is in some ways simpler than Sawyer's orientation of rectangular parts, this robot/vision system collaboration shows an incredible amount of speed, stacking 400 pancakes per minute. To accomplish this, the system uses four individual delta robots, meaning that each has roughly 0.6 seconds to pick up a pancake and stack it. The system can even recognize when pancakes overlap one another, and it successfully grips and stacks these as well.
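That cycle time falls straight out of the throughput figure; a quick back-of-the-envelope check (numbers as quoted in the video):

```python
# Back-of-the-envelope cycle time per delta robot, from the quoted throughput.
pancakes_per_minute = 400
robots = 4

per_robot_rate = pancakes_per_minute / robots   # 100 pancakes per minute per robot
cycle_time_s = 60 / per_robot_rate              # 0.6 s to pick and stack each pancake
print(f"Each robot averages {cycle_time_s:.1f} s per pancake")
```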
NYC CNC Time Clock: Camera systems are mostly used to monitor part quality or guide robots in pick-and-place operations, but the human factor can't be neglected. This setup from Saunders Machine Works, also known as NYC CNC on YouTube, uses commonly available hardware (an iPad) to let workers clock in using facial recognition, and it even pushes this information to an automated payroll service. The setup adds an extra sensor to adjust the iPad to each person's height, so workers don't have to change position to get a good reading. An interesting aspect of the video is that it doesn't just cover the system's capabilities; it also goes over the machining and setup required to make it work well.
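The video doesn't share source code, but as a rough illustration of the concept, a face-detection-triggered clock-in can be sketched in a few lines of Python with OpenCV. Note that this is a minimal, assumed example, not the Saunders implementation: it only detects that a face is present (rather than recognizing which worker it belongs to), and the log_punch helper is a hypothetical stand-in for the real payroll push.

```python
# Conceptual sketch of a vision-based time clock: detect a face, record a timestamp.
# Not the Saunders Machine Works implementation; for illustration only.
import csv
from datetime import datetime

import cv2

# OpenCV ships a pretrained Haar cascade for frontal faces.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def log_punch(path="timeclock.csv"):
    """Append a clock-in timestamp to a local CSV (stand-in for a payroll-service push)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat()])

cap = cv2.VideoCapture(0)      # default webcam; an iPad app would use the device camera
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        log_punch()
        print("Face detected; clock-in logged.")
```

A production system would add recognition (matching the face to a specific employee) and report punches to the payroll service, but the basic loop of acquire, detect, and log is the same.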
The Future of Industrial Vision Sensors
While these systems have advanced at an incredible rate, vision system integration in industrial environments still has much room for growth. As CCD resolution, computing power, and lighting and lens capabilities improve, we're certain to see more and more areas where this type of guidance and inspection can be used, yielding less expensive products with fewer defects and more accurate part placement. And because these systems can forgo hard tooling in favor of more versatile software tools, machinery can become more flexible, able to run a greater variety of products with less changeover time.
Zach Wendt and Jeremy S. Cook are engineers who enjoy sharing the capabilities that vision sensors bring to emerging technology. Zach, with Arrow Electronics, has a background in consumer product development. Jeremy writes for a variety of technical publications and has a background in manufacturing automation. You can learn more about sensor applications here.