The classic machine vision inspection application in manufacturing is binary. For example, in a typical application, the vision system captures an image of a part resting on the infeed conveyor and determines whether or not it is the right part. If not, the part is rejected without any thought being given to what it actually might be.
This type of application typically uses a pattern-matching algorithm trained on an image of the correct part. The pattern-matching algorithm learns the object’s geometry, and when a part is inspected it looks for those shapes in the image.
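The binary pass/fail decision can be sketched in a few lines. This is a hypothetical toy, not a real vision library: the "model" is just stored reference pixels and the score is simple pixel agreement, whereas a real tool learns and matches geometric shapes.

```python
# Toy sketch of binary pass/fail inspection (illustrative only; a real
# pattern-matching tool learns geometry, not raw pixel values).

def train_model(reference_image):
    """'Learn' the correct part: here, simply store the reference pixels."""
    return reference_image

def inspect(model, image, threshold=0.95):
    """Return True (accept) if the image matches the trained model."""
    matches = sum(1 for a, b in zip(model, image) if a == b)
    return matches / len(model) >= threshold

good_part = [0, 1, 1, 0, 1, 0, 1, 1]
model = train_model(good_part)

print(inspect(model, [0, 1, 1, 0, 1, 0, 1, 1]))  # correct part -> True
print(inspect(model, [1, 0, 0, 1, 0, 1, 0, 0]))  # wrong part   -> False
```

A part that scores below the threshold is simply rejected; the system never asks what the part actually is.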
A more challenging application comes when the vision system is asked to go beyond a true-or-false question to answer a multiple-choice question.
Let’s say that we want the vision system to capture an image and tell us which of 12 different parts is present so we can match it with the bill of materials for the build.
This is not a problem for most vision systems, but it requires considerable extra work. Instead of training the pattern-matching algorithm on a single part, it must be trained on 12 parts.
Then, for each part that is inspected, the algorithm must search the acquired image 12 different times. Completing all 12 of these searches within a single cycle can be challenging. For example, if a typical inspection operation takes 38 milliseconds (ms) to compare the acquired image to one model, comparing all 12 models takes roughly 456 ms, or almost half a second.
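The conventional approach described above can be sketched as a loop over the trained models. All names here are hypothetical; the point is that inspection time grows linearly with the number of parts (12 models at roughly 38 ms per search is about 456 ms per cycle).

```python
# Hypothetical sketch of single-model pattern matching applied to a
# multiple-choice inspection: one full image search per trained model.

def identify_part_naive(image, models, match_fn, threshold=0.8):
    """Search the acquired image once per model; return the best-scoring
    part ID, or None if no model clears the acceptance threshold."""
    best_id, best_score = None, threshold
    for part_id, model in models.items():
        score = match_fn(model, image)  # one full search of the image
        if score > best_score:
            best_id, best_score = part_id, score
    return best_id

# Toy similarity score: fraction of matching pixels (illustrative only).
def pixel_similarity(model, image):
    return sum(a == b for a, b in zip(model, image)) / len(model)

models = {"part_a": [0, 0, 1, 1], "part_b": [1, 1, 0, 0], "part_c": [1, 0, 1, 0]}
print(identify_part_naive([1, 1, 0, 0], models, pixel_similarity))  # part_b
```

With 12 entries in `models`, this loop performs 12 full searches of the same image every cycle, which is exactly the cost the multiple-model approach below is designed to avoid.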
A multiple-model, pattern-matching algorithm can substantially reduce the amount of time required to perform this type of inspection.
The algorithm works by capturing images of all of the parts or features that need to be inspected and combining them into a single, multiple-part model.
After an image is acquired, the algorithm only needs to search it a single time.
Based on the geometric features contained in the model, the algorithm determines which part is present.
As you can imagine, this approach takes considerably less time than a traditional single-model pattern-matching algorithm.
Let’s look at a typical example of an automobile assembly plant that uses 12 different wheels.
Four vision systems are used to ensure that each of the four wheels on each vehicle matches the build order. Under the hood, the multiple-model pattern-matching tool collects the distinguishing features of each wheel and stores them in a single model. When the multiple-model pattern-matching algorithm runs, it returns a registration or inspection result based on the model that produces the best result. If the part being inspected does not match any of the stored models, then the algorithm returns a not-found result.
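One way to picture how such a tool might organize its data is as a single index that maps each distinguishing feature back to the part it belongs to, so the acquired image is analyzed only once. This is a hedged simplification: the feature names, the voting scheme, and the `min_votes` threshold below are all hypothetical, not the vendor's actual method.

```python
# Sketch of a multiple-model match: merge every part's distinguishing
# features into one model, then identify a part in a single pass.

def build_multi_model(part_features):
    """Merge each part's distinguishing features into one model that maps
    a feature back to the part(s) it belongs to."""
    index = {}
    for part_id, features in part_features.items():
        for feature in features:
            index.setdefault(feature, set()).add(part_id)
    return index

def identify_part(index, observed_features, min_votes=2):
    """One pass over the features found in the image; each feature votes
    for its candidate parts. Returns None (not found) on a weak match."""
    votes = {}
    for feature in observed_features:
        for part_id in index.get(feature, ()):
            votes[part_id] = votes.get(part_id, 0) + 1
    if not votes:
        return None  # no stored model matched: not-found result
    best = max(votes, key=votes.get)
    return best if votes[best] >= min_votes else None

wheels = {
    "wheel_16_alloy": {"5_spokes", "16_inch_rim", "center_cap_a"},
    "wheel_17_sport": {"7_spokes", "17_inch_rim", "center_cap_b"},
}
model = build_multi_model(wheels)
print(identify_part(model, {"7_spokes", "17_inch_rim"}))  # wheel_17_sport
print(identify_part(model, {"6_spokes"}))                 # None (not found)
```

The key property mirrors the article's point: the image's features are extracted and scored once, and the best-matching stored model wins, rather than the image being re-searched once per model.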
The multiple-model pattern-matching algorithm is considerably more efficient than conventional single-model pattern-matching because the features of the captured image only need to be inspected once.
Today, multiple-model pattern-matching algorithms are available that can capture an image and determine which of 12 different wheels is present in only 190 milliseconds, about 40% of the time required by a single-model pattern-matching algorithm and fast enough to meet the cycle time of just about any assembly line.
This approach also reduces the amount of memory required for the vision application.
Attempting to train a conventional pattern in applications where good parts vary widely in appearance often produces an unusable pattern, because the pattern includes numerous features that are not present in other run-time part images.
Intelligent, self-learning, composite pattern-matching tools can improve inspection accuracy and simplify application setup by automating the process of learning which image features matter and which can safely be ignored.