Understanding and Applying Machine Vision, Part 7


Figure 8.19 Erosion operation (courtesy of ERIM).

Figure 8.20 Duality of dilation and erosion (courtesy of ERIM).

Figure 8.21 Mathematical morphology in processing of PC board images (courtesy of General Scanning/SVS).

The results of the first operation, closing, are depicted in Figure 8.22(b). A difference image is shown in Figure 8.22(c). A threshold is taken on the difference image [Figure 8.22(d)]. Filtering based on shape then distinguishes the noise from the scratch; that is, the detail that can fit in a structuring element in the shape of a ring is identified in Figure 8.22(e) and subtracted [Figure 8.22(f)]. Figure 8.22(g) depicts the segmented scratch displayed on the original image.

Figure 8.22a-g Morphology operating on a scratch on a manifold (courtesy of Machine Vision International).

8.4— Coding/Feature Extraction

Feature extraction is the process of deriving values from the enhanced and/or segmented image. These values, the features, are usually dimensional but may be of other types, such as intensity, shape, etc. Some feature extraction methods require a binary image, while others operate on gray-scale intensity or gray-scale edge-enhanced images. The methods described below are grouped into three sections: miscellaneous scalar features, including dimensional and gray-level values; shape features; and pattern-matching extraction.

8.4.1— Miscellaneous Scalar Features

Pixel Counting

For simple applications, especially part identification and assembly verification, the number of white pixels in a given window is enough to derive the desired result. This operation, finding the number of pixels above a threshold within a window, is called "pixel counting." It is a very widely used technique and runs very quickly on most systems.
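The pixel-counting operation can be sketched in a few lines. The function name, the window convention (half-open row/column bounds), and the sample values are illustrative assumptions, not from the text:

```python
def count_pixels(image, window, threshold):
    """Count pixels above a threshold inside a rectangular window.

    image:  list of rows of gray values
    window: (top_row, left_col, bottom_row, right_col), half-open bounds
    """
    r0, c0, r1, c1 = window
    # Walk only the rows and columns inside the window.
    return sum(1 for row in image[r0:r1] for v in row[c0:c1] if v > threshold)

image = [
    [10, 200, 30],
    [220, 240, 15],
    [5, 12, 210],
]
print(count_pixels(image, (0, 0, 3, 3), 128))  # 4 pixels exceed the threshold
```

Restricting the window is also how this operation supports side tasks such as checking part location or verifying the image.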
Often pixel counting is used for tasks other than the main application problem, such as threshold selection (as already described), checking part location, verifying the image, etc.

Edge Finding

Finding the location of edges in the image is basic to the majority of the image-processing algorithms in use. Edge finding can be one of two types: binary or enhanced edge. Binary edge-finding methods examine a black-and-white image and find the X-Y locations of certain edges. These edges are white-to-black or black-to-white transitions. One technique requires the user to position scan lines, or tools ("gates"), in the image (Figure 8.23). The system then finds all edges along the tools and reports their X-Y coordinates.

Figure 8.23 Edge finding; circles are edge locations found.

These gates operate like a one-dimensional window, starting at one end and recording the coordinates of any transitions. They can often be programmed to respond to only one polarity of edge, only to edges "n" pixels apart, or other qualifiers. Like windows, some systems allow orientation at angles, while others do not. Because they are one-dimensional, and because the video is binarized, any edge data collected using gates should be verified. This verification can be done by using many different gates and combining the results, by averaging, by throwing out erratic results, etc. Other features, such as pixel counting, are often used to verify binary edge data.

Gray-scale edge finding is very closely tied to gray-scale edge enhancement; in fact, the two are usually combined into one operation and are not available as two separate steps. Generally, the edge-enhanced (gradient) picture is analyzed to find the location of the maximum gradient, or slope. This is identified as the "edge." The set of coordinates of these edge points is taken as features and used as inputs for classification. Sometimes the edge picture is thresholded to create a binary image so that the edges are "thick."
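A binary gate of the kind described above amounts to a one-dimensional scan for transitions. In this sketch (the function name and output convention are illustrative assumptions), each edge is reported as the index of the first pixel of the new run together with its polarity, so the polarity qualifier mentioned above is just a filter on the result:

```python
def find_edges(gate):
    """Scan a 1-D binary gate and report each transition as
    (index of the first pixel of the new run, polarity)."""
    return [
        (i + 1, "B->W" if cur else "W->B")
        for i, (prev, cur) in enumerate(zip(gate, gate[1:]))
        if prev != cur
    ]

gate = [0, 0, 1, 1, 1, 0, 0]   # 0 = black, 1 = white
print(find_edges(gate))        # [(2, 'B->W'), (5, 'W->B')]
```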
A "thinning" or "skeletonizing" algorithm is then used to give single-pixel-wide edges. The first method, finding the gradient maximum, gives the true edge but usually requires knowing the direction of the edge (at least approximately). The second method, thresholding the gradient and thinning, may not find the true edge location if the edge is not uniform in slope. The error is usually less than one pixel, depending on the image-capture setup; thus, thresholding the gradient image is a very common gray-scale edge-locating method. Some systems use algorithms that produce edge-location coordinates directly from the gray-scale data, combining filters, gradient approximations, and neighborhood operations into one step. These are usually tool-based (or window-based) systems. Many even "learn" edges by storing characteristics such as the strength of the edge, its shape, and the intensity pattern in the neighborhood. These characteristics can be used to ensure that the desired edge is being found at run time.

8.4.2— Shape Features

Some computationally more intensive image-analysis systems are based on extracting geometric features. One such approach (developed at Stanford Research Institute International) involves performing global feature analysis (GFA) on a binary picture. In this case, the features are geometric: centroid, area, perimeter, and so on. In GFA, no inferences are made about the spatial relationships between features, and generally the parts are isolated. Generally, operations are performed on the "raw" (unprocessed) video data (filtering) and on preprocessed images (run-length encoding) (Figure 8.24). Decision-making and control are the function of the microprocessor.

Figure 8.24 Run-length encoded image.

Figure 8.25 Thresholded segmentation.

An enhancement of this approach involves segmentation based on either thresholded gray scale or edges.
In thresholded gray scale, a threshold (Figure 8.25) is set; if a pixel's gray level exceeds the threshold, it is assigned the value 1, and if it is less than the threshold, it is assigned the value 0. An operator can establish the threshold during training by observing the effect of different thresholds on the image of the object and on the data sought in the image. The threshold itself is a hardware or software setting. A pixel gray-value histogram (Table 8.1) analysis display can provide the operator with some guidance in setting the threshold. In some systems, the threshold is adaptive; the operator sets its relative level, but the setting for the specific analysis is adjusted based on the pixel gray-scale histogram of the scene itself. Once thresholded, processing and analysis are based on a binary image. An alternative to thresholded segmentation is segmentation based on regions, areas in an image whose pixels share a common set of properties; for example, all gray-scale values 0-25 are characterized as one region; 25-30, 30-40, and so on, as others.

TABLE 8.1 Uses for Histograms
Binary threshold setting
Multiple-threshold setting
Automatic iris control
Histogram equalization (display)
Signature analysis
Exclusion of high and low pixels
Texture analysis (local intensity spectra)

SRI analysis is a popular set of shape feature extraction algorithms. These algorithms operate on a binary image by identifying "blobs" in the image and generating geometric features for each blob. Blobs can be nested (as in a part with a hole: the part is a blob, and the hole is a blob as well). SRI analysis has several distinct advantages: the features it generates are appropriate for many applications; most features are derived independent of part location or orientation; and it lends itself well to a "teach by show" approach (teach the system by showing it a part). SRI, or "connectivity" analysis as it is often called, requires a binary image.
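The fixed-threshold rule (1 above the threshold, 0 otherwise) is trivial to sketch. The adaptive variant shown below is one simple assumed rule for illustration, not the book's specific method; all names are illustrative:

```python
def binarize(image, t):
    """Assign 1 where the gray level exceeds t, else 0."""
    return [[1 if v > t else 0 for v in row] for row in image]

def adaptive_threshold(image, fraction=0.5):
    """One simple adaptive rule (an assumption, not the book's method):
    place the threshold at a fraction of the scene's min-max range,
    so the absolute level tracks the scene's own histogram."""
    flat = [v for row in image for v in row]
    return min(flat) + fraction * (max(flat) - min(flat))

image = [[10, 200], [130, 40]]
print(binarize(image, 128))        # [[0, 1], [1, 0]]
print(adaptive_threshold(image))   # 105.0
```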
However, it deals only with edge points, so the image is often "run-length" encoded prior to analysis. Starting at the top left of the screen (Figure 8.26) and moving across the line, pixels are counted until the first edge is reached. This count is stored, along with the "color" of the pixels (B or W), and the counter is reset to zero. At the end of the line, one has a set of "lengths" that add up to the image size (shown below), where 0 = black and 1 = white. These sets of like pixels are called runs, so this line of binary video has been encoded by the lengths of its runs; hence, "run-length encoding." Run-length encoding is a simple function to perform with a hardware circuit, and it can be done fairly quickly in software.

Figure 8.26 Shape features.

Each line in the image is run-length encoded, and the runs are all stored. The runs describe the objects in the scene. The image in Figure 8.26 may appear as:

20W 15B 65W
20W 17B 65W
20W 10B 1W 5B 65W
20W 10B 3W 5B 63W
20W 10B 5W 5B 26W 5B 30W
20W 10B 7W 5B 22W 9B 28W
20W 10B 9W 5B 19W 11B 27W
etc.

Note how the left "blob" split into two vertical sections. Similarly, as one works down the image, some blobs may "combine" into one. The keys to SRI are the algorithms that keep track of blobs and sub-blobs (holes) and that generate features from the run lengths associated with each blob. From these codes many features can be derived. The area of a blob is the total of all runs in that blob. By similar operations, the algorithms derive:

Maximum X, Y
Minimum X, Y
Centroid X, Y
Orientation angle
Length
Width
Second moments of inertia
Eccentricity ("roundness")
Minimum circumscribed rectangle
Major and minor axes
Maximum and minimum radii, etc.

Figure 8.27 Connectivity/blob analysis (courtesy of International Robotmation Intelligence).

Figure 8.28 Geometric features used to sort chain link (courtesy of Octek).

Figures 8.27 and 8.28 are examples of the SRI approach.
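Run-length encoding of one line of binary video can be sketched directly from the description above; the output format matches the 20W 15B 65W example, while the function name and test line are illustrative:

```python
def run_length_encode(line):
    """Encode one line of binary pixels (0 = black, 1 = white) as runs."""
    runs = []
    count = 1
    for prev, cur in zip(line, line[1:]):
        if cur == prev:
            count += 1          # same color: extend the current run
        else:
            runs.append(f"{count}{'W' if prev else 'B'}")
            count = 1           # color changed: start a new run
    runs.append(f"{count}{'W' if line[-1] else 'B'}")  # flush the last run
    return runs

line = [1] * 20 + [0] * 15 + [1] * 65
print(run_length_encode(line))  # ['20W', '15B', '65W']
```

The run lengths always sum to the line width, which is a cheap sanity check on the encoder.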
It is apparent that knowing all these features for every blob in the scene would be enough to satisfy most applications. For this reason, SRI analysis is a powerful tool. However, it has two important drawbacks that must be understood.

1. Binary image - the algorithms operate only on a binary image. The binarization process must be designed to produce the best possible binary image under the worst possible circumstances. Additionally, extra attention needs to be paid to the image-verification task; it is easy for SRI analysis to produce spurious data from a poor binary image.

2. Speed - because it (typically) examines the whole image, and because so many features are calculated during analysis, the algorithms tend to take more time than some other methods. This can be addressed by windowing to restrict processing and by calculating only the features that are necessary. Most SRI-based systems allow unnecessary features to be disabled via a software switch, cutting processing time.

Some systems using SRI have a "teach by show" approach. In the teach mode, a part is imaged and processed. In interactive mode, the desired features are stored. At run time, these features are used to discriminate part types, to reject nonconforming parts, or to find part position. The advantage is that the features are found directly from a sample part, without additional operator interaction. [...] same part number as the image; this can be used to sort mixed parts. In reality, parts can not only translate (X and Y) but also rotate (θ). Therefore, a third variable must be introduced. At each location (X and Y), the template must be rotated through 360 degrees, and the match at each angle evaluated. This gives the system the ability to find parts at any orientation, as well as any position. However, the...
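The translation search that template matching implies can be sketched with a sum-of-absolute-differences score; this is a stand-in for the correlation measure real systems use, and all names are illustrative. Extending it to rotation, as described above, means repeating the same search at each candidate angle:

```python
def sad(image, template, x, y):
    """Sum of absolute differences between the template and the image
    patch at (x, y); lower means a better match."""
    return sum(
        abs(image[y + j][x + i] - t)
        for j, trow in enumerate(template)
        for i, t in enumerate(trow)
    )

def find_part(image, template):
    """Exhaustive search: return the (x, y) with the lowest score."""
    h, w = len(template), len(template[0])
    candidates = [
        (x, y)
        for y in range(len(image) - h + 1)
        for x in range(len(image[0]) - w + 1)
    ]
    return min(candidates, key=lambda p: sad(image, template, *p))

image = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
]
print(find_part(image, [[9, 9], [9, 9]]))  # (1, 2)
```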
similar and are not, something is not right.

Edge counts - a tool or window that sees too few or too many edges may indicate part movement, a dirty part, or lighting problems.

8.6— Decision-Making

Decision-making, in conjunction with classification and interpretation, is characterized as heuristic, decision theoretic, syntactic, or edge tracking.

8.6.1— Heuristic

In this case, the basis of the machine vision... (continued from previous page) When the direction of max R is calculated and aligned to the Y axis, the direction of min R is calculated; if the min R directions of both the image and the standard are in the same direction, the object is right-handed. If the directions are opposed, the object is left-handed. Thus, the attributes of only one of an enantiomorphic pair need be stored. ...relationships... appropriate for an industrial application. As part of the software, a calibration procedure will define the conversion factors between vision-system units and real-world units. Most of the time, conversion simply requires scaling by these factors. Occasionally, for high-accuracy systems, different parts of the image may have slightly different calibrations (the parts may be at an angle, etc.). In any case,... some logical combination of the measured dimensions and some preset limits. For instance, if the measured length is between 3.3 and 3.5, AND the diameter is no more than 0.92, the part is good; otherwise, it is bad. Any system that performs a good/bad check of this type should also make the measured dimensions available. During system setup and debug, it will be necessary to see the quantities...
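The good/bad decision in the example above is just a logical combination of measured dimensions and preset limits. The 3.3-3.5 length window and the 0.92 diameter limit come from the text; the function and variable names are illustrative:

```python
def part_is_good(length, diameter):
    """Accept the part only if length is within [3.3, 3.5]
    AND diameter is no more than 0.92."""
    return 3.3 <= length <= 3.5 and diameter <= 0.92

# As the text advises, report the measured quantities alongside the verdict
# so they are visible during setup and debug.
for length, diameter in [(3.4, 0.90), (3.6, 0.90), (3.4, 0.95)]:
    print(length, diameter, "good" if part_is_good(length, diameter) else "bad")
```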
the image; 2. the quality of match between the image and the template.

The location can be used for several things. In robot guidance applications, the part's X-Y coordinates are often the answer to the application problem. For most inspection systems, the material handling does not hold part position closely enough; pattern matching can be used to find the part's location, which is then used to direct other... matching section (Figures 8.37 and 8.38). Significantly, in some applications, the objective may be to determine the extent of rotation in order to feed the information back to the machine, which compensates accordingly for the next operation. Such is the case with wire bonders and die slicers in microelectronics manufacturing. In this instance, X, Y, and theta data are fed back to the machine to make corrections... forgiving of acceptable variables, the "goodness-of-fit" criterion becomes too lenient, and the escape rate for defective products becomes...

Figure 8.37 One tactic for correcting for translation positional uncertainty (courtesy of Inex Vision Systems).

Figure 8.38 Correlation routine used to compensate for rotation and translation positional uncertainty (courtesy of Inex Vision Systems).

...the object perimeter is scanned, and each edge pixel position is subtracted from the centroid value. The first such value is stored in two counters (max and min). Each subsequent perimeter value is compared to these counts; if it is larger, it replaces the max count, and if it is smaller, it replaces the min count.

Centroid Algorithm

The two centroids X and Y are first moments and are sometimes referred to...
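The max/min radius scan and the first-moment centroid described above can be sketched together; the point lists and names are illustrative assumptions:

```python
import math

def centroid(pixels):
    """First moments: the mean X and mean Y of the blob's pixels."""
    n = len(pixels)
    return sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n

def max_min_radius(perimeter, cx, cy):
    """Scan the perimeter, keeping the largest and smallest distance
    from the centroid in two running counters, as described above."""
    r_max = r_min = math.hypot(perimeter[0][0] - cx, perimeter[0][1] - cy)
    for x, y in perimeter[1:]:
        r = math.hypot(x - cx, y - cy)
        r_max = max(r_max, r)   # larger distance replaces the max count
        r_min = min(r_min, r)   # smaller distance replaces the min count
    return r_max, r_min

# A small square blob: corners give max R, edge midpoints give min R.
square = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
cx, cy = centroid(square)                 # (0.0, 0.0)
print(max_min_radius(square, cx, cy))
```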
calculate part orientation. It is assumed that all parts can be stored or referenced with their max R direction parallel to the Y axis. The max R is known by its two end points (centroid and perimeter). These numbers can be used to calculate the relative orientation.

Min R, Max R for Determination of Handedness: when the direction of max R is calculated and...
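With max R known by its two end points (centroid and perimeter point), the relative orientation is a single arctangent. This sketch (names are illustrative) returns the angle of max R measured from the +Y axis, to which the stored reference is aligned:

```python
import math

def orientation_from_max_r(centroid, perimeter_point):
    """Angle in degrees of the max R vector, measured from the +Y axis
    (0 degrees means max R already points along Y, as stored)."""
    cx, cy = centroid
    px, py = perimeter_point
    # atan2(dx, dy) rather than atan2(dy, dx) measures from the Y axis.
    return math.degrees(math.atan2(px - cx, py - cy))

print(orientation_from_max_r((0.0, 0.0), (0.0, 5.0)))  # 0.0 - aligned with Y
print(orientation_from_max_r((0.0, 0.0), (5.0, 0.0)))  # 90.0
```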

Date posted: 10/08/2014, 02:21
