
Understanding and Applying Machine Vision, Part 12


Specific technical advice includes the following:

1. A system with built-in climate control may avoid maintenance problems in certain applications.
2. Avoid requiring unnecessary peripheral equipment in the system; this will only complicate the application.
3. Define system interface requirements fully.
4. Avoid applications that require extended lengths of cable.
5. If possible, incorporate a manual mode that exercises the system for one full cycle, providing an easy test mode for servicing.

Expect the vendor to know the process involved, so that he or she can make independent assessments of variables and reflect an awareness of the environment. Expect the vendor to provide training, documentation, and technical support after as well as before installation. Vendors should recognize that applying machine vision technology is a learning experience for the user; this can lead to new expectations for the equipment, especially where new knowledge about the production process itself emerges because such observations become possible for the first time with machine vision equipment.

Recognize that software is not a "Band-Aid" for an otherwise poor staging design. As a last piece of advice, one user panelist suggested, "Never trust a machine vision vendor that uses the phrase 'piece of cake.'"

References

Abbott, E. H., "Specifying a Machine Vision System," Vision 85 Conference Proceedings, Machine Vision Association of the Society of Manufacturing Engineers, March 25–28, 1985. Revised for SME Workshop on Machine Vision, November 1985.

Abbott, E., and Bolhouse, V., "Steps in Ordering a Machine Vision System," SME/MVA Vision 85, March 1985.

Funk, J. L., "The Potential Societal Benefits from Developing Flexible Assembly Technologies," Ph.D. dissertation, Engineering and Public Policy, Carnegie Mellon University, December 1984.

LaCoe, D., "Working Together on Design for Vision," Vision, September 1984.
Quinlan, J. C., "Getting Into Machine Vision," Tooling and Production, July 1985.

Robotics Industries Association, "Economic Justification of Industrial Robots," pamphlet.

Rolland, W. C., "Strategic Justification of Flexible Automation," Medical Devices (MD&DI), November 1985.

Sephri, M., "Cost Justification Before Factory Automation," P&IM Review and APICS News, April 1984.

Zuech, N., "Machine Vision: Part 1 - Leverage for CIM," CIM Strategies, August 1984; "Machine Vision: Part 2 - Getting Started," CIM Strategies, September 1984.

Zuech, N., "Machine Vision Update," CIM Strategies, December 1984.

14— Alternatives to Machine Vision

14.1— Laser-Based Triangulation Techniques

These sensors (Figure 14.1) project a finely focused laser spot onto the part surface. As the light strikes the surface, a lens in the sensor images the point of intersection onto a solid-state array camera. Any deviation from the initial reference point can be measured from the number of sensor elements by which the imaged spot has shifted. Accuracy is a function of standoff distance and range. Figure 14.2 depicts an integrated system performing both 2-D and 3-D measurements using sensor data based on laser triangulation principles.

These techniques can be extended to contour measurements (Figure 14.3). In this case, light sections, or structured sheets of light, are projected onto the object. The behavior of the light pattern is a function of the object's contour: when viewed, the image of the line takes on the shape of the surface, and a measurement of that contour is made. Again, a reference position is measured, and deviations from it are calculated by triangulation. Determination of the normal-to-surface vectors, the radius of curvature, and the distance from the apex to the sensor (range) can be made in a single measurement.

Figure 14.1 Laser-based triangulation technique.
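The element-counting measurement just described reduces to simple geometry: the spot's shift on the array, divided by the lens magnification, gives the shift at the part, and dividing that by the sine of the angle between the projection and viewing axes converts it to a height change. A minimal sketch; the parameter names and the single-angle geometry are simplifying assumptions for illustration, not a vendor formula:

```python
import math

def height_change(pixel_shift, pixel_pitch_mm, magnification, tri_angle_deg):
    """Estimate the surface height change from the spot's shift on the array.

    pixel_shift     -- number of sensor elements the spot moved from reference
    pixel_pitch_mm  -- center-to-center element spacing of the array (mm)
    magnification   -- lateral magnification of the imaging lens
    tri_angle_deg   -- angle between projection and viewing axes (degrees)
    """
    shift_on_detector = pixel_shift * pixel_pitch_mm     # mm in the image plane
    shift_at_part = shift_on_detector / magnification    # mm in object space
    return shift_at_part / math.sin(math.radians(tri_angle_deg))

# Illustrative numbers: 12-element shift on a 13 µm pitch array,
# 0.5x lens, 30° triangulation angle -> about 0.62 mm of height change.
dz = height_change(12, 0.013, 0.5, 30.0)
```

The sketch also shows why accuracy depends on the geometry: a shallower triangulation angle or lower magnification spreads the same height change over fewer sensor elements.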
Arrangements of multiple such units can be configured to accommodate virtually any combination of shapes and sizes.

Figure 14.2 System offered by CyberOptics that employs laser triangulation principles to make dimensional measurements.

Figure 14.3 Depiction of light-sectioning principles.

14.2— Simple Photoelectric Vision

Optical methods can be used to provide edge guidance, typically associated with opaque web products (paper, rubber, etc.). Two photoelectric "scanners" are used, one above and one below the web. Each scanner includes an emitter and a receiver arranged so that, when the two units are in operation, each receiver sees light from the other's emitter. By phase-locking techniques, the two beams can provide edge-guidance feedback.

14.3— Linear Diode Arrays

An alternative approach is to use two linear diode arrays positioned at the edges (Figure 14.4). Differences in edge locations are simultaneously detected and used to determine edge positional offset.

Linear array cameras are well suited to making measurements on objects in motion, both perpendicular to and along the line of travel. Perpendicular measurements are derived by pixel-counting techniques. The resolution of measurement along the axis of travel is determined by the scan rate of the system. Higher resolution can be achieved by increasing the frequency of data gathering and the number of pixels in the array.

Figure 14.4 Gauging with linear array cameras.

Another application for which linear arrays are well suited is pattern recognition to control the amount of spray material released. In these systems the array scans the product as it passes on the conveyor. The image data associated with the object's extremities are stored and fed back to the spray mechanism to control the spray pattern. This is especially useful where different sizes and shapes are commingled on the conveyor.
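The two measurement axes described above can be sketched directly: across the line of travel, size is the pixel count times the per-pixel footprint; along the travel axis, it is the scan count times the distance the part moves between scans. A minimal illustration with hypothetical numbers (the field of view, belt speed, and scan rate are ours, not from the text):

```python
def object_width(pixels_covered, field_of_view_mm, array_pixels):
    """Width across the line of travel, by pixel counting."""
    return pixels_covered * (field_of_view_mm / array_pixels)

def object_length(scans_covered, line_speed_mm_s, scan_rate_hz):
    """Length along the travel axis; one scan's worth of travel
    (speed / scan rate) sets the resolution on this axis."""
    return scans_covered * (line_speed_mm_s / scan_rate_hz)

# Hypothetical setup: 2048-element array over a 100 mm field of view;
# belt moving 250 mm/s, scanned at 2 kHz.
width_mm = object_width(512, 100.0, 2048)      # -> 25.0 mm
length_mm = object_length(400, 250.0, 2000.0)  # -> 50.0 mm
```

Note how the two resolution limits differ: doubling the array's pixel count halves the per-pixel footprint across the web, while doubling the scan rate halves the travel per scan along it.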
14.4— Fiber-Optic Arrangements

Fibers within a bundle can be custom arranged for specific applications (Figure 14.5). For example, to detect the presence of the edge of a moving web and to control its position, versions with three bundles can be used. With this arrangement and a special photoelectric switch having one emitter and two receptors, two relay outputs can be obtained, capable of controlling web width and position.

Figure 14.5 Simple fiber-optic photoelectric sensor.

14.5— Laser Scanners

In laser scanner systems, a laser beam is deflected along a line across the object under investigation. A detector measures the irradiance

1. transmitted through a translucent or transparent object, or not intercepted by an opaque or strongly reflecting and/or absorbing object;
2. intercepted by the object and scattered or specularly reflected by it; and
3. evident at the surface of the object at the incidence point on the line.

These techniques can be used to make measurements, check for presence and absence, and assess surface quality (e.g., pits, scratches, pinholes, dents, distortions, and striations).

In the case of measurements, a typical laser gaging system (Figure 14.6) uses a rotating mirror to scan the laser across the part. The beam is converged by a lens into a series of highly parallel rays arranged to intercept the part being measured. A receiver unit focuses the scanning rays onto a sensor. Because the speed of the scanning mirror is controlled, the time the photodetector "sees" the part's shadow can be accurately related to the dimensional characteristic presented by the shadow.

In the case of surface characterization, a similar laser scanner arrangement projects light across the object. By positioning the photodetector properly, only light scattered by a blemish will be detected. Analysis of the amplitude and shape of the signal can, in some cases, characterize the flaw as well as discriminate its size.
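Because the mirror speed is controlled, the shadow measurement above reduces to a time multiplied by a scan velocity. A minimal sketch; the scan speed and shadow duration are illustrative values, not from the text:

```python
def part_dimension(shadow_time_s, scan_speed_mm_s):
    """Dimension subtended by the part's shadow during one laser scan.

    shadow_time_s   -- time the photodetector is dark (seconds)
    scan_speed_mm_s -- linear speed of the collimated beam at the part (mm/s)
    """
    return shadow_time_s * scan_speed_mm_s

# A shadow lasting 80 µs in a beam sweeping 100,000 mm/s spans about 8 mm.
diameter_mm = part_dimension(80e-6, 100_000.0)
```

The accuracy of the result hinges on the claim in the text: the mirror's rotational speed, and hence the beam's linear scan speed, must be tightly controlled for the time-to-dimension conversion to hold.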
14.6— Laser Interferometer

Interferometers function by dividing a light beam into two or more parts that travel different paths and then recombine to form interference fringes. The shape of the fringes is determined by the difference in optical path traveled by the recombined beams. Interferometers measure this difference in units of the wavelength of light. Since the optical path is the product of the geometric path and the refractive index, an interferometer measures the difference in geometric path when the beams traverse the same medium, or the difference in refractive index when the geometric paths are equal. An interferometer can thus measure three quantities:

1. Difference in optical path,
2. Difference in geometric path, and
3. Difference in refractive index.

Figure 14.6 Principles of laser-gauging approach to dimensional measurements.

Laser interferometers are used to perform in-process gaging on machine tools. The laser is directed parallel to the Z-axis of the machine toward a combination 90° beam bender and remote interferometer cube. The beam bender-interferometer is rigidly attached to the Z-axis slide and redirects the optical beam path parallel to the X-axis, toward the cutting position at the tool turret. The beam is thus directed at a retroreflector attached to the moving element of a turret-mounted mechanical gage head. The actual measured distance is between the retroreflector on the gage head and the interferometer.

14.7— Electro-Optical Speed Measurements

Laser velocimeters exist for the noncontact measurement of the speed of objects, webs, and so on. Some are based on the Doppler effect. In these cases, a beam splitter breaks the laser beam into two identical beams that are directed onto the surface of the object at slightly different angles with respect to the direction of motion. Both beams are aligned to meet at the same point on the object's surface.
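As Section 14.6 notes, interferometers measure path differences in units of the wavelength of light. In a displacement-measuring arrangement with a retroreflector, each counted fringe corresponds to half a wavelength of carriage travel, since the beam traverses the path twice. A hedged sketch, assuming a helium-neon laser (632.8 nm) and an idealized fringe counter; refractive-index compensation, which real machine-tool systems apply, is omitted:

```python
def displacement_mm(fringe_count, wavelength_nm=632.8):
    """Displacement from counted fringes in a double-pass (retroreflector)
    interferometer: each fringe is half a wavelength of travel."""
    return fringe_count * (wavelength_nm / 2.0) * 1e-6   # nm -> mm

# About 31,640 fringes of HeNe light correspond to roughly 10 mm of travel.
x_mm = displacement_mm(31640)
```

The sketch makes the resolution argument concrete: one fringe of HeNe light is about 0.3 µm of travel, which is why interferometers suit in-process gaging on machine tools.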
The frequency of the reflected light is shifted, relative to the original light, by the movement of the object. The shifted frequencies are superimposed so that a low-frequency beat (interference fringe pattern) is produced that is proportional to the speed of the moving object.

14.8— Ultrasonics

Ultrasonic testing equipment beams high-frequency sound (1–10 MHz) into material to locate surface and subsurface flaws. The sound waves are reflected at such discontinuities, and these reflected signals can be observed on a CRT to disclose internal flaws in the material. Cracks, laminations, shrinkage cavities, bursts, flakes, pores, bonding faults, and other breaks can be detected even deep in the material. Ultrasound techniques can also be used to measure the thickness of materials, or changes in it.

14.9— Eddy Current

Eddy current displacement measuring systems rely on inductive principles. When an electrically conductive material is subjected to an alternating magnetic field from an exciting coil, small circulating electrical currents are generated in the material. These "eddy currents" generate their own magnetic field, which interacts with the field of the exciting coil, thereby influencing its impedance. Changes in the impedance of the exciting coil can be analyzed to determine something about the target: to evaluate and sort material; to measure the conductivity of electrical hardware; to test metals for surface discontinuities; and to measure coating thickness, thermal conductivity, and the aging and tensile strength of aluminum and its alloys. Such measurements are useful in finding defects in rod, wire, and tubing.

14.10— Acoustics

Acoustic approaches based on pulse-echo techniques (Figure 14.7), in which emitted sound waves reflected from objects are detected, can be used for part presence detection, distance ranging, and shape recognition.
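The pulse-echo ranging and thickness measurements of Sections 14.8 and 14.10 come down to one time-of-flight relation: the reflector depth (or target range) is half the round-trip time multiplied by the sound velocity in the medium. A minimal sketch; the steel velocity is a typical handbook figure and the function name is ours:

```python
def echo_depth_mm(round_trip_us, velocity_m_s):
    """Depth of a reflector from pulse-echo time of flight.

    round_trip_us -- time from emitted pulse to received echo (microseconds)
    velocity_m_s  -- sound velocity in the medium (m/s)
    The factor of 2 accounts for the out-and-back path.
    """
    return (round_trip_us * 1e-6) * velocity_m_s * 1000.0 / 2.0  # -> mm

# A flaw echo arriving 3.4 µs after the pulse in steel (~5,900 m/s
# longitudinal velocity) indicates a reflector roughly 10 mm deep.
depth_mm = echo_depth_mm(3.4, 5900.0)
```

The same arithmetic covers thickness gaging (the back-wall echo is the reflector) and the acoustic ranging of Section 14.10, with the speed of sound in air substituted for the velocity in steel.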
In the case of part presence, if the part is present, there is a return signal at the detector. In the case of ranging, the sensor measures the time of flight between an emitted pulse of acoustic energy and the received pulse reflected from the object.

Figure 14.7 Acoustic-based pattern recognition.

In shape recognition, the system uses sound waves of a fixed frequency, usually 20 or 40 kHz. The fixed-frequency sound wave reflects off objects and sets up an interference pattern as the waves interfere constructively and destructively. An array of ultrasonic transducers senses the acoustic field set up by the emitter at a number of distinct locations, typically eight. Pattern recognition algorithms deduce whether the shape is the same as a previously taught shape by comparing interference patterns.

14.11— Touch-Sensitive Probes

Touch-sensitive probes employ some type of sensitive electrical contact that can detect deflection of the probe tip from a home position and provide a voltage signal proportional to the deflection. When such probes are mounted on a machine, the electrical signal corresponding to probe deflection can be transmitted to a control system. In this manner they can serve as a means to determine where and when the workpiece has been contacted. By comparing the actual touch location with the programmed location in the part program, dimensional differences can be determined.

Appendix A— Glossary

A

Aberration: Failure of an optical lens to produce exact point-to-point correspondence between an object and its image.

Accuracy: Extent to which a machine vision system can correctly interpret an image, generally expressed as a percentage reflecting the likelihood of a correct interpretation; also, the degree to which the arithmetic average of a group of measurements conforms to the actual value or dimension.
ACRONYM: Model-based vision technique developed at Stanford University that uses invariant and pseudoinvariant features predicted from the given object models; the object is modeled by its subparts and their spatial relationships.

Active Illumination: Illumination that can be varied automatically to extract more visual information from the scene; for example, by turning lamps on and off, by adjusting brightness, by projecting a pattern on objects in the scene, or by changing the color of the illumination.

A/D: Acronym for analog to digital; an A/D converter converts data from analog form to digital form.

Algorithm: Exact sequence of instructions, with a finite number of steps, that tells how to solve a problem.

Aliasing: Effect caused by too low a sampling frequency for the spatial frequencies in an image; the apparent spatial frequency in the sampled image is much lower than the original frequency, making repetitive small features look large.

Ambient Light: Light present in the environment around a machine vision system and generated by sources outside the system itself. This light must be treated as background noise by the vision system.

Analog: Representation of data as a smooth, continuous function.

Analog-to-Digital Converter: Device that converts an analog voltage signal to a digital signal for computer processing.

Angle of Incidence: Angle between the axis of an impinging light beam and the perpendicular to the specimen surface.

Angle of View: (1) Angle formed between two lines drawn from the most widely separated points in the object plane to the center of the lens. (2) Angle between the axis of observation and the perpendicular to the specimen surface.

Aperture: Opening that will pass light; the effective diameter of the lens, which controls the amount of light passing through the lens and reaching the image plane.

Area Analysis: Process of determining the area of a given view that falls within a specified gray level.
Area Diode Array: Solid-state video detector that consists of rows and columns of light-sensitive semiconductors. Sometimes referred to as a matrix array.

Array Processor: Programmable computer peripheral, based on specialized circuit designs, that relieves the host computer of high-speed number-crunching calculations by simultaneously performing operations on a portion of the items in large arrays.

Artificial Intelligence: Approach in computing that emphasizes symbolic processes for representing and manipulating knowledge in solving problems. This gives a computer the ability to perform certain complex functions normally associated with human intelligence, such as judgment, pattern recognition, understanding, learning, planning, classifying, reasoning, self-correction, and problem solving.

Aspect Ratio: Ratio of width to height for the frame of a televised picture; the U.S. standard is 4:3. Also, the value obtained when the larger scene dimension is divided by the smaller scene dimension; e.g., for a part measuring 9 × 5 in., the aspect ratio is 9 divided by 5, or 1.8.

Astigmatism: Lens aberration associated with the failure of primary and secondary images to coincide.

Autofocus: Computer-controlled function that automatically adjusts the optical system to obtain the sharpest image at the image plane of the detector.

Automatic Gain Control: Camera circuit by which gain is automatically adjusted as a function of input or another specified parameter.

Automatic Light Control: Television camera circuit by which the illumination incident upon the face of a pickup device is automatically adjusted as a function of scene brightness.

Automatic Light Range: Television camera circuit that ensures maximum camera sensitivity at the lowest possible light level as well as provides an extended dynamic operating range from bright sun to low light.
Automatic Vision Inspection: Technology that couples video cameras and computers to inspect various items or parts for a variety of reasons. The part to be inspected is positioned in a camera's field of view. The part's image is first digitized by the computer and then stored in the computer's memory. Significant features of the stored image are then "compared" with the same features of a known good part that has been previously placed in the computer's memory. Any difference between the corresponding characteristics of the two parts will be either within a tolerance, and hence good, or out of tolerance, and therefore bad. See also Computer Vision and Machine Vision.

B

Back Focal Distance: Distance from the rearmost element in a lens to the focal plane.

Backlighting: Condition where the light reaching the image sensor is not reflected from the surface of the object. Backlighting often produces a silhouette of the object being imaged.

Back Porch: That portion of a composite picture signal that lies between the trailing edge of a horizontal sync pulse and the trailing edge of the corresponding blanking pulse.

[...]

RS-232: EIA standard reflecting properties of a serial communication link.

RS-232-C, RS-422, RS-423, RS-449: Standard electrical interfaces for connecting peripheral devices to computers.

[...]

SRI Vision Module: Object recognition, inspection, orientation, and location research vision system developed at SRI International; based on converting the scene into a binary image and extracting [...]
[...]shade Discrimination: Degree to which a vision system is capable of sensing differences [...]

Distortion: Undesired change in the shape of an image or waveform from the original object or signal.

Dyadic Operator: Operator that represents an operation on two, and only two, operands. The dyadic operators are AND, equivalence, exclusion, exclusive OR, inclusion, NAND, NOR, and OR.

Dynamic Range: Ratio of the maximum [...]

[...] green, and blue; a three-primary-color system used for sensing and representing color images.

Roberts Cross Operator: Operator that yields the magnitude of the brightness gradient at each point, as a means of edge detection in an image.

Robot Vision: Use of a vision system to provide visual feedback to an industrial robot. Based upon the vision system's interpretation of a scene, the robot may be commanded [...]
[...] of straight lines and arcs.

Condenser: Lens used to collect and redirect light for purposes of illumination.

Congruencing: Process by which two images of a multi-image set are transformed so that the size and shape of any object on one image is the same as the size and shape of that object on the other image. In other words, when two images are congruenced, their geometries are the same, and they coincide [...]

[...] charge injection device.

Classification: See Identification.

Closed-Circuit Television: Television system that transmits signals over a closed circuit rather than broadcasting the signals.

C Mount: Threaded lens mount developed for 16-mm movie work; used extensively for closed-circuit television. The threads have a major diameter of 1.000 in. and a pitch of 32 threads per inch. The flange focal distance is 0.69 in. [...]

[...] a maximum value of 1, and very elongated objects have a compactness approaching zero.

Compass Gradient Mask: Linear filter based on specific weighting factors of nearest-neighbor pixels.

Complementation: Logical operation that interchanges the black and white regions in an image.

Composite Sync: Combination of horizontal and vertical sync into one pulse.

Composite Video: In television, the signal created by combining the picture signal (video), the vertical and horizontal synchronization signals, and the vertical and horizontal blanking signals.

Computer Vision: Perception by a computer, based on visual sensory input, in which a symbolic description is developed of a scene depicted in an image. It is often [...]
Abbott, E., and Bolhouse, V.,. "Never trust a machine vision vendor that uses the phrase 'piece of cake.' " References Applications Abbott, E. H., "Specifying a Machine Vision System," Vision 85 Conference
