Image Acquisition and Processing

Part of the document Advances automation techniques in adaptive material processing (pages 175–180)

3. Vision-Based Seam Tracking

3.4 Image Acquisition and Processing

The structured light sensor operates on the principle of triangulation as shown in Figure 20. A sheet of light is generated using a combination of a laser diode and cylindrical lens. This sheet of light is arranged to fall across the camera field of view at a known angle. The detector (CCD camera or CCD-line detector) receives the reflected laser beam at an angle. An optical filter (band-pass filter) filters out all other light rays arising from welding, only allowing light at the same frequency as that of the laser beam. At different depths, the detector receives the reflected laser beam at different angles, and the depth can then be calculated by triangulation.
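The triangulation relation described above can be sketched as follows. This is a minimal illustration assuming an overhead camera and a laser sheet inclined at a known angle to the vertical; the function name and the calibration parameters (pixel pitch and optical magnification) are assumptions for the example, not values from the sensor described here.

```python
import math

def height_from_shift(shift_px, pixel_pitch_mm, magnification, laser_angle_deg):
    """Triangulation sketch: the camera looks straight down and the laser
    sheet strikes the surface at laser_angle_deg from the vertical.

    A surface-height change h shifts the laser stripe sideways by
    h * tan(angle); the camera sees that as an image shift of
    magnification * h * tan(angle) / pixel_pitch pixels.  This inverts
    that relation to recover h from the measured pixel shift.
    """
    lateral_mm = shift_px * pixel_pitch_mm / magnification
    return lateral_mm / math.tan(math.radians(laser_angle_deg))

# For example, a 10-pixel stripe shift with 0.01 mm pixels, unit
# magnification and a 45-degree laser corresponds to a 0.1 mm height change.
```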

Although the camera is mounted above the workpiece, it is able to detect the depth changes along the cross-section of the workpiece, i.e. the image captured is similar to looking at the workpiece from the front view.

The image is then passed through a hardware filter to remove noise. It is then digitised and stored in pixel memory by a frame grabber board.

3.4.2 Capturing the Image

VisionBlox image acquisition and preparation tools are used to grab an image. The programming of capturing an image is summarised in Figure 28:

1. Define the frame grabber vision board
2. Define an editable image for image display
3. Grab the image

Figure 28 Image acquisition programming.

The process of capturing the image is summarised in Figure 29. The size of the digitised image is 768 by 572 pixels with a greyscale level of 0 to 255. The origin of the image (0,0) is at the top left-hand corner of the image (see Figure 30).

1. Hardware filter and laser beam of the sensor head are switched on
2. CCD camera captures the image
3. Signals go through the hardware filter
4. Frame grabber board digitises the analogue image
5. The digitised image is stored in the computer memory

Figure 29 Processing of image acquisition.

Figure 30 A 768 by 572 pixels captured image with origin (0,0) at top-left corner.

3.4.3 Image Preprocessing

Before the captured image can be processed to obtain the seam point, it has to be "cleaned up" first or to undergo some image preprocessing. Image preprocessing can be carried out using built-in functions from VisionBlox.

3.4.4 Erosion

Erosion works (at least conceptually) by translating a structuring element to various points in the input image, and examining the intersection between the translated kernel coordinates and the input image coordinates. It takes two pieces of data as inputs (see Figure 31). The first one is the image that will be eroded. The second is a (usually small) set of coordinate points known as a structuring element (also known as a kernel). It is this structuring element that determines the precise effect of the erosion on the input image.

Figure 31 Original image and image after erosion.

The effect of greyscale erosion is generally to darken the image. Bright regions surrounded by dark regions shrink in size, and dark regions surrounded by bright regions grow in size. Small bright spots in images will disappear as they are eroded down to the surrounding intensity value, and small dark spots will become larger. The effect is most marked at places in the image where the intensity changes rapidly; regions of fairly uniform intensity are left more or less unchanged except at their edges.
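As a rough illustration, greyscale erosion with a flat square structuring element can be written as a neighbourhood-minimum filter. This is a sketch of the concept only; the kernel shape and size are assumptions, and the actual system uses the built-in VisionBlox function.

```python
import numpy as np

def grey_erode(image, kernel_size=3):
    """Greyscale erosion with a flat square structuring element:
    each output pixel is the minimum over its neighbourhood, so
    bright details shrink and dark details grow."""
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + kernel_size, x:x + kernel_size].min()
    return out
```

A single bright pixel on a uniform background, for example, is eroded down to the background value, which is exactly the "small bright spots disappear" behaviour described above.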

In VisionBlox, erosion is used mainly to eliminate fine detail from features, enlarge gaps and holes in features, or decrease the size of a feature without changing its general shape.

3.4.5 Blobs Tool Control

The Blobs Tool Control is used to segment the inspection image into areas of similar intensity. Information concerning the blobs, such as size, number, and location, is then made available to the user. The Blobs Tool Control performs a blob, or connectivity, analysis on the inspection image and displays the results on it (see Figure 32). The tool can operate on the whole image, or the user can define a region of interest (ROI) using an editable shape control.
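The connectivity analysis that the Blobs Tool performs can be sketched as a 4-connected component labelling over a binarised image. This is a minimal illustration of the idea, not the VisionBlox implementation; the function name and the (label, pixel count) result format are invented for the example.

```python
from collections import deque

def label_blobs(binary):
    """4-connected component labelling of a binary image (nested lists).
    Returns a label map plus a (label, pixel_count) summary per blob."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    blobs = []
    next_label = 1
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                # Flood-fill one connected region with a fresh label.
                queue = deque([(sy, sx)])
                labels[sy][sx] = next_label
                size = 0
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                blobs.append((next_label, size))
                next_label += 1
    return labels, blobs
```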

Figure 32 Image after undergoing erosion, thinning and blobs tool.

3.5 Seam Detection Algorithms

Different algorithms have been considered for implementation of the seam tracking controller [13,18]. These include template matching, feature finding, the Hough Transform, and detection based on pixel intensity. We describe below the seam detection algorithm using 'feature find', as it was found to be superior to the other algorithms considered for the task at hand.

The primary function of the feature-find tool is to locate a given feature within an image. The feature-find tool uses a normalised correlation engine.

If a feature is located with this tool, a resultant position is made available.

Each find has an associated correlation score, the angle at which the model was found, and status flags indicating that the model was found within the region of interest, or if it was found on an edge.

One reason the feature-finding tool is so fast is that it uses a series of heuristics to speed up processing. Consider a normal convolution process in which a kernel of a certain size is moved over the image pixel by pixel: this image-matching comparison is normally processor-intensive and slow. By using sub-samples of both the search image and the model, regions indicating high potential matches are found more quickly.
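The idea can be sketched as a normalised correlation score combined with a coarse-to-fine search: score only every few positions first, then refine around the best coarse hit. The step size and function names here are illustrative assumptions; the actual heuristics inside the feature-find tool are not documented in this section.

```python
import numpy as np

def ncc(patch, model):
    """Normalised correlation score in [-1, 1] (1 = perfect match)."""
    p = patch - patch.mean()
    m = model - model.mean()
    denom = np.sqrt((p * p).sum() * (m * m).sum())
    return float((p * m).sum() / denom) if denom else 0.0

def find_feature(image, model, step=4):
    """Coarse-to-fine template search: score every `step`-th position,
    then re-score every position around the best coarse hit."""
    mh, mw = model.shape
    ih, iw = image.shape
    best_score, best_y, best_x = -2.0, 0, 0
    for y in range(0, ih - mh + 1, step):           # coarse pass
        for x in range(0, iw - mw + 1, step):
            s = ncc(image[y:y + mh, x:x + mw], model)
            if s > best_score:
                best_score, best_y, best_x = s, y, x
    for y in range(max(0, best_y - step), min(ih - mh, best_y + step) + 1):
        for x in range(max(0, best_x - step), min(iw - mw, best_x + step) + 1):
            s = ncc(image[y:y + mh, x:x + mw], model)  # fine pass
            if s > best_score:
                best_score, best_y, best_x = s, y, x
    return best_score, best_y, best_x
```

The coarse pass visits only about 1/step² of the candidate positions, which is where the speed-up comes from; the fine pass then recovers the exact location.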

Once the feature-finding tool is executed, it will indicate whether it successfully found its model by setting the "found" property to true. When the "found" property is set to true, the X and Y coordinates of the found models can be read from the "XPosition" and "YPosition" properties.

In this application, only one trained feature or reference feature, which is normally at the starting position, is used. Whenever the system captures a new frame, it uses this trained feature to locate any similar feature in the acquired image.

As the process is started, the system uses the trained feature to locate the seam point in every image acquired. Once the identical feature is found, the "found" property is set to true, and the X and Y coordinates of the identical feature are read in pixels. Note, however, that the X coordinate of the image represents the vertical (z) axis of the real world.
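A minimal sketch of using the found position, assuming hypothetical calibration factors for converting pixels to millimetres: the XPosition/YPosition pixel coordinates come from the text above, but the function name and scale values here are invented for illustration.

```python
def pixel_to_world(x_px, y_px, ref_x_px, ref_y_px, mm_per_px_x, mm_per_px_y):
    """Convert a found-feature pixel position into real-world offsets from
    the reference (trained) feature.  Because of the sensor geometry, the
    image X axis maps to the vertical (z) axis of the workpiece, and the
    image Y axis to the lateral axis."""
    dz_mm = (x_px - ref_x_px) * mm_per_px_x  # image X -> world z
    dy_mm = (y_px - ref_y_px) * mm_per_px_y  # image Y -> world lateral
    return dz_mm, dy_mm
```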
