
ISSN 1859-1531 - TẠP CHÍ KHOA HỌC VÀ CÔNG NGHỆ ĐẠI HỌC ĐÀ NẴNG, SỐ 11(96).2015, QUYỂN 2

A METHOD FOR ESTIMATING VEHICLE SPEED BASED ON A MOTION ESTIMATION ALGORITHM
PHƯƠNG PHÁP ƯỚC LƯỢNG VẬN TỐC PHƯƠNG TIỆN DỰA TRÊN THUẬT TOÁN DỰ ĐOÁN CHUYỂN ĐỘNG

Do Viet Hoa¹, Nghiem Le Hoa¹, Tran Thanh², Pham Ngoc Nam¹
¹Hanoi University of Science and Technology; nam.phamngoc@hust.edu.vn
²Dong A University, Danang City; thanht@donga.edu.vn

Abstract - Vehicle speed is an important parameter used to evaluate a traffic situation. In this paper, a novel method for estimating vehicle speed based on motion estimation is proposed. Firstly, after image flattening and cropping are performed, a differential image is extracted from two consecutive frames using the two-frame difference method. Then the common motion vector between two consecutive difference images is computed via a phase correlation algorithm. Finally, by means of the scale factor and the computed motion vector, the common vehicle speed is calculated. The experimental results show that, compared with other existing methods for estimating vehicle speed, the proposed method not only produces equivalent values in normal conditions but also produces more stable values in cases where there are unusual movements on the road.

Tóm tắt - Vận tốc phương tiện là một đại lượng quan trọng để đánh giá tình trạng giao thông. Trong bài báo này, một phương pháp ước lượng vận tốc dựa trên dự đoán chuyển động được đề xuất. Đầu tiên, sau khi thực hiện trải phẳng ảnh và cắt lấy vùng quan sát, ảnh sai khác được tính toán dựa trên hai khung hình liên tiếp. Sau đó vector chuyển động chung được tính toán dựa trên việc áp dụng phương pháp tương quan pha cho hai ảnh sai khác liên tiếp. Cuối cùng, bằng việc sử dụng hệ số tỉ lệ và vector chuyển động, vận tốc chung của các phương tiện được đưa ra. Kết quả thử nghiệm cho thấy phương pháp không chỉ đưa ra giá trị vận tốc tương đương trong điều kiện thông thường mà còn đưa ra giá trị vận tốc ổn định hơn khi có di chuyển bất thường trên đường, so với các phương pháp ước lượng vận tốc đã biết.

Key words - traffic camera; motion estimation; phase correlation; estimation; speed
Từ khóa - camera giao thông; dự đoán chuyển động; tương quan pha; ước lượng; vận tốc

1. Introduction

Nowadays, traffic surveillance cameras are widely used in traffic monitoring and control systems. Much traffic information, including vehicle speed, can be extracted from captured traffic videos. Most of the existing algorithms use two or more frames to calculate vehicle speed in a video. These algorithms recognize feature points or objects in the previous frame and track them in the current frame. The method proposed in [1] considers license plates as feature objects and tries to recognize and track them. In [2], a method is proposed to extract the contour of the vehicles and track all the points on the contour separately. The methods based on tracking feature points or objects can produce reliable vehicle speed, but when the traffic scene is complex, with several different types of moving objects such as cars, motorbikes, or even walking people, the estimated speed may be inaccurate and become unstable. Some other studies estimate vehicle speed from only one frame. The study in [3] tries to estimate the vehicle speed from the image of blurred vehicles captured by a camera in a slow mode. The method proposed in [4], which is developed for night scenes based on a similar idea, measures the length of the car lights in a traffic image to calculate the vehicle speed. Both studies use a road-side camera and are only tested with scenes consisting of a single vehicle.

This paper proposes a method for estimating vehicle speed based on motion estimation, which can produce reliable vehicle speed estimates not only in normal cases but also in unusual scenes consisting of many types of moving objects with different directions and speeds. The paper is organized as follows: Section 2 provides an overview of the proposed method; Section 3 gives the detailed description of the method; Section 4 shows the test cases and the experimental results; finally, Section 5 concludes the paper.

2. Overview of the proposed method

The vehicle speed estimation method consists of the following steps (Figure 1):

Figure 1. Overview of the proposed method (flowchart: Start → Convert to grayscale → Flatten and ROI-crop → Compute differential mask → Find motion vector → Calculate vehicle speed → End)

- Converting video frames into grayscale images, in order to make the later processing steps less complex since they only have to work with a single color channel.
- Flattening and ROI (Region of Interest) cropping the images to make them identical in the scale factor (i.e. the relation between the motion vector on the image and the real movement on the ground).
- Computing differential masks to produce binary masks that are considered as the masks of movement.
- Finding the motion vector by applying motion estimation to two sequential binary masks.
- Calculating the vehicle speed from the motion vector and the scale factor.
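To illustrate how these five steps could be chained together, the following minimal Python/OpenCV sketch loops over a video and yields one common speed per frame pair. The paper provides no code, so the homography H, the ROI, the scale factors KX/KY, the frame rate FPS and the km/h conversion are assumed placeholder values; the phase-correlation step of Section 3.4 is delegated here to OpenCV's cv2.phaseCorrelate.

```python
import cv2
import numpy as np

# Placeholder calibration data (assumed; would come from camera calibration, Section 3.2).
H = np.eye(3, dtype=np.float64)            # flattening homography
ROI = (slice(200, 600), slice(100, 900))   # usable road region (rows, cols)
KX, KY = 0.05, 0.05                        # scale factors, m/pixel
FPS = 25.0                                 # video frame rate

def preprocess(frame):
    """Grayscale conversion, flattening and ROI cropping (Sections 3.1-3.2)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flat = cv2.warpPerspective(gray, H, (gray.shape[1], gray.shape[0]))
    return flat[ROI]

def differential_mask(prev_gray, cur_gray):
    """Two-frame difference thresholded with Otsu's method (Section 3.3)."""
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def estimate_speeds(video_path):
    """Yield one common speed estimate (km/h) per processed frame pair."""
    cap = cv2.VideoCapture(video_path)
    prev_gray, prev_mask = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = preprocess(frame)
        if prev_gray is not None:
            mask = differential_mask(prev_gray, gray)
            if prev_mask is not None:
                # Common motion vector between two consecutive masks (Section 3.4).
                (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_mask),
                                                 np.float32(mask))
                # Ground distance per frame and vehicle speed (Section 3.5).
                dist = np.hypot(dx * KX, dy * KY)
                yield dist * FPS * 3.6      # m/s -> km/h (conversion added here)
            prev_mask = mask
        prev_gray = gray
    cap.release()
```

Each step of this sketch is detailed in the corresponding subsection of Section 3.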
3. Detailed design of the method

In this section, the details of each processing step are described.

3.1. Converting to grayscale

The input frames are True Color RGB images consisting of three color channels, namely Red, Green and Blue. In most image processing tasks, working with grayscale images is more common and effective than working with True Color ones. Therefore, before any further processing, the input frames have to be converted to grayscale by using the following conversion equation:

I(x, y) = 0.299 × R(x, y) + 0.587 × G(x, y) + 0.114 × B(x, y)    (1)

where:
- I(x, y): the pixel value at (x, y) of the grayscale image;
- R(x, y), G(x, y), B(x, y): the values at (x, y) of the red, green and blue channels, respectively.

After this step, each pixel in the image has only an intensity value, as illustrated in Figure 2.

Figure 2. Grayscale traffic image

3.2. Flattening and ROI cropping

To calculate the vehicle speed from the motion vector, the scale factor has to be known. The flattening step aims to produce an image which has the same scale factor at every pixel. It can be achieved by performing a perspective transformation. The projection equation is given below:

[X, Y, 1]ᵀ ≅ H · [x, y, 1]ᵀ,   H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]    (2)

where:
- (X, Y): the world coordinate of the point;
- (x, y): the pixel coordinate of the point on the image;
- H: the transformation matrix (≅ denotes equality up to scale in homogeneous coordinates).

With a specific camera, the transformation matrix is constant and can be calculated by camera calibration algorithms [5]. In most cases, it depends on the tilt angle, the pan angle, the focal length and the scale factor. After image flattening is performed, the scale factor between a distance on the image and the real distance on the ground is constant (Figure 3). Another step to reduce the computational cost is ROI cropping, which keeps only the usable regions of the image (e.g. the road).

Figure 3. Flattened traffic image
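The paper does not spell out how the transformation matrix of equation (2) is obtained for a given camera. One common way, sketched below under the assumption that four image points with known ground positions (e.g. lane-marking corners) are available, is to build the flattening homography with cv2.getPerspectiveTransform and to read the scale factors k_x, k_y directly off the chosen resolution of the flattened image. The point correspondences and output size are illustrative, not the authors' calibration data.

```python
import cv2
import numpy as np

# Four image points (pixels) and their known positions on the ground (metres).
# Assumed/illustrative correspondences, measured once for a fixed traffic camera.
img_pts = np.float32([[420, 710], [860, 705], [760, 330], [505, 332]])
world_pts_m = np.float32([[0.0, 0.0], [7.0, 0.0], [7.0, 30.0], [0.0, 30.0]])

# Output resolution of the flattened ("bird's-eye") image, which fixes the
# metres-per-pixel scale factors k_x, k_y used later in equations (10)-(11).
out_w, out_h = 350, 1500            # pixels (assumed)
k_x = 7.0 / out_w                   # m/pixel along x
k_y = 30.0 / out_h                  # m/pixel along y
dst_pts = world_pts_m / np.float32([k_x, k_y])   # ground metres -> flattened pixels

# Transformation matrix of equation (2), expressed in flattened-image pixels
# (i.e. world metres divided by k_x, k_y so the result stays an image).
H = cv2.getPerspectiveTransform(img_pts, dst_pts)

def flatten(gray_frame):
    """Apply the perspective transformation so every pixel has the same scale factor."""
    return cv2.warpPerspective(gray_frame, H, (out_w, out_h))
```

After this one-off calibration, H, k_x and k_y stay constant for the camera, which is exactly the property the flattening step relies on.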
3.3. Computing differential masks

In this step, two consecutive frames are compared; all pairs of pixels which show a significant change are considered different, and the corresponding pixels in the mask are set to white (Figure 4).

Figure 4. Differential mask

The differential image between two consecutive frames, the n-th and the (n − 1)-th, is calculated by the following equation:

D_n(x, y) = |I_n(x, y) − I_{n−1}(x, y)|    (3)

where:
- D_n(x, y): the intensity difference at (x, y);
- I_n(x, y): the intensity value of the n-th frame at (x, y).

Then, all the pixels in the differential image are compared with a threshold value to determine whether the difference is significant or not. The thresholding equation is:

M_n(x, y) = 0 if D_n(x, y) < T;  255 if D_n(x, y) ≥ T    (4)

where:
- M_n(x, y): the differential mask value at (x, y) between the n-th and (n − 1)-th frames;
- T: the threshold value.

The value of T can be chosen automatically by the Otsu algorithm [5].

3.4. Finding motion vector

This is an essential step in our proposed method. Instead of finding feature points or objects, tracking them and calculating the motion vector of each feature, a motion estimation algorithm is used to calculate the motion vector of the whole image. This is the reason why the input image has to be flattened. The fast, noise-insensitive motion estimation algorithm used in our method is phase correlation, which finds the motion vector in the frequency domain. The algorithm is briefly described below.

Let f_1, f_2 be the two grayscale images and F_1, F_2 be the Discrete Fourier Transforms (DFT) of f_1, f_2, respectively:

F_1 = ℱ{f_1},  F_2 = ℱ{f_2}    (5)

Let M × N be the size of the images and suppose (Δx, Δy) is the motion vector:

f_2(x, y) = f_1((x − Δx) mod M, (y − Δy) mod N)    (6)

After applying the DFT we have:

F_2(u, v) = F_1(u, v) × e^{−j2π(uΔx/M + vΔy/N)}    (7)

The normalized cross-correlation between the two images in the frequency domain is:

R(u, v) = F_1(u, v) F_2*(u, v) / |F_1(u, v) F_2*(u, v)| = e^{j2π(uΔx/M + vΔy/N)}    (8)

Applying the inverse DFT we have:

r(x, y) = ℱ⁻¹{R} = δ(x + Δx, y + Δy)    (9)

The motion vector can then be found via the coordinate of the peak of r(x, y).

3.5. Calculating vehicle speed

With the motion vector found on the image, the movement distance on the ground can be calculated by means of the following equation:

d = √((Δx × k_x)² + (Δy × k_y)²)    (10)

where:
- d: the movement distance on the ground;
- k_x, k_y: the scale factors in the x- and y-coordinates (m/pixel).

The vehicle speed can finally be calculated as:

v = d × fps    (11)

where:
- v: the vehicle speed;
- fps: the number of frames per second.
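To make equations (5)-(11) concrete, here is a compact NumPy sketch written from the formulas above (it is not the authors' code). It recovers the motion vector from the peak of the phase-correlation surface, following the sign convention of equation (9), and then converts it into a speed; the scale factors (0.05 m/pixel), frame rate (25 fps) and the km/h conversion factor in the demo are assumed example values.

```python
import numpy as np

def phase_correlation_shift(f1, f2):
    """Motion vector (dx, dy) between two equally sized images, eqs (5)-(9)."""
    F1 = np.fft.fft2(f1.astype(np.float64))           # eq (5)
    F2 = np.fft.fft2(f2.astype(np.float64))
    cross = F1 * np.conj(F2)                          # F1 · F2*
    R = cross / (np.abs(cross) + 1e-12)               # eq (8), normalised
    r = np.real(np.fft.ifft2(R))                      # eq (9): delta at x = -dx, y = -dy (mod size)
    py, px = np.unravel_index(np.argmax(r), r.shape)  # peak location (row, col)
    n_rows, m_cols = f1.shape
    # Unwrap the circular peak indices and negate them to recover (dx, dy),
    # so a peak near the far edge is read as a small displacement.
    dx = -(px if px <= m_cols // 2 else px - m_cols)
    dy = -(py if py <= n_rows // 2 else py - n_rows)
    return dx, dy

def vehicle_speed_kmh(mask_prev, mask_cur, k_x, k_y, fps):
    """Common vehicle speed from two consecutive masks, eqs (10)-(11)."""
    dx, dy = phase_correlation_shift(mask_prev, mask_cur)
    d = np.hypot(dx * k_x, dy * k_y)   # ground distance per frame, eq (10)
    return d * fps * 3.6               # eq (11), converted from m/s to km/h

# Self-check with a synthetic mask moved by (dx=5, dy=3) pixels and assumed
# calibration values of 0.05 m/pixel and 25 fps.
if __name__ == "__main__":
    a = np.zeros((128, 128))
    a[40:60, 30:50] = 255
    b = np.roll(a, shift=(3, 5), axis=(0, 1))
    print(vehicle_speed_kmh(a, b, k_x=0.05, k_y=0.05, fps=25.0))
```

Because the whole mask is correlated at once, the recovered (Δx, Δy) is the dominant common displacement rather than the motion of any individual object, which is what makes the estimate robust to isolated unusual movements.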
4. Experiments

In this section, the proposed method is tested with several test cases: single vehicle tests and multiple vehicle tests. The proposed method is compared with the optical flow-based vehicle speed estimation method in [2], which has been shown to be highly accurate.

4.1. Single vehicle test

In the single vehicle tests (Figure 5), there is only one motorbike in the scene, which runs at a known speed and direction. This test is used to evaluate the accuracy of the proposed method.

Figure 5. The single vehicle test scene

The relative error between the speed estimated by our method and that estimated by the optical flow-based method is given below:

E = (1/N) × Σ_{i=1..N} |v_p[i] − v_o[i]| / v_o[i] × 100%    (12)

where:
- E: the relative error between the speed estimated by our method and the optical flow-based method;
- N: the total number of frames;
- v_p[i]: the speed estimated in the i-th frame by our method;
- v_o[i]: the speed estimated in the i-th frame by the optical flow-based method.

Figure 6. The estimated vehicle speed in the single vehicle test (speed in km/h per frame; optical flow vs. our method)

The experimental results (Figure 6) show that the vehicle speed estimated by our method is equivalent to the value produced by the optical flow-based method. Performing the test on many different videos, the average relative error is 0.71%. This value can be considered negligible.

4.2. Multiple vehicle test

In the multiple vehicle tests (Figure 7), there are four motorbikes in the scene running at a known speed and direction and one motorbike which runs at an arbitrary speed and direction. Sometimes there are also some bicycles and pedestrians in the scene. This test is used to evaluate the stability of the proposed method in complex scenes.

Figure 7. The multiple vehicle test scene

Figure 8. The estimated vehicle speed in the multiple vehicle test (speed in km/h per frame for two cases: a walking person in the scene, and a faster motorbike in the scene; optical flow vs. our method)

The experimental results (Figure 8) show that when the four motorbikes are running in the scene, the speeds estimated by the two methods are similar. But when the fifth motorbike runs at a much slower or faster speed, or when walking people appear in the scene, significant differences appear. The speed estimated by the optical flow-based method becomes unstable, especially when the speed of the fifth motorbike is slower than that of the other ones. By contrast, the speed estimated by our method remains stable and independent of individual movements. By applying motion estimation to the whole image, the found motion vector tends to represent all movements in the video; therefore, the final estimated speed is the average speed at which most of the vehicles in the scene run.

5. Conclusions

In this paper, we have proposed a vehicle speed estimation method that applies a phase-correlation motion estimation algorithm to the whole image. The experimental results show that the accuracy of our method is comparable with that of the optical flow-based one in single vehicle test cases, and that our method is more stable in cases where there are unusual movements in the scene.

REFERENCES

[1] G. Garibotto, P. Castello, E. D. Ninno, P. Pedrazzi and G. Zan, "Speed-Vision: Speed Measurement by License Plate Reading and Tracking," in IEEE Intelligent Transportation Systems Conference Proceedings, Oakland, 2001.
[2] J. Lan et al., "Vehicle Speed Measurement Based on Gray Constraint Optical Flow Algorithm," Optik - International Journal for Light and Electron Optics, pp. 289-295, 2013.
[3] H.-Y. Lin, "Vehicle Speed Detection and Identification from a Single Motion Blurred Image," in IEEE Workshop on Applications of Computer Vision (WACV/MOTION'05), Breckenridge, 2005.
[4] Y. Goda, L. Zhang and S. Serikawa, "Proposal a Vehicle Speed Measuring System Using Image Processing," in International Symposium on Computer, Consumer and Control (IS3C), Taichung, 2014.
[5] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

(The Board of Editors received the paper on 09/15/2015; its review was completed on 10/12/2015.)
