In practical use, however, locating the upper and lower edges of the license plate (characters) carries a certain error, and this is the main source of error in the algorithm. On the other hand, because the parameters are fairly numerous, measuring them on site is laborious; fortunately they need not be measured repeatedly, and a single measurement at first deployment is sufficient.

Chapter 5  Conclusions and Outlook

With the rapid development of transportation construction in China, the scale of tunnel, bridge, and road monitoring is expanding quickly, and relying on manual monitoring of vehicle operation can no longer meet today's increasingly complex traffic-management needs. The key to solving this problem lies in building an intelligent transportation system (ITS). As an important component of ITS, video-based traffic-information detection technology occupies a very important position. This thesis studied the instantaneous vehicle-speed detection module of a video-based traffic-incident and traffic-parameter detection system. Two components were investigated:

(1) Camera calibration. On the basis of linear-model calibration, and according to the characteristics of the field scene, a one-dimensional calibration algorithm was derived geometrically. In view of the shortcomings of one-dimensional calibration in the actual system, a compensation angle was added to extend it to a two-dimensional calibration, which was then simplified so that calibration accuracy is improved without increasing computational cost.

(2) Template matching and speed computation. Building on image matching, the template-matching method combines virtual loops with an image-matching algorithm: a suitable template is selected according to the loop position, matching is performed within a limited range, and several matching criteria are compared to obtain the pixel-level vehicle displacement. The speed-computation part analyzes the waveform of the matching scores, takes its trough points, computes the actual vehicle displacement, and combines it with the camera-calibration result to obtain the vehicle speed.

Although real-time measurement of instantaneous vehicle speed from video has been preliminarily realized and some progress has been made, the following aspects still require further study:

(1) Several camera-calibration algorithms are well established, but none of these typical algorithms applies accurately to every situation. In practice, they must be suitably improved or adjusted according to field experiments, so as to reduce both calibration error and computational load.

(2) Video speed measurement presupposes accurate binarized information. In reality, lighting conditions change drastically with the brightness of the tunnel lamps, and vehicle shadows leave the image heavily contaminated by noise, so the binarized result is very likely to contain shadow information. This degrades the localization of the vehicle, may cause the template to lock onto the road surface, and thus impairs accurate speed measurement. How to let the computer mimic human perception, eliminate the error introduced by vehicle shadows, and detect vehicle speed quickly and accurately is an active research topic.

(3) Unlike ordinary image-matching problems, the matching target, the vehicle, is in motion: its position relative to the fixed camera changes continuously, and its size and gray levels vary in complex ways with the viewing angle. Although the matching method in this thesis completes target matching in real time, how to extract salient object features from dynamically changing images and match the target accurately, so as to achieve applications such as speed computation, is another research focus.

Appendix A: Original Text of the Translated Paper

An Algorithm for Automatic Vehicle Speed Detection Using a Video Camera

ABSTRACT

This paper presents a new method based on digital image processing to realize real-time automatic vehicle speed monitoring using a video camera. Based on geometric optics, it first presents a simplified method to accurately map coordinates in the image domain into the real-world domain. The second part focuses on vehicle detection in the digital image frames of the video stream. Experiments show that the method requires only a single digital video camera and an on-board computer and can simultaneously monitor vehicle speeds in multiple lanes.
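The matching-then-speed pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: it assumes grayscale frames as NumPy arrays, a vertical (along-lane) search window at a virtual-loop position, and the sum of absolute differences as the matching criterion; the function names and parameters (`match_template`, `speed_kmh`, `metres_per_pixel`) are hypothetical.

```python
import numpy as np

def match_template(frame, template, row0, col, search_range):
    """Slide the template vertically within +/-search_range rows of row0
    and return the row offset with the smallest sum of absolute
    differences (pixel-level displacement along the lane)."""
    h, w = template.shape
    best_offset, best_score = 0, float("inf")
    for dy in range(-search_range, search_range + 1):
        r = row0 + dy
        if r < 0 or r + h > frame.shape[0]:
            continue  # window falls outside the frame
        patch = frame[r:r + h, col:col + w]
        score = np.abs(patch.astype(int) - template.astype(int)).sum()
        if score < best_score:
            best_score, best_offset = score, dy
    return best_offset

def speed_kmh(pixel_disp, metres_per_pixel, fps):
    """Convert a per-frame pixel displacement to km/h using the
    calibration result (metres per pixel at the loop position)."""
    return pixel_disp * metres_per_pixel * fps * 3.6
```

For example, a displacement of 2 pixels per frame with a scale of 0.05 m/pixel at 25 frames per second corresponds to 2 x 0.05 x 25 x 3.6 = 9 km/h.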
The detected vehicle speed's average error is below 4%.

Index Terms—digital image processing, vehicle speed detection, computer vision

I. INTRODUCTION

Vehicle speed monitoring is very important for enforcing speed-limit laws. It also indicates the traffic conditions on the monitored section of the road of interest. Traditionally, vehicle speed monitoring or detection is realized using radar technology, specifically the radar gun and radar detector. Radar is an electromagnetic pulse generated by a transmitter, which sends a radio-frequency pulse down the highway to reflect off a moving vehicle. The reflection induces a very slight frequency shift, called the Doppler shift. This frequency shift can be analyzed to determine the true speed of a moving vehicle. Radar has been used to monitor vehicle speed since the end of World War II and is now the only tool widely used by police to detect vehicle speed. But the use of radar in speed detection has its limits. A cosine error arises when the radar gun is not aimed along the direct path of the oncoming traffic. When the radar gun is located at the side of the road or above the road, the cosine error becomes a significant factor affecting its accuracy. For example, a 15° deviation from the direct path can cause the reading to be about 3% less than the real speed, and a 30° deviation causes about a 13% error in the speed reading. In addition, shadowing (radar-wave reflection from two different vehicles of different heights) and radio-frequency interference (error caused by the presence of similar RF bands in the environment) are two other important factors that cause errors in radar speed detection. Because of these errors, local police in the United States usually do not ticket a vehicle detected at 8 km/h or less over the speed limit.
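The cosine-error figures quoted above follow directly from the geometry: a radar aimed at angle θ off the vehicle's direction of travel reads v·cos θ instead of v. A small numeric check (illustrative only; the function name is not from the paper):

```python
import math

def cosine_error_percent(theta_deg):
    """Percentage by which the radar reading falls short of the true
    speed when the gun is aimed theta_deg off the direct path."""
    return (1.0 - math.cos(math.radians(theta_deg))) * 100.0

# At 15 degrees the reading is about 3.4% low; at 30 degrees about
# 13.4% low, consistent with the ~3% and ~13% figures in the text.
```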
Another disadvantage is that a radar sensor can track only one vehicle at a time.

In this paper, we present a new algorithm that takes advantage of digital image processing and camera optics to detect vehicle speed automatically and accurately in real time. The algorithm requires only a single video camera and an on-board processing computer, and it can simultaneously detect vehicle speeds in multiple lanes with high accuracy (less than 4% error) in real time. The algorithm only requires that the camera be set up directly above the target road section (at least 5 meters above the road to assure satisfactory accuracy) with its optical axis tilted a certain angle downward from the highway's forward direction. The calibration is very simple and is done directly on the video frames, based on the position of an easy-to-obtain vanishing point and on a vehicle's known length and width and its information (upper-edge position and lower-edge position) in a sample image. The calibration does not require any information about the cam