

Design and Analysis of a Vision-Based Mobile Robot: Graduation Design Thesis

  

Appendix: Foreign-Language Material

Typically, an image-processing application consists of five steps. First, an image must be acquired. A digitized representation of the image is necessary for further processing. This is denoted with a two-dimensional function I(x, y) that is described with an array; x marks a column and y a row of the array. The domain for x and y depends on the maximal resolution of the image.
If the image has size n × m, where n represents the number of rows and m the number of columns, then it holds for x that 0 ≤ x < m and, analogously for y, that 0 ≤ y < n; x and y are positive integers or zero. A bound also holds for the range of I(x, y): if I_max denotes the maximal function value, then 0 ≤ I(x, y) ≤ I_max. Every possible discrete function value represents a gray value and is called a pixel.

Subsequent preprocessing tries to eliminate disturbing effects. Examples are inhomogeneous illumination, noise, and movement detection. If image-preprocessing algorithms like movement detection are applied to an image, it is possible that image pixels of different objects with different properties are merged into regions, because they fulfill the criteria of the preprocessing algorithm. Therefore, a region can be considered as an accumulation of coherent pixels that need not have any similarities. These image regions, or the whole image, can be decomposed into segments; all pixels contained in a segment must be similar.

Pixels are assigned to objects in the segmentation phase, which is the third step. Once objects are isolated from the remainder of the image in the segmentation phase, feature values of these objects must be acquired in the fourth step. The features determined are used in the fifth and last step to perform the classification. This means that the detected objects are allocated to an object class if their measured feature values match the object description. Examples of features are the object height, object width, compactness, and circularity. A circular region has a compactness of one. Altering the region's length alters the compactness value: the compactness becomes larger if the region's length rises. An empty region has the value zero for the compactness.

Color Models

The process of vision by a human being is also controlled by colors. This happens subconsciously with signal colors.
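The compactness feature described above can be sketched with one common definition, c = P² / (4πA), where P is the region's perimeter and A its area. This matches the text's behavior (1 for a circle, larger for elongated regions, 0 for an empty region), but the exact formula and function name below are assumptions, since the text does not state the formula explicitly.

```python
import math

def compactness(perimeter: float, area: float) -> float:
    """c = P^2 / (4*pi*A): 1 for a circle, larger for elongated
    regions; an empty region (P = A = 0) is assigned 0."""
    if area == 0:
        return 0.0
    return perimeter ** 2 / (4 * math.pi * area)

# A circle of radius r has P = 2*pi*r and A = pi*r^2, so c = 1.
r = 5.0
print(round(compactness(2 * math.pi * r, math.pi * r ** 2), 6))  # → 1.0

# An elongated 10 x 1 rectangle (P = 22, A = 10) has c > 1.
print(compactness(22.0, 10.0) > 1.0)                             # → True
```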
But a human being searches in some situations directly for specified colors to solve a problem. The color attribute of an object can also be used in computer vision, and this knowledge can help to solve a task. For example, a computer-vision application that is developed to detect people can use knowledge about the color of the skin for the detection. This can cause ambiguity in some situations: a human being who walks beside a carton is difficult to detect in an image if the carton has a color similar to that of the skin. But there are more problems. The color attributes of objects can be affected by other objects due to light reflections from those objects. Also, the colors of different objects that belong to the same class can vary. For example, a European has a different skin color from an African, although both belong to the class "human being".

Color attributes like hue, saturation, intensity, and spectrum can be used to identify objects by their color. Alterations of these parameters can produce different reproductions of the same object. This is often very difficult to handle in computer-vision applications, whereas for a human being such alterations are, as a rule, no problem for recognition, or only a small one. The selection of an appropriate color space can help in computer vision. Several color spaces exist; two often-used color spaces, RGB and YUV, are now depicted.

The RGB color space consists of three color channels: the red, green, and blue channels. Every color is represented by its red, green, and blue parts. This coding follows the three-color theory of Gauss. A pixel's color part in a channel is often measured within the interval [0, 255]; therefore, a color image consists of three gray images. The RGB color space is not very stable with regard to alterations in the illumination, because the representation of a color in the RGB color space contains no separation between the illumination and the color parts.
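The instability of RGB under illumination changes can be illustrated with a minimal sketch: dimming a scene scales all three channels at once, so no single channel isolates the brightness. The `dim` helper and the sample pixel value are hypothetical, chosen only for illustration.

```python
def dim(pixel, factor):
    """Scale an RGB pixel uniformly, clamping each channel to [0, 255]."""
    return tuple(min(255, max(0, round(c * factor))) for c in pixel)

skin = (224, 172, 105)   # hypothetical skin-tone sample
half = dim(skin, 0.5)    # halving the light changes all three channels
print(half)              # → (112, 86, 52)
```

Because brightness is entangled with the color parts like this, a skin-color detector tuned on RGB values under one illumination may miss the same skin under another.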
If a computer-vision application that performs image analysis on color images is to be robust against alterations in illumination, the YUV color space could be a better choice, because the color parts and the illumination are represented separately. The color representation happens with only two channels, U and V; the Y channel measures the brightness. The conversion between the RGB and the YUV color spaces happens with a linear transformation, which yields the following equations:

    Y = 0.299·R + 0.587·G + 0.114·B
    U = 0.492·(B - Y)
    V = 0.877·(R - Y)
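The linear transformation above can be written directly in code. The sketch below assumes the standard BT.601-style coefficients, since the numeric coefficients were lost from the scanned equation.

```python
def rgb_to_yuv(r: float, g: float, b: float):
    """Linear RGB -> YUV conversion (BT.601-style coefficients assumed)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Y: brightness only
    u = 0.492 * (b - y)                     # U, V: color parts only
    v = 0.877 * (r - y)
    return y, u, v

# A gray pixel carries no color information, so U and V are (near) zero.
y, u, v = rgb_to_yuv(128, 128, 128)
print(round(y, 3), round(u, 3), round(v, 3))
```

Note how a pure change of brightness moves only Y for gray pixels, which is exactly the separation the text credits YUV with.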