Keywords: Support Vector Machine (SVM)

I. INTRODUCTION

Face detection has received considerable attention in recent years. It is the first step in many applications such as face recognition, facial expression analysis, surveillance, security systems, and human-computer interface (HCI). The performance of these systems therefore depends on the efficiency of the face detection process. The main aim of face detection is to determine the locations of probable faces in an image. Face detection approaches can be classified into four categories: (i) knowledge-based, (ii) template matching, (iii) feature-based, and (iv) machine learning methods. Knowledge-based methods detect faces based on a set of rules that capture the relationships among facial features. Template matching methods measure the similarity between the input image and a face template. Feature-based methods use features such as color, shape, and texture to extract facial features and obtain face locations. Machine learning methods use techniques from statistical analysis and machine learning to find the relevant characteristics of faces and non-faces. Despite the notable successes achieved in past decades, making a trade-off between computational complexity and detection efficiency is still the main challenge.

This paper proposes a method for color face detection using the AdaBoost algorithm combined with skin color information and a support vector machine (SVM). The rest of this paper is organized as follows. Related work is reviewed in Section 2. The proposed face detection algorithm is described in Section 3. Experimental results are presented in Section 4, and finally Section 5 concludes the paper.

II. RELATED WORK

A. Face Detection Using AdaBoost

Viola and Jones proposed a well-known face detection algorithm in which a set of Haar-like features is used to construct a classifier. Every weak classifier applies a simple threshold to one of the extracted features. AdaBoost is then used to choose a small number of important features and combine them in a cascade structure that decides whether an image region is a face or a non-face.

1) Haar-like Features

A set of Haar-like features, used as the input features to the cascade classifier, is shown in Fig. 1. Computation of Haar-like features can be accelerated using an intermediate image representation called the integral image. The integral image at a given location is defined as the sum of all pixel values above and to the left of that location, including the pixel itself.

Figure 1. Example of Haar-like features

2) AdaBoost Learning

AdaBoost is an algorithm for constructing a composite classifier by sequentially training classifiers while putting more and more emphasis on certain patterns. It can be summarized as follows:

a) Consider example images $(x_1, y_1), \ldots, (x_L, y_L)$, where $y_i = 0$ and $y_i = 1$ stand for negative and positive examples, respectively.

b) Initialize the weights

$w_{1,i} = \begin{cases} \frac{1}{2m}, & y_i = 1 \\ \frac{1}{2n}, & y_i = 0 \end{cases}$   (1)

where m and n are the numbers of positive and negative examples, respectively, and L = m + n.

c) Do for t = 1, ..., T:

1. Normalize the weights:

$w_{t,i} \leftarrow \dfrac{w_{t,i}}{\sum_{j=1}^{L} w_{t,j}}$   (2)

2. For each feature j, train a classifier $h_j$ and calculate its error with respect to $w_t$ as

$\varepsilon_j = \sum_{i=1}^{L} w_{t,i}\,\lvert h_j(x_i) - y_i \rvert$   (3)

3. Choose the classifier $h_t$ with the lowest error $\varepsilon_t$.

4. Update the weights:

$w_{t+1,i} = w_{t,i}\,\beta_t^{\,1 - e_i}$   (4)

where $e_i = 0$ if example $x_i$ is classified correctly, $e_i = 1$ otherwise, and $\beta_t = \dfrac{\varepsilon_t}{1 - \varepsilon_t}$.

d) The final strong classifier is:

$H(x) = \begin{cases} 1, & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0, & \text{otherwise} \end{cases}$   (5)

where $\alpha_t = \log \dfrac{1}{\beta_t}$.
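As an illustration of the learning procedure in Eqs. (1)-(5), the following Python sketch runs the boosting loop with simple decision stumps over precomputed feature values (for example, Haar-like feature responses). It is only a sketch under these assumptions, not the implementation used in this paper; the helper names train_stump, adaboost, and strong_classify and the use of NumPy are illustrative choices.

import numpy as np

def train_stump(f, y, w):
    # Weighted decision stump on one feature column: pick the threshold and
    # polarity that minimize the weighted error of Eq. (3) by direct search
    # (slow but simple, for clarity only).
    best = (np.inf, 0.0, 1)
    for thr in np.unique(f):
        for pol in (+1, -1):
            h = (pol * f > pol * thr).astype(int)
            err = np.sum(w * np.abs(h - y))
            if err < best[0]:
                best = (err, thr, pol)
    return best[1], best[2]

def stump_predict(f, thr, pol):
    # Weak classifier: h(x) = 1 when pol * f(x) > pol * thr, else 0.
    return (pol * f > pol * thr).astype(int)

def adaboost(features, labels, T):
    # features: (L, num_features) matrix of feature values; labels: 0/1 array.
    m = np.sum(labels == 1)                                 # number of positives
    n = np.sum(labels == 0)                                 # number of negatives
    w = np.where(labels == 1, 1.0 / (2 * m), 1.0 / (2 * n))           # Eq. (1)
    chosen = []
    for t in range(T):
        w = w / w.sum()                                                # Eq. (2)
        best = None
        for j in range(features.shape[1]):
            thr, pol = train_stump(features[:, j], labels, w)
            h = stump_predict(features[:, j], thr, pol)
            eps = np.sum(w * np.abs(h - labels))                       # Eq. (3)
            if best is None or eps < best[0]:
                best = (eps, j, thr, pol, h)
        eps, j, thr, pol, h = best                  # classifier with lowest error
        eps = min(max(eps, 1e-10), 1 - 1e-10)
        beta = eps / (1.0 - eps)
        e = (h != labels).astype(int)               # e_i = 0 if classified correctly
        w = w * beta ** (1 - e)                                        # Eq. (4)
        chosen.append((j, thr, pol, np.log(1.0 / beta)))    # alpha_t = log(1/beta_t)
    return chosen

def strong_classify(x, chosen):
    # Eq. (5): weighted vote of the selected weak classifiers for one example x.
    score = sum(a * stump_predict(np.array([x[j]]), thr, pol)[0]
                for j, thr, pol, a in chosen)
    return int(score >= 0.5 * sum(a for _, _, _, a in chosen))

A cascade stage as described next can be obtained by running this loop and then lowering the decision threshold of H(x) until the stage reaches the required detection rate on the positive samples, as outlined in the stage-training steps later in the text.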
3) Detection Cascade

In order to greatly improve the computational efficiency and also to reduce the false positive rate, a sequence of increasingly more complex classifiers, called a cascade, is built. Fig. 2 shows the cascade.

Figure 2. Schematic depiction of a detection cascade.

Every stage of the cascade either rejects the analyzed window or passes it to the next stage. Only the last stage may finally accept the window. So, to be accepted, a window must pass through the whole cascade, but rejection may happen at any stage. During detection, most sub-windows of the analyzed image are very easy to reject, so they are rejected at an early stage and do not have to pass through the whole cascade. The stages of the cascade are constructed by training classifiers using AdaBoost.

B. Skin Color Detection

Color is a powerful and fundamental cue of human faces. The distribution of skin color clusters in a small region of the chromatic color space. Skin color can be used as complementary information to other features (such as shape and geometry) to build more accurate face detection methods. The primary step of skin color detection in an image is to choose a suitable color space. There are many color spaces that can be used for this purpose.

Cascade Classifier Structure

For the classifier of each stage, the AdaBoost algorithm is used to train a single-stage classifier with a predetermined detection rate and false alarm rate. The steps are as follows:

(1) Input: the training samples, the specified detection rate and false alarm rate, and the weak classifier learning algorithm.

(3) While the current strong classifier does not yet satisfy the specified rates, execute:
a. Use the AdaBoost algorithm to add one weak classifier, obtaining the current strong classifier.
b. Adjust the threshold b of the strong classifier so that its detection rate on the positive training samples reaches the specified value; estimate the false alarm rate of the current classifier on the negative training samples; if the specified false alarm rate is reached, return.

6 Program Implementation Based on OpenCV

Introduction to OpenCV

OpenCV is an open-source computer vision library sponsored by Intel [17]. It provides a cross-platform mid- and high-level API comprising more than 300 C/C++ functions. OpenCV is free (FREE) for both non-commercial and commercial applications. If IPP libraries optimized for a particular processor are available, OpenCV automatically loads them at run time. OpenCV supports image data manipulation, image/video input and output, matrix/vector operations and linear algebra, basic image processing, and a variety of dynamic data structures.

In OpenCV there is already a face detection classifier based on Haar features and trained with the AdaBoost algorithm. Both of these detection functions are implemented on top of a single face classifier; its training set contains 2706 face samples and 4381 non-face samples, and tilted faces are also treated as non-faces in this database.

Classifier Implementation Based on OpenCV
Creating the Sample Description Files

In order to experience the classifier training process in OpenCV, I trained a classifier myself using the MIT sample set. However, even though the training used only 14 stages, it took a full 20 hours on a laptop with a 2 GHz Core 2 Duo processor and 2 GB of RAM, so I believe that obtaining an excellent classifier requires a large and representative sample database as well as a more powerful processor.

First, DOS commands are used to generate the sample description files. For the negative samples, change to the negative sample directory with cd D:\face\negdata, then enter dir /b, write the listing to a file, and delete the entries that do not belong; this generates the negative sample description file. The conversion steps for the positive samples are as follows:

- Create a positive sample description file, which records the file name of each positive sample, the number of positive samples, and the position and size of each positive sample in its image.

After all the positive samples have been named in the manner described above, enter the following in a DOS environment:

D:\Program Files\OpenCV\bin\ -info posdata\ -vec data\ -num 2706 -w 20 -h 20

When the command finishes, a *.vec file is generated under D:\face\data. At this point, the description files for the positive and negative samples have been generated and the preparation for training is complete. The source code of this program is included with OpenCV, and the executable is located in the bin directory of the OpenCV installation directory.

After training finishes, a number of subdirectories are generated under the data directory; these are the strong classifiers of the individual stages. The XML file generated at the end is the cascaded strong classifier, and it is the file that the face detection program must load (a usage sketch is given at the end of this section).

My algorithm is implemented in OpenCV, while Omid's algorithm is implemented in MATLAB; both use the MIT sample database as the training set, and the hardware platform is a laptop with a 2 GHz Core 2 Duo processor and 2 GB of RAM.

Skin color, the non-rigid nature of faces, gender, age, whether glasses are worn, and the lighting environment can all affect the result of face detection. We also make a suggestion concerning the construction of the weak classifiers: the original single adaptive threshold is replaced by two thresholds, an upper and a lower bound. At the end of this article, the applications of the AdaBoost algorithm in OpenCV and the corresponding test results are described in detail. When setting the threshold of a weak classifier, we should consider not only the positive samples but also the negative samples. Of course, mature methods from well-developed applications such as speech and fingerprint recognition could also be introduced into face detection systems.
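As a usage illustration of the cascaded XML classifier mentioned above, the sketch below loads a trained cascade with OpenCV's Python bindings and scans an image for faces. The file names, the cv2 (Python) API, and the parameter values are assumptions made for this example; the original setup described above used the OpenCV C/C++ interface.

import cv2

# Load the cascaded strong classifier produced by training (the XML file),
# or one of the Haar cascades that ship with OpenCV. The path is an example.
cascade = cv2.CascadeClassifier("data/cascade.xml")
if cascade.empty():
    raise IOError("could not load the cascade XML file")

image = cv2.imread("test.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)            # reduce the influence of lighting

# Sub-windows are scanned at multiple scales; each one is passed through the
# cascade and rejected as soon as a stage fails, as depicted in Fig. 2.
faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,      # scale step of the image pyramid
    minNeighbors=3,       # overlapping detections required to keep a face
    minSize=(20, 20),     # matches the 20x20 training window
)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("result.jpg", image)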