Face Recognition English Source Text (for Chinese Translation) (Updated Version)


Images can be characterized directly in terms of pixel intensities. These images can be characterized by probabilistic models of the set of face images [4, 13, 15], or implicitly by neural networks or other mechanisms [3, 12, 14, 19, 21, 23, 25, 26]. The parameters for these models are adjusted either automatically from example images (as in our work) or by hand. A few authors have taken the approach of extracting features and applying either manually or automatically generated rules for evaluating these features [7, 11].

Training a neural network for the face detection task is challenging because of the difficulty in characterizing prototypical “nonface” images. Unlike face recognition, in which the classes to be discriminated are different faces, the two classes to be discriminated in face detection are “images containing faces” and “images not containing faces”. It is easy to get a representative sample of images which contain faces, but much harder to get a representative sample of those which do not. We avoid the problem of using a huge training set for nonfaces by selectively adding images to the training set as training progresses [21]. This “bootstrap” method reduces the size of the training set needed. The use of arbitration between multiple networks and heuristics to clean up the results significantly improves the accuracy of the detector.

Detailed descriptions of the example collection and training methods, network architecture, and arbitration methods are given in Section 2. In Section 3, the performance of the system is examined. We find that the system is able to detect % of the faces over a test set of 130 complex images, with an acceptable number of false positives. Section 4 briefly discusses some techniques that can be used to make the system run faster, and Section 5 compares this system with similar systems. Conclusions and directions for future research are presented in Section 6.

2 Description of the System

Our system operates in two stages: it first applies a set of neural network-based filters to an image, and then uses an arbitrator to combine the outputs. The filters examine each location in the image at several scales, looking for locations that might contain a face. The arbitrator then merges detections from individual filters and eliminates overlapping detections.

Stage One: A Neural Network-Based Filter

The first component of our system is a filter that receives as input a 20x20 pixel region of the image, and generates an output ranging from 1 to -1, signifying the presence or absence of a face, respectively. To detect faces anywhere in the input, the filter is applied at every location in the image. To detect faces larger than the window size, the input image is repeatedly reduced in size (by subsampling), and the filter is applied at each size. This filter must have some invariance to position and scale. The amount of invariance determines the number of scales and positions at which it must be applied. For the work presented here, we apply the filter at every pixel position in the image, and scale the image down by a factor of 1.2 for each step in the pyramid.
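The scanning procedure described above can be made concrete with a short sketch. The code below is not taken from the paper; it is a minimal Python/NumPy illustration in which classify_window is a hypothetical stand-in for the trained filter network (any callable that takes a 20x20 array and returns a score, with values above the threshold treated as detections), and the nearest-neighbor subsampling is a simplification of whatever downscaling the original system used.

```python
import numpy as np

WINDOW = 20        # filter input size in pixels (20x20, as described above)
SCALE_STEP = 1.2   # factor by which the image shrinks at each pyramid level


def downscale(image: np.ndarray, factor: float) -> np.ndarray:
    """Shrink the image by `factor` using simple nearest-neighbor subsampling."""
    h, w = image.shape
    rows = (np.arange(int(h / factor)) * factor).astype(int)
    cols = (np.arange(int(w / factor)) * factor).astype(int)
    return image[np.ix_(rows, cols)]


def scan_pyramid(image: np.ndarray, classify_window, threshold: float = 0.0):
    """Apply the window classifier at every position and scale.

    Returns a list of (x, y, scale) tuples in original-image coordinates,
    one for each window whose classifier output exceeds `threshold`.
    """
    detections = []
    level = image.astype(float)
    scale = 1.0
    while min(level.shape) >= WINDOW:
        for y in range(level.shape[0] - WINDOW + 1):
            for x in range(level.shape[1] - WINDOW + 1):
                window = level[y:y + WINDOW, x:x + WINDOW]
                if classify_window(window) > threshold:
                    # Map the detection back to the original image.
                    detections.append((int(x * scale), int(y * scale), scale))
        level = downscale(level, SCALE_STEP)
        scale *= SCALE_STEP
    return detections
```

In the full system each window would be preprocessed (the lighting correction and histogram equalization described next) before being passed to the network, and the raw detections would then go to the arbitration stage.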
The filtering algorithm is shown in Fig. 1. First, a preprocessing step, adapted from [21], is applied to a window of the image. The window is then passed through a neural network, which decides whether the window contains a face. The preprocessing first attempts to equalize the intensity values across the window. We fit a function which varies linearly across the window to the intensity values in an oval region inside the window. Pixels outside the oval (shown in Fig. 2a) may represent the background, so those intensity values are ignored in computing the lighting variation across the face. The linear function will approximate the overall brightness of each part of the window, and can be subtracted from the window to compensate for a variety of lighting conditions. Then histogram equalization is performed, which nonlinearly maps the intensity values to expand the range of intensities in the window. The histogram is computed for pixels inside an oval region in the window. This compensates for differences in camera input gains, as well as improving contrast in some cases. The preprocessing steps are shown in Fig. 2.

The preprocessed window is then passed through a neural network. The network has retinal connections to its input layer.

[...] 20x20 pixel windows examined in Test Set 1. Figs. 11, 12, and 13 show example output images from System 11 on images from Test Set 1.

4 Improving the Speed

In this section, we briefly discuss some methods to improve the speed of the system. The work described is preliminary, and is not intended to be an exhaustive exploration of methods to optimize the execution time.

Further performance improvements can be made if one is analyzing many pictures taken by a stationary camera. By taking a picture of the background scene, one can determine which portions of the picture have changed in a newly acquired image, and analyze only those portions of the image (a brief sketch of this change-detection idea appears at the end of this excerpt). Similarly, a skin color detector like the one presented in [9] can restrict the search region. These techniques, taken together, have proven useful in building an almost real-time version of the system suitable for demonstration purposes, which can process a 320x240 image in 2 to 4 seconds, depending on the image complexity.

5 Comparison to Other Systems

Sung and Poggio developed a face detection system based on clustering techniques [21]. Their system, like ours, passes a small window over all portions of the image, and determines whether a face exists in each window.
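As a companion to the change-detection idea in Section 4 above, here is a minimal sketch of restricting the search to the part of a new frame that differs from a stored background image. It is an illustrative Python/NumPy fragment, not the paper's implementation: the difference threshold, the padding margin, and the scan_fn argument (for example, the scan_pyramid sketch shown earlier) are all assumptions introduced for the example.

```python
import numpy as np

DIFF_THRESHOLD = 15.0  # minimum per-pixel intensity change to count as "changed" (illustrative)
MARGIN = 20            # pad the changed region so whole faces fit inside the crop (illustrative)


def changed_region(background: np.ndarray, frame: np.ndarray):
    """Return the bounding box (top, bottom, left, right) of pixels that differ
    from the stored background image, or None if nothing changed."""
    diff = np.abs(frame.astype(float) - background.astype(float)) > DIFF_THRESHOLD
    if not diff.any():
        return None
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    top = max(rows[0] - MARGIN, 0)
    bottom = min(rows[-1] + 1 + MARGIN, frame.shape[0])
    left = max(cols[0] - MARGIN, 0)
    right = min(cols[-1] + 1 + MARGIN, frame.shape[1])
    return top, bottom, left, right


def detect_in_changed_area(background, frame, scan_fn):
    """Run the window/pyramid scan only over the changed part of the frame.

    `scan_fn` takes a 2-D image array and returns (x, y, scale) detections
    in that array's own coordinates.
    """
    box = changed_region(background, frame)
    if box is None:
        return []
    top, bottom, left, right = box
    detections = scan_fn(frame[top:bottom, left:right])
    # Offset detections back into full-frame coordinates.
    return [(x + left, y + top, s) for (x, y, s) in detections]
```

Only the cropped region is scanned, so a frame identical to the background costs almost nothing, which is the effect the section above describes for a stationary camera.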