Application of Neural Networks in the Study of the Wetting Characteristics of Lead-Free Micro-Joints in Electronic Components

… regression analysis is described in Sec. V. Finally, Sec. VI discusses the real-time implementation of the tracking application.

II. DESIGN OF THE NEURAL NETWORK FOR OT

Design of a neural network involves the selection of its model, architecture, learning algorithm, and activation functions for its neurons according to the needs of the application. The objective of our application is to locate a specific airplane in the frames grabbed from a movie clip playing at the speed of 25 frames/second.

A. Selection of the ANN Model

The application at hand, for which a neural network is to be designed, is a kind of function approximation problem. It may be noted that a backpropagation neural network (BPNN) with one (or more) sigmoid-type hidden layer(s) and a linear output layer can approximate any arbitrary (linear or nonlinear) function [2]. The number of hidden layers is normally chosen to be only one, to reduce the network complexity and increase the computational efficiency [3]. Thus, a BPNN is selected for the application at hand, and it consists of three layers: one input layer (of source nodes), one hidden layer (with tangent hyperbolic sigmoid activation function), and one output layer (with pure linear activation function), as shown in Fig. 1.

B. Input Layer

The input layer of a neural network is determined from the characteristics of the application inputs. There are 320x240 (i.e. 76800) pixels in each frame coming from a movie (or camera). Each pixel contains three elements (red, green, and blue components), so the total number of elements in a frame is 3x76800 (i.e. 230400). If all these elements were fed directly into the neural network, it would be almost impossible to process the image in real time on a standard PC. Therefore, a preprocessing stage must be incorporated to reduce the size and dimensionality of the input pattern.

Firstly, the color frame is converted into a gray-level image using the following expression for every pixel [4]:

y = 0.299r + 0.587g + 0.114b (1)

where y is the gray-level value of the pixel in the output image, and r, g, and b are the red, green, and blue components of the pixel in the input color image, respectively. The values of y, r, g, and b are in the range [0, 255].

Secondly, the gray-level image is downsampled by extracting the 1st, 5th, 9th, etc. rows and columns, while skipping all other rows and columns. The size of the image is thus reduced to 80x60 (a reduction factor of 4 in both the number of rows and the number of columns), and the total number of elements drops from 230400 to only 4800, an overall reduction factor of 48 (i.e. 3x4x4).

Thirdly, the data of the downsampled image is normalized, so that the value of each element lies in the range [0, 1] instead of [0, 255], for fast convergence during the training phase of the ANN. The normalization is done using (2):

yn = y / 255 (2)

where yn is the normalized value.

Finally, the resulting image matrix is reshaped into a standard pattern (column vector) by concatenating the rows of the image matrix and then transposing the resulting long row vector into a 4800-element column vector. Therefore, the number of input nodes in the proposed BPNN becomes 4800.
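The whole preprocessing pipeline fits in a few lines of MATLAB. The sketch below is illustrative only (the paper describes the steps but gives no code); the function name preprocess_frame is an assumption, and the grayscale coefficients are the standard luminance weights reconstructed for Eq. (1):

    % Hedged sketch of the preprocessing stage of Sec. II.B (not from the paper).
    % Maps one 240x320x3 RGB frame (uint8, values 0..255) to a 4800x1 input pattern.
    function p = preprocess_frame(frame)
        f  = double(frame);                                      % promote to double for arithmetic
        y  = 0.299*f(:,:,1) + 0.587*f(:,:,2) + 0.114*f(:,:,3);   % grayscale conversion, Eq. (1)
        y  = y(1:4:end, 1:4:end);                                % keep rows/cols 1,5,9,... -> 60x80
        yn = y / 255;                                            % normalize to [0, 1], Eq. (2)
        p  = reshape(yn.', [], 1);                               % concatenate rows -> 4800x1 column
    end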
C. Hidden Layer

The hidden layer automatically extracts the features of the input pattern [3] and reduces its dimensionality further. There is no definite formula for determining the number of hidden neurons, so in this research a trial-and-error method was used to choose the number of neurons in the single hidden layer. It was found that only 50 hidden neurons could accomplish the task at hand quite reasonably.

The tangent hyperbolic activation function was chosen for the hidden layer after comparing its convergence results with those of the logistic sigmoid function. The tangent hyperbolic function and its fast approximation [5] are given in (3):

ai1 = tanh(ni1) = (exp(ni1) - exp(-ni1)) / (exp(ni1) + exp(-ni1)) ≈ 2 / (1 + exp(-2ni1)) - 1 (3)

where ai1 is the ith element of the vector a1 containing the outputs of the hidden neurons, and ni1 is the ith element of the vector n1 containing the net inputs going into the hidden neurons. The vector n1 is calculated as:

n1 = W10p + b1 (4)

where p is the input pattern, b1 is the vector of bias weights on the hidden neurons, and W10 is the weight matrix between the 0th (i.e. input) layer and the 1st (i.e. hidden) layer. Each row of W10 contains the synaptic weights of the corresponding hidden neuron.

D. Output Layer

The output layer of the network is designed according to the needs of the application output. Since the neural network is expected to produce the row and column coordinates of the target (with respect to the top-left pixel position), the number of output neurons is two. Since the frame size is 320x240, the values of the row and column coordinates of the target lie in the ranges [0, 240] and [0, 320], respectively. Thus, the pure linear activation function is selected for the output neurons, expressed as:

a2 = n2 (5)

where a2 is the column vector coming from the second (i.e. output) layer, and n2 is the column vector containing the net inputs going into the output layer. n2 is calculated as:

n2 = W21a1 + b2 (6)

where W21 is the synaptic weight matrix between the first (i.e. hidden) layer and the second (i.e. output) layer, and b2 is the column vector containing the bias inputs of the output neurons. Each row of the W21 matrix contains the synaptic weights of the corresponding output neuron.

The designed architecture of the proposed BPNN is shown in Fig. 1. The dimensions of the vectors and matrices are shown under their names, where m0 (= 4800) is the number of input nodes, m1 (= 50) is the number of hidden neurons, and m2 (= 2) is the number of output neurons.

Fig. 1 Architecture of the proposed neural network

III. TRAINING THE NEURAL NETWORK

The training of the neural network was performed using MATLAB and its Neural Network Toolbox. Before training a neural network, a dataset must be prepared on which the network is to be trained. The BPNN is trained in a supervised manner, so the target (desired output) for every traini…
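Putting Secs. II.B through II.D together, running the designed 4800-50-2 network on one preprocessed pattern reduces to Eqs. (3)-(6). A minimal MATLAB sketch, assuming the trained weights W10 (50x4800), b1 (50x1), W21 (2x50), and b2 (2x1) are available and p is a 4800x1 pattern from the preprocessing stage:

    n1 = W10*p + b1;                 % net inputs to the hidden layer, Eq. (4)
    a1 = 2 ./ (1 + exp(-2*n1)) - 1;  % fast form of tanh, Eq. (3)
    n2 = W21*a1 + b2;                % net inputs to the output layer, Eq. (6)
    a2 = n2;                         % pure linear output, Eq. (5): [row; col] of the target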
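The paper's training script is not reproduced here, but the setup it describes maps naturally onto the legacy Neural Network Toolbox API. A hedged sketch, assuming P is a 4800xN matrix of training patterns, T is the corresponding 2xN matrix of target [row; col] coordinates, and p is one new pattern; the epoch count is illustrative, not taken from the paper:

    % Build and train the 4800-50-2 BPNN of Fig. 1 (legacy newff/train/sim API).
    net = newff(minmax(P), [50 2], {'tansig', 'purelin'});
    net.trainParam.epochs = 1000;    % assumed stopping criterion, not from the paper
    net = train(net, P, T);          % supervised backpropagation training
    coords = sim(net, p);            % estimated [row; col] coordinates for pattern p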