
Leaves Classification and Leaf Mass Estimation (Second Prize, U.S. Mathematical Contest in Modeling for undergraduates; edited draft)

2024-09-24 18:02
 

NN to classify tree leaves

As for classification, a neural network model is well able to reach a fairly ideal conclusion. To distinguish leaf shape patterns from each other, the neural network (NN) model is optimal. Through repeated training on study samples we adjust the weights $w(i)$ accordingly; eventually our model becomes "smart" enough to identify different leaf shapes. A leaf sample is characterized by the 8 features mentioned above. To explain the model, we separate the NN into three parts.

Neuromime

The following graph shows the basic unit of the NN.

Figure: neuromime

The combined input signal is:

$u_k = \sum_{j=1}^{p} w_{kj} x_j$  (1)

Team 15263 Page 4 of 23

where $w$ is the weight and $x_j$ is the input node value. Then

$v_k = u_k - \theta_k$  (2)

where $\theta_k$ is the threshold value, and

$y_k = \varphi(v_k)$  (3)

where $\varphi(\cdot)$ is the activation function and $y_k$ is the output of a neuron in the successive layer. The activation function $\varphi(v)$ is a nonlinear function given by:

$\varphi(v) = \dfrac{1}{1 + \exp(-v)}$  (4)

Multilayer perceptron network

This is the main structure of the NN.

Figure: Multilayer perceptron network

The structure of the artificial neural network (ANN) in this work contains three layers: the input, hidden and output layers, as shown in the figure. We use the input layer to feed in the characteristics of the leaves. The layers contain $i$, $j$ and $k$ nodes respectively; a node is also called a neuron or unit. This study summarized eight factors for the ANN input, that is to say $i = 8$. The input units are sawtooth number, petiole length, blade length, blade width, blade thickness, leaf area and circular degree. For the hidden layer we take $j = 3$. The function of the output layer is to output classification information corresponding to the input data; the value of $k$ depends on the number of leaf types we need to identify. We denote by $w_{jk}$ the numerical weights between the input and hidden layers and by $w_{ij}$ those between the hidden and output layers, as also shown in the figure.
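The forward pass of the 8-3-$k$ sigmoid network described above can be sketched as follows. This is a minimal illustration, not the team's implementation: the weight values are random and the choice of $k = 5$ output classes is an assumption made for the example.

```python
import numpy as np

def sigmoid(v):
    # Activation function phi(v) = 1 / (1 + exp(-v)), equation (4)
    return 1.0 / (1.0 + np.exp(-v))

def forward(features, w_hidden, w_output):
    # Hidden state: H_j = phi(sum_k w_jk * I_k), equations (5)-(6)
    hidden = sigmoid(w_hidden @ features)
    # Network output: O_i = phi(sum_j w_ij * H_j), equations (7)-(8)
    return sigmoid(w_output @ hidden)

rng = np.random.default_rng(0)
features = rng.random(8)            # the 8 leaf features (illustrative values)
w_hidden = rng.normal(size=(3, 8))  # input -> hidden weights, j = 3 hidden nodes
w_output = rng.normal(size=(5, 3))  # hidden -> output weights, k = 5 classes (assumed)
output = forward(features, w_hidden, w_output)
print(output)                       # 5 activations in (0, 1), one per candidate class
```

Because every activation is a sigmoid, each output component lies strictly between 0 and 1 and can be compared against an ideal 0/1 pattern.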
In fact, for a sample $s$, the input of the hidden layer is:

$h_j^s = \sum_{k=1}^{8} w_{jk} I_k^s$  (5)

The corresponding output state is:

$H_j^s = \varphi(h_j^s) = \varphi\!\left(\sum_{k=1}^{8} w_{jk} I_k^s\right)$  (6)

Therefore, the superimposed signal received by output node $i$ is:

$h_i^s = \sum_{j=1}^{3} w_{ij} H_j^s = \sum_{j=1}^{3} w_{ij}\, \varphi\!\left(\sum_{k=1}^{8} w_{jk} I_k^s\right)$  (7)

The final output of the network is:

$O_i^s = \varphi(h_i^s) = \varphi\!\left(\sum_{j=1}^{3} w_{ij} H_j^s\right) = \varphi\!\left(\sum_{j=1}^{3} w_{ij}\, \varphi\!\left(\sum_{k=1}^{8} w_{jk} I_k^s\right)\right)$  (8)

We hope the final output is ideal. For example, after learning a maple leaf's features, if the output takes the form $(1,0,1,0,1,1,0)$, we call such an output the ideal output, denoted $\{T_i^s\}$.

Figure: Different types of shapes. (a) Linear. (b) Lanceolate. (c) Oblanceolate. (d) Spatulate. (e) Ovate. (f) Obovate. (g) Elliptic. (h) Oblong. (i) Deltoid. (j) Reniform. (k) Orbicular. (l) Peltate. (m) Perfoliate. (n) Connate.

Backpropagation

In order to minimize the differences between the actual output and the desired output, we choose the BP algorithm, which is one part of the NN. As set forth, the error obtained when training a pair (pattern), consisting of both the input and the output given to the network, is:

$E(w) = \frac{1}{2} \sum_{s,i} \left(T_i^s - O_i^s\right)^2$  (9)

where $T_i^s$ is the $i$-th component of the desired output vector and $O_i^s$ is the calculated output of the $i$-th neuron in the output layer. Combining (8) with (9), we can draw:

$E(W) = \frac{1}{2} \sum_{s,i} \left[ T_i^s - \varphi\!\left(\sum_{j=1}^{3} w_{ij}\, \varphi\!\left(\sum_{k=1}^{8} w_{jk} I_k^s\right)\right) \right]^2$  (10)

This is a nonlinear function which is continuously differentiable. To obtain the minimum point and value, the most convenient way is to use the steepest descent method to minimize $E(W)$: whenever $E(W_1) < E(W_0)$, we move toward the ideal values of the variables $w_{ij}$ and $w_{jk}$.

NN's use to classify leaves

Through the NN, we single out several model leaves, then group and number them. By learning each group, the NN becomes acquainted with each model.
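The steepest-descent training described above can be sketched as a generic one-sample backpropagation step for the sigmoid network. The learning rate, weight shapes, and target vector below are chosen for illustration and are not taken from the paper.

```python
import numpy as np

def sigmoid(v):
    # phi(v) = 1 / (1 + exp(-v))
    return 1.0 / (1.0 + np.exp(-v))

def train_step(x, target, w_hidden, w_output, lr=0.5):
    # Forward pass through the 8-3-k network
    hidden = sigmoid(w_hidden @ x)
    output = sigmoid(w_output @ hidden)
    # Squared error E = 1/2 * sum_i (T_i - O_i)^2, equation (9)
    error = 0.5 * np.sum((target - output) ** 2)
    # Backpropagated deltas; the sigmoid derivative is phi(v) * (1 - phi(v))
    delta_out = (output - target) * output * (1.0 - output)
    delta_hid = (w_output.T @ delta_out) * hidden * (1.0 - hidden)
    # Steepest-descent weight updates (performed in place)
    w_output -= lr * np.outer(delta_out, hidden)
    w_hidden -= lr * np.outer(delta_hid, x)
    return error

rng = np.random.default_rng(1)
x = rng.random(8)                             # 8 leaf features (illustrative)
target = np.array([1.0, 0.0, 1.0, 0.0, 1.0])  # an "ideal output" pattern (illustrative)
w_hidden = rng.normal(size=(3, 8))
w_output = rng.normal(size=(5, 3))
errors = [train_step(x, target, w_hidden, w_output) for _ in range(200)]
print(errors[0], errors[-1])  # the error shrinks, i.e. E(W_1) < E(W_0)
```

Repeating the step drives $E(W)$ downhill along its gradient; a full implementation would loop over all training samples rather than a single pattern.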
When we want to classify a leaf, we let the NN solve the problem; eventually we classify the leaf as the model it most resembles.

Studying the reasons for the various shapes of leaves

Leaves have a variety of forms. Many reasons account for leaves varying in shape and size; overall, they can be divided into external and internal factors.

External factors:
- Seasons and climate (including wind, sunlight, moisture, temperature);
- Plant diseases and insect pests;
- Artificial factors.

Internal factors:
- Deformation of cells: moisture loss of mesophyll cells may cause a volume decrease;
- The phytohormone auxin;
- Gene differences.

We believe there exist four basic factors that lead to the variety of leaf shapes: climate, disease, phytohormone and gene, and we endeavor to find out how each acts:
- Climate: changes in sunshine, water, temperature and humidity alter leaf shape.
- Disease: by affecting the activity of enzymes, it influences leaf shape.
- Phytohormone (auxin): has an influence on gene expression.
- Gene: through DNA, determines the general leaf shape.

Setting up an AHP model to evaluate these basic factors

We solve this problem based on the reasons listed above. After analyzing all of them, we hold the opinion that human influence is usually fairly haphazard. Since we view all the leaves' living environments as stable, we do not take the artificial factor into consideration. We define the total impact as the target layer, and climate, disease, phytohormone and gene as the criterion layer, as shown in the following figure.

Figure: reasons for the various shapes

Pairwise comparison matrix structure

To analyze the effects of electric vehicles' widespread use on the environment, social, economic and hea