Effects of input and weight deviations on MLPs' sensitivity
– The sensitivity of an MLP increases with the input and weight deviations.

Effects of the number of neurons in a layer
– Sensitivity of the MLPs { n-2-2-1 | 1 ≤ n ≤ 10 } to the dimension of the input.
– Sensitivity of the MLPs { 2-n-2-1 | 1 ≤ n ≤ 10 } to the number of neurons in the 1st layer.
– Sensitivity of the MLPs { 2-2-n-1 | 1 ≤ n ≤ 10 } to the number of neurons in the 2nd layer.

A pruning experiment for the function F(x1, x2) = x1 xor x2
– The pruning processes start with MLPs of architecture 2-5-1 and stop at an architecture of 2-4-1.
– The relevant data used by and resulting from the pruning process are listed in Table 3 and Table 4.

TABLE 3. Data for 3 MLPs with 5 hidden neurons to realize the function F(x1, x2) = x1 xor x2 (goal = , epoch ≤ 10^5).

MLP 2-5-1: Epoch, MSE (training), MSE (testing), Trained weights and bias, Sensitivity, Relevance
1: 30586, [ ] [ ] [ ] [ ] [ ] [ ], bias = 0
2: 65209, [ ] [ ] [ ] [ ] [ ] [ ], bias = 0
3: 26094, [ ] [ ] [ ] [ ] [ ] [ ], bias = 0

TABLE 2. Data for the 3 pruned MLPs with 4 hidden neurons to realize the function F(x1, x2) = x1 xor x2 (goal = , epoch = 100000).

MLP 2-4-1: Epoch, MSE (training), MSE (testing), Retrained weights and bias, Sensitivity, Relevance
1 (obtained by removing the 4th neuron from the MLP of 2-5-1): 22611, [ ] [ ] [ ] [ ] [ ], bias =
2 (obtained by removing the 2nd neuron from the MLP of 2-5-1): 14457, [ ] [ ] [ ] [ ] [ ], bias =
3 (obtained by removing the 3rd neuron from the MLP of 2-5-1): 17501, [ ] [ ] [ ] [ ] [ ], bias =
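The claim that sensitivity grows with the input and weight deviations can be checked numerically. The sketch below is illustrative only: it assumes a 2-5-1 MLP with sigmoid hidden units and Gaussian deviations, and the function `sensitivity`, the sampling scheme, and all parameter values are my assumptions, not taken from the slides. It perturbs the inputs and weights by deviations of magnitude `sigma` and reports the mean absolute change of the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Forward pass of a 2-5-1 MLP with sigmoid hidden units."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return h @ W2 + b2

def sensitivity(x, W1, b1, W2, b2, sigma, n_samples=500):
    """Monte Carlo estimate of sensitivity: mean |output change| under
    Gaussian deviations of standard deviation sigma applied to both
    the inputs and the weights (an illustrative definition)."""
    y0 = mlp(x, W1, b1, W2, b2)
    total = 0.0
    for _ in range(n_samples):
        xd = x + rng.normal(0.0, sigma, x.shape)
        W1d = W1 + rng.normal(0.0, sigma, W1.shape)
        W2d = W2 + rng.normal(0.0, sigma, W2.shape)
        total += np.abs(mlp(xd, W1d, b1, W2d, b2) - y0).mean()
    return total / n_samples

# A random 2-5-1 MLP evaluated on the four xor input patterns.
W1 = rng.normal(size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

s_small = sensitivity(X, W1, b1, W2, b2, sigma=0.01)
s_large = sensitivity(X, W1, b1, W2, b2, sigma=0.1)
# Larger deviations give a larger estimated sensitivity: s_small < s_large.
```

Repeating the comparison over several values of sigma traces out the monotone growth of sensitivity with deviation magnitude stated above.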
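The pruning process described above (train a 2-5-1 MLP on x1 xor x2, rank the hidden neurons, remove one, then retrain the resulting 2-4-1 MLP) can be sketched as follows. The relevance measure used here, the increase in training MSE when a neuron's outgoing weight is zeroed, as well as the learning rate and the goal value, are assumptions for illustration; the slides do not specify them in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])        # targets: x1 xor x2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W1, b1, W2, b2, lr=0.5, goal=1e-3, max_epoch=100_000):
    """Plain batch backprop; stops when the training MSE reaches `goal`
    or the epoch bound is hit (goal and lr are illustrative values)."""
    for epoch in range(1, max_epoch + 1):
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        e = y - T
        mse = float((e ** 2).mean())
        if mse <= goal:
            return epoch, mse
        dy = e * y * (1 - y)                       # output-layer delta
        dh = (dy @ W2.T) * h * (1 - h)             # hidden-layer delta
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
    return max_epoch, mse

# Train a 2-5-1 MLP on xor.
W1 = rng.normal(size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
epoch, mse = train(W1, b1, W2, b2)

def relevance(i):
    """Increase in training MSE when hidden neuron i is removed
    (its outgoing weight zeroed) -- a simple relevance proxy."""
    W2z = W2.copy(); W2z[i] = 0.0
    y = sigmoid(sigmoid(X @ W1 + b1) @ W2z + b2)
    return float(((y - T) ** 2).mean()) - mse

# Prune the least relevant hidden neuron, giving a 2-4-1 MLP, then retrain.
worst = int(np.argmin([relevance(i) for i in range(5)]))
keep = [i for i in range(5) if i != worst]
W1p, b1p = W1[:, keep].copy(), b1[keep].copy()
W2p, b2p = W2[keep].copy(), b2.copy()
epoch2, mse2 = train(W1p, b1p, W2p, b2p)
```

The epochs and retrained weights recorded by such a run are the kind of data the tables above tabulate for three independently initialized MLPs, one row per network.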