the target output and the measured value for the entire training set) falls below a predetermined threshold, or the maximum allowable number of epochs is reached. Although training is a time-consuming process, it can be done beforehand, offline. The trained neural network is then tested using data that was unseen during training.

MLPs are the simplest and most commonly used neural network architectures. They consist of input, output, and one or more hidden layers with a predefined number of neurons. The neurons in the input layer only act as buffers, distributing the input signals $x_i$ to the neurons in the hidden layer. Each neuron $j$ in the hidden layer sums up its input signals $x_i$, after weighting them with the strengths of the respective connections $w_{ji}$ from the input layer, and computes its output $y_j$ as a function $f$ of the sum, namely

    $y_j = f\left( \sum_i w_{ji} x_i \right)$    (1)

where $f$ is one of the activation functions used in ANNs. Training a neural network consists of adjusting the network weights using different learning algorithms. A learning algorithm gives the change $\Delta w_{ji}(t)$ in the weight of a connection between neurons $i$ and $j$ at time $t$. The weights are then updated according to the following formula:

    $w_{ji}(t+1) = w_{ji}(t) + \Delta w_{ji}(t)$    (2)

There are many learning algorithms available in the literature. The algorithms used to train the ANNs in this study are the Levenberg–Marquardt (LM), Broyden–Fletcher–Goldfarb–Shanno (BFGS), Bayesian Regularization (BR), conjugate gradient backpropagation with Fletcher–Reeves updates (CGF), and resilient backpropagation (RP) algorithms.

Neural Linearization

In this paper, the multilayered perceptron (MLP) neural network architecture is used as a neural linearizer. The proposed technique uses an ANN to evaluate the thermocouple temperature (the ANN output) when the thermocouple output voltage is given as the input. Training the ANN with one of the mentioned learning algorithms to calculate the temperature involves presenting it with different sets of input values and the corresponding measured values. Differences between the target output and the actual output of the ANN are evaluated by the learning algorithm to adapt the weights using equations (1) and (2).

The experimental data taken from thermocouple data sheets are used in this investigation. These data sheets are prepared for a particular reference junction temperature (usually 0 °C). The ANN is trained with 80 thermocouple temperatures, uniformly distributed between 200 and 1000 °C, obtained in the calibration phase. However, the performance of the final network on the training set is not an unbiased estimate of its performance on the universe of possible inputs, and an independent test set is required to evaluate the network performance after training. Therefore, another data set of 20 thermocouple temperatures, uniformly distributed between 200 and 1000 °C, is used in the test process. The input and output data tuples are normalized to a fixed range before training.

After several trials with different learning algorithms and different network configurations, it is found that the most suitable network configuration is 1 × 7 × 3 × 1 with the LM algorithm; that is, the first hidden layer has 7 neurons and the second hidden layer has 3 neurons. The input and output layers have the linear activation function, and the hidden layers have the hyperbolic tangent sigmoid activation function. The number of epochs for training is 1000. It is important to note that the criteria for too small and too large hidden-layer neuron numbers depend on a
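As a concrete illustration of equations (1) and (2), the following minimal NumPy sketch computes a single neuron's output and applies a weight update. The function names and the use of NumPy are illustrative assumptions; the actual $\Delta w_{ji}(t)$ would be supplied by whichever learning algorithm is chosen (LM, BFGS, BR, CGF, or RP).

```python
import numpy as np

def neuron_output(x, w_j, f=np.tanh):
    # Equation (1): y_j = f(sum_i w_ji * x_i)
    return f(np.dot(w_j, x))

def update_weights(w, delta_w):
    # Equation (2): w_ji(t+1) = w_ji(t) + delta_w_ji(t),
    # where delta_w is produced by the learning algorithm.
    return w + delta_w
```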
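The normalization range is not stated in this excerpt. A common min-max scheme, sketched here under the assumption of a symmetric [-1, 1] target range (which pairs naturally with tanh hidden units), would look like this:

```python
import numpy as np

def minmax_normalize(x, lo=-1.0, hi=1.0):
    # Linearly map the data onto [lo, hi]; the exact range used in the
    # paper is not given here, so [-1, 1] is an assumption.
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())
```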
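The study itself presumably used a neural network toolbox for training; purely as a sketch, the 1 × 7 × 3 × 1 network with tanh hidden layers and a linear output can be fitted with a Levenberg-Marquardt least-squares solver, as below. The synthetic data is a stand-in for the normalized voltage-temperature pairs, not the paper's actual calibration data, and the max_nfev cap loosely mirrors the 1000-epoch limit.

```python
import numpy as np
from scipy.optimize import least_squares

# Stand-in for 80 normalized (voltage, temperature) pairs from a
# thermocouple data sheet; NOT the paper's actual data.
rng = np.random.default_rng(0)
v = np.linspace(-1.0, 1.0, 80)                          # normalized voltage
t = np.tanh(1.5 * v) + 0.02 * rng.standard_normal(80)   # normalized temperature

def unpack(p):
    # Split the flat parameter vector into the 1 x 7 x 3 x 1 layer weights.
    W1, b1 = p[0:7].reshape(7, 1),   p[7:14]
    W2, b2 = p[14:35].reshape(3, 7), p[35:38]
    W3, b3 = p[38:41].reshape(1, 3), p[41:42]
    return W1, b1, W2, b2, W3, b3

def forward(p, x):
    # 1 x 7 x 3 x 1 MLP: tanh hidden layers, linear output, per the paper.
    W1, b1, W2, b2, W3, b3 = unpack(p)
    h1 = np.tanh(W1 @ x[None, :] + b1[:, None])
    h2 = np.tanh(W2 @ h1 + b2[:, None])
    return (W3 @ h2 + b3[:, None]).ravel()

def residuals(p):
    # Differences between the ANN output and the target temperatures.
    return forward(p, v) - t

p0 = 0.1 * rng.standard_normal(42)   # 42 weights and biases in total
fit = least_squares(residuals, p0, method='lm', max_nfev=1000)
print('final sum of squared errors:', np.sum(fit.fun ** 2))
```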