

Foreign Literature Translation: Design of a High-Precision Temperature Measurement Device


The AD595 generates a voltage with a scale factor of 10 mV/°C (accuracy: ±…°C, sensitivity: 10 mV/°C). A precision voltage source, with an accuracy of ±…% in the range of … to …, is connected to the terminals of the analog multiplexer to generate tabulated thermocouple voltages, as shown in Figure 1(b). This voltage is used as the input of the ANN, and the thermocouple temperature without cold junction compensation is the output of the ANN.

In the operation phase (Figure 1(a)), the data taken from the Celsius thermometer output are used to perform the cold junction compensation. The output value of the ANN is shifted by the environment temperature obtained from the Celsius thermometer, and the resulting value is displayed on the VI as the thermocouple temperature.

The developed VI is used to acquire the data for the ANN training phase and to show the calculated temperature in the operation phase. Figure 2 shows the front panel of the VI. The main features associated with this instrument are the display of the measured temperature and the corresponding output voltage from the conditioning circuit, used for collecting the data in the calibration phase, and the display of the actual temperature with cold junction compensation in the operation phase. The system is controlled by software written for both the operation and calibration phases.

3 Artificial Neural Network

ANNs are based on the mechanism of the biologically inspired brain model. ANNs are feedforward networks and universal approximators. They are trained and learn through experience, not from programming. They are formed by interconnections of simple processing elements (neurons) with adjustable weights, which constitute the neural structure and are organized in layers. Each artificial neuron has weighted inputs, summation and activation functions, and an output. The behaviour of the overall ANN depends upon the operations of the artificial neurons mentioned above, the learning rule and the architecture of the network. During the training (learning), the weights between the neurons are adjusted until some criterion is met (the mean square error between the target output and the measured value over the whole training set falls below a predetermined threshold) or the maximum allowable number of epochs is reached. Although the training is a time-consuming process, it can be done beforehand, offline. The trained neural network is then tested using data that was previously unseen during training.

MLPs are the simplest and most commonly used neural network architectures. They consist of input, output and one or more hidden layers with a predefined number of neurons. The neurons in the input layer only act as buffers for distributing the input signals xi to the neurons in the hidden layer. Each neuron j in the hidden layer sums up its input signals xi, after weighting them with the strengths of the respective connections wji from the input layer, and computes its output yj as a function f of the sum, namely

yj = f(Σi wji xi)

where f is one of the activation functions used in ANNs. Training a neural network consists of adjusting the network weights using different learning algorithms. A learning algorithm gives the change Δwji(t) in the weight of the connection between neurons i and j at time t. The weights are then updated according to the following formula:

wji(t + 1) = wji(t) + Δwji(t)

There are many learning algorithms available in the literature. The algorithms used to train the ANNs in this study are Levenberg–Marquardt (LM), Broyden–Fletcher–Goldfarb–Shanno (BFGS), Bayesian Regularization (BR) and conjugate gradient backpropagation with Fletcher–Reeves update.
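To make the neuron computation yj = f(Σi wji xi) and the update wji(t + 1) = wji(t) + Δwji(t) concrete, the following is a minimal NumPy sketch of a single hidden neuron with a sigmoid activation trained by plain gradient descent. The sigmoid choice, learning rate, input values and target are illustrative assumptions; the study itself computes Δwji with the LM, BFGS, BR and conjugate gradient algorithms rather than this simple rule.

```python
import numpy as np

def sigmoid(s):
    """Logistic function, one possible activation f."""
    return 1.0 / (1.0 + np.exp(-s))

def neuron_output(w_j, x):
    """y_j = f(sum_i w_ji * x_i) for a single hidden neuron j."""
    return sigmoid(np.dot(w_j, x))

def update_weights(w_j, x, target, lr=0.5):
    """One update step w_ji(t+1) = w_ji(t) + delta_w_ji(t).

    Here delta_w_ji(t) = -lr * dE/dw_ji for the squared error
    E = 0.5 * (target - y_j)**2 (plain gradient descent); the paper's
    LM, BFGS, BR and CG trainers compute delta_w differently but apply
    the same update formula.
    """
    y = neuron_output(w_j, x)
    error = target - y
    grad = -error * y * (1.0 - y) * x   # dE/dw_ji with a sigmoid neuron
    return w_j - lr * grad              # w + delta_w, with delta_w = -lr * grad

# Illustrative (assumed) data: one normalized thermocouple voltage plus a bias input.
x = np.array([0.42, 1.0])
w = np.array([0.05, -0.10])             # assumed initial weights
for _ in range(1000):
    w = update_weights(w, x, target=0.37)
print(neuron_output(w, x))              # close to the assumed target of 0.37
```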
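Looking back at the operation phase described above, the sketch below isolates the cold junction compensation step: the ANN output (thermocouple temperature without cold junction compensation) is simply shifted by the environment temperature read from the Celsius thermometer before being shown on the VI. The function name and the numeric readings are hypothetical.

```python
def compensated_temperature(ann_temperature_c: float, cold_junction_c: float) -> float:
    """Shift the ANN estimate (obtained without cold junction compensation)
    by the environment temperature measured by the Celsius thermometer,
    giving the value displayed on the VI in the operation phase."""
    return ann_temperature_c + cold_junction_c

# Hypothetical readings for illustration only.
ann_out = 412.6   # °C, ANN output for the measured thermocouple voltage (assumed)
ambient = 24.3    # °C, environment / cold junction temperature (assumed)
print(compensated_temperature(ann_out, ambient))   # 436.9 °C shown on the VI
```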