[Main Text]
Xi'an Technological University Graduation Design (Thesis) — Appendix C: Foreign Literature Translation

Neural Networks and Applications

When we talk about neural networks, we should really say "artificial neural networks" (ANNs), because that is what we mean most of the time. Biological neural networks are much more complicated in their elementary structures than the mathematical models we use for ANNs.

From a computational viewpoint, this is about a method of representing functions using networks of simple arithmetic computing elements, and about methods for learning such representations from examples. These networks represent functions in much the same way that circuits consisting of simple logic gates represent Boolean functions. Such representations are particularly useful for complex functions with continuous-valued outputs and large numbers of noisy inputs, where logic-based techniques sometimes have difficulty.

From a biological viewpoint, this chapter is about a mathematical model for the operation of the brain. The simple arithmetic computing elements correspond to neurons, the cells that perform information processing in the brain, and the network as a whole corresponds to a collection of interconnected neurons. For this reason, the networks are called neural networks. Besides their useful computational properties, neural networks may offer the best chance of understanding many psychological phenomena that arise from the specific structure and operation of the brain. We will therefore begin the chapter with a brief look at what is known about brains, because this provides much of the motivation for the study of neural networks.

A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it.
Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights. Some of the units are connected to the external environment and can be designated as input or output units. The weights are modified so as to bring the network's input/output behavior more into line with that of the environment providing the inputs. Each unit has a set of input links from other units, a set of output links to other units, a current activation level, and a means of computing the activation level at the next step in time, given its inputs and weights. The idea is that each unit does a local computation based on inputs from its neighbors, but without the need for any global control over the set of units as a whole. In practice, most neural network implementations are in software and use synchronous control to update all the units in a fixed sequence.

To build a neural network to perform some task, one must first decide how many units are to be used, what kind of units are appropriate, and how the units are to be connected to form a network. One then initializes the weights of the network and trains the weights using a learning algorithm applied to a set of training examples for the task. The use of examples also implies that one must decide how to encode the examples in terms of inputs and outputs of the network.

An ANN is a network of many very simple processors (units), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections.
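The local computation each unit performs, as described above, can be sketched in a few lines of Python. The sigmoid activation function, the three-input example, and the particular weight values are illustrative assumptions, not something specified by the text:

```python
import math

def unit_activation(inputs, weights, bias=0.0):
    """One unit's local step: the weighted sum of its inputs, squashed by a sigmoid."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# A unit with three input links; the activation always lies in (0, 1).
a = unit_activation([0.5, -1.0, 0.25], [0.8, 0.2, -0.5])
```

Because each unit needs only its own inputs and weights, every unit in a layer could evaluate this function independently and in parallel.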
The design motivation is what distinguishes neural networks from other mathematical techniques: a neural network is a processing device, either an algorithm or actual hardware, whose design was motivated by the design and functioning of human brains and components thereof. Most neural networks have some sort of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns. In other words, neural networks "learn" from examples, just as children learn to recognize dogs from examples of dogs, and exhibit some structural capability for generalization. Neural networks normally have great potential for parallelism, since the computations of the components are independent of each other.

In principle, NNs can compute any computable function, i.e. they can do everything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by feed-forward NNs (which is the most often used type). In practice, NNs are especially useful for mapping problems which are tolerant of some errors and have lots of example data available, but to which hard and fast rules cannot easily be applied. NNs are, at least today, difficult to apply to problems that concern manipulation of symbols and memory.

Neural networks are interesting for quite a lot of very dissimilar people: computer scientists want to find out about the properties of non-symbolic information processing with neural nets and about learning systems in general. Engineers of many kinds want to exploit the capabilities of neural networks in many areas (e.g. signal processing) to solve their application problems. Cognitive scientists view neural networks as a possible apparatus for describing models of thinking and consciousness (high-level brain function).

Suppose we want to construct a network for the restaurant problem. We have already seen that a perceptron is inadequate, so we will try a two-layer network.
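As a concrete instance of such a training rule, the classic perceptron update (adjust each weight by the error times the input, scaled by a learning rate) can be sketched as follows. The AND task, the learning rate, and the epoch count are illustrative choices; AND is linearly separable, so this rule converges on it:

```python
def predict(x, w, b):
    """Threshold unit: fires (1) when the weighted input sum exceeds zero."""
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_perceptron(examples, lr=0.1, epochs=25):
    """Adjust weights on each presented pattern using the perceptron rule."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in examples:
            err = t - predict(x, w, b)      # +1, 0, or -1
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

# Learn logical AND from four presented patterns.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

A single threshold unit like this is exactly what the text says is inadequate for the restaurant problem, which is why the next step is a two-layer network.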
We have ten attributes describing each example, so we will need ten input units. Learning in such a network proceeds in the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector th
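The forward pass of such a two-layer network (ten input units feeding a hidden layer, which feeds a single output unit) might be sketched as below. The hidden-layer size, the random weight initialization, and the use of sigmoid units are assumptions made for illustration, not details fixed by the text:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, weights):
    """Apply one layer: each row holds one unit's input weights plus a trailing bias."""
    return [sigmoid(row[-1] + sum(w * xi for w, xi in zip(row, x)))
            for row in weights]

random.seed(0)
n_inputs, n_hidden = 10, 4          # ten attributes -> ten input units
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
            for _ in range(n_hidden)]
w_out = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]]

hidden = layer([1.0] * n_inputs, w_hidden)
y = layer(hidden, w_out)[0]          # single output unit
```

Training would then compare `y` against the target output for each example and propagate the error back through both layers of weights.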