These representations are particularly useful for complex functions with continuous-valued outputs and large numbers of noisy inputs, where the logic-based techniques sometimes have difficulty. From a biological viewpoint, this chapter is about a mathematical model for the operation of the brain. The simple arithmetic computing elements correspond to neurons (the cells that perform information processing in the brain), and the network as a whole corresponds to a collection of interconnected neurons. For this reason, the networks are called neural networks. Besides their useful computational properties, neural networks may offer the best chance of understanding many psychological phenomena that arise from the specific structure and operation of the brain. We will therefore begin the chapter with a brief look at what is known about brains, because this provides much of the motivation for the study of neural networks.

A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it. Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights. Some of the units are connected to the external environment and can be designated as input or output units. The weights are modified so as to bring the network's input/output behavior more into line with that of the environment providing the inputs.

Each unit has a set of input links from other units, a set of output links to other units, a current activation level, and a means of computing the activation level at the next step in time, given its inputs and weights. The idea is that each unit does a local computation based on inputs from its neighbors, without the need for any global control over the set of units as a whole (a minimal code sketch of this local computation is given below). In practice, most neural network implementations are in software and use synchronous control to update all the units in a fixed sequence.

To build a neural network to perform some task, one must first decide how many units are to be used, what kind of units are appropriate, and how the units are to be connected to form a network. One then initializes the weights of the network and trains the weights using a learning algorithm applied to a set of training examples for the task. The use of examples also implies that one must decide how to encode the examples in terms of the inputs and outputs of the network.

An ANN is a network of many very simple processors (units), each possibly having a small amount of local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections. The design motivation is what distinguishes neural networks from other mathematical techniques: a neural network is a processing device, either an algorithm or actual hardware, whose design was motivated by the design and functioning of the human brain and components thereof.

Most neural networks have some sort of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns. In other words, neural networks "learn" from examples, just as children learn to recognize dogs from examples of dogs, and they exhibit some structural capability for generalization. Neural networks normally have great potential for parallelism, since the computations of the components are independent of each other.
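As promised above, here is a minimal sketch of one unit's local computation in Python. The choice of a sigmoid for the activation function, and the function names themselves, are illustrative assumptions rather than anything specified in the text:

    import math

    def sigmoid(x):
        """A common (here assumed) choice for the activation function."""
        return 1.0 / (1.0 + math.exp(-x))

    def unit_activation(input_activations, weights):
        """One unit's local computation: form the weighted sum of the
        activations arriving on the input links, then apply the
        activation function. Only the unit's own weights and its
        neighbors' outputs are used; no global control is needed."""
        total = sum(w * a for w, a in zip(weights, input_activations))
        return sigmoid(total)

    # Example: a unit with three input links.
    print(unit_activation([1.0, 0.5, -0.2], [0.4, -0.6, 0.9]))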
In principle, NNs can compute any computable function; that is, they can do everything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by feedforward NNs (the most often used type). In practice, NNs are especially useful for mapping problems that are tolerant of some error and for which large amounts of example data are available, but to which hard and fast rules cannot easily be applied. NNs are, at least today, difficult to apply to problems that concern the manipulation of symbols and memory.

Neural networks are interesting to quite a lot of very dissimilar people: computer scientists want to find out about the properties of non-symbolic information processing with neural nets and about learning systems in general. Engineers of many kinds want to exploit the capabilities of neural networks in many areas (e.g., signal processing) to solve their application problems. Cognitive scientists view neural networks as a possible apparatus for describing models of thinking and consciousness (high-level brain function).

Suppose we want to construct a network for the restaurant problem. We have already seen that a perceptron is inadequate, so we will try a two-layer network. We have ten attributes describing each example, so we will need ten input units. Learning in such a network proceeds in the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector that matches the target, nothing is done. If there is an error (a difference between the output and the target), then the weights are adjusted to reduce this error. The trick is to assess the blame for an error and divide it among the contributing weights. In perceptrons this is easy, because there is only one weight between each input and the output. But in multilayer networks there are many weights connecting each input to an output, and each of these weights contributes to more than one output. The back-propagation algorithm is a sensible approach to dividing the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between each target output and the output actually computed by the network. At the output layer, the weight update rule is very similar to the rule for the perceptron. There are two differences: the activation of the hidden unit a_j is used instead of the input value, and the rule contains a term for the gradient of the activation function g.
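To make the two differences concrete, the following is a minimal sketch of the output-layer update, again assuming a sigmoid for g. The names alpha (the learning rate), a_j (the hidden unit's activation), err_i (target output minus actual output), and in_i (the output unit's total weighted input) are illustrative, not taken from the text:

    import math

    def g(x):
        """Sigmoid activation (an assumed choice for g)."""
        return 1.0 / (1.0 + math.exp(-x))

    def g_prime(in_i):
        """Gradient of the sigmoid: g'(in) = g(in) * (1 - g(in))."""
        return g(in_i) * (1.0 - g(in_i))

    def update_output_weight(w_ji, a_j, err_i, in_i, alpha=0.1):
        """Output-layer rule: like the perceptron rule, except that the
        hidden-unit activation a_j replaces the raw input value, and
        the update is scaled by g'(in_i), the gradient of the
        activation function at the output unit's weighted input:

            W_j,i  <-  W_j,i + alpha * a_j * err_i * g'(in_i)
        """
        return w_ji + alpha * a_j * err_i * g_prime(in_i)

When the output matches the target, err_i is zero and the weight is left unchanged, which matches the behavior described above for correctly classified examples.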