Logic programming, which is based on first-order predicate logic, has been widely used in AI applications. It is a programming style with attractive features. Because logic is regarded as a formalism unifying AI, data management and programming, it is the key feature of many new-generation projects, especially the FGCS. The principal idea behind logic programming is that an algorithm comprises two parts: a logical component and a control component. The user need only specify in the program what to do (i.e., the logical component), whereas it is the responsibility of the system to exercise control over how to do it.

One of the major features of logic programs is their nondeterminism. Compared with conventional programs, logic programs are nondeterministic in two senses: (1) when several clauses match a given procedure call, the search strategy by means of which the alternative clauses are tried is not determined; (2) when several calls need to be executed in a single goal statement, the order of execution is not determined. The nondeterminism of logic programs makes them suitable for AI applications (e.g., problem solving) and potentially rich in parallelism.

There has been continuous effort to improve the implementation efficiency of Prolog, which is a restriction of logic programming. However, past effort mainly focused on interpreter-based software implementations. It is only in recent years that Prolog-oriented computer architectures have been developed. The PSI-II in Japan and the PLM at Berkeley are two famous sequential Prolog machines, both based on the WAM, a high-performance compiler-based execution model for Prolog. Estimates have been made of the highest possible performance attainable by a sequential Prolog machine. To meet the demand for enormous inference speed in AI applications, the inherent parallelism in logic programs should be exploited.
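The two nondeterminisms can be made concrete with a minimal sketch of goal resolution in Python (not any particular Prolog system; the predicates and clauses are invented for illustration, and unification is omitted by using ground atoms only). Both orderings are left as parameters, mirroring the fact that the logic itself does not fix them:

```python
# Program: list of (head, body) clauses; facts have an empty body.
# Hypothetical family-relation predicates, ground atoms only.
PROGRAM = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", "tom", "ann"),
     [("parent", "tom", "bob"), ("parent", "bob", "ann")]),
]

def solve(goals, clause_order=lambda cs: cs, goal_order=lambda gs: gs):
    """Succeed if all goals are derivable from PROGRAM.
    clause_order: nondeterminism (1) -- the order in which matching
                  clauses are tried.
    goal_order:   nondeterminism (2) -- the order in which goals of a
                  statement are selected.
    A concrete system (e.g. Prolog) fixes both: textual clause order
    and left-to-right goal selection."""
    if not goals:
        return True                      # empty goal statement: success
    goals = list(goal_order(goals))
    first, rest = goals[0], goals[1:]
    # Collect the bodies of every clause whose head matches the goal.
    matching = [body for head, body in PROGRAM if head == first]
    for body in clause_order(matching):  # try alternatives in chosen order
        if solve(body + rest, clause_order, goal_order):
            return True
    return False                         # all alternatives exhausted
```

Any choice of the two orderings yields the same success/failure answer here; what changes is only the search behaviour, which is exactly what makes these choices a source of parallelism.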
There are many sources of parallelism in logic programs, of which the major ones are AND-parallelism, OR-parallelism and stream parallelism. Substantial research has been carried out on parallel computation models utilizing these parallelisms, and new logic programming languages suitable for parallel processing have been developed, for example Concurrent Prolog, GHC, Parlog and Delta-Prolog. Special architectures supporting efficient implementation of these languages have been investigated. Besides, multiprocessor systems have been constructed to greatly enhance the performance of parallel processing of logic programs and production systems.

THE DATAFLOW COMPUTATION

Traditional computers are challenged by applications that require tremendously high-speed computation, which has led computer scientists to study non-von Neumann architectures, of which the most promising is the dataflow architecture. A dataflow computation is one in which the operations are executed in an order determined by the data interdependencies and the availability of resources. In a dataflow program, the ordering of operations is not specified by the programmer, but is implied by the data interdependencies. Two varieties of dataflow computation can be distinguished: data-driven computations and demand-driven computations. In a data-driven computer, an instruction can be executed as soon as the input data it requires are available. After the instruction is executed, its result is made available to the successor instructions. In a demand-driven (i.e., reduction) system, an instruction is triggered when the results it produces are demanded by other instructions. These demands cause further demands for operands unless the operands are locally available, in which case the instruction is executed and the results are sent back.
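The data-driven firing rule can be sketched in a few lines of Python (an illustrative scheduler with invented names, not a description of any real machine). A node fires as soon as all its input tokens are available; the execution order comes only from the data dependencies, never from program text order:

```python
from collections import deque

# Dataflow graph for (a + b) * (a - b): node name -> (operation, input names).
GRAPH = {
    "sum":  (lambda x, y: x + y, ("a", "b")),
    "diff": (lambda x, y: x - y, ("a", "b")),
    "prod": (lambda x, y: x * y, ("sum", "diff")),
}

def run(initial_tokens):
    """Fire each node once its operand tokens have all arrived."""
    tokens = dict(initial_tokens)      # name -> value: the available data
    pending = deque(GRAPH)             # nodes that have not yet fired
    while pending:
        node = pending.popleft()
        op, inputs = GRAPH[node]
        if all(i in tokens for i in inputs):
            # Enabled: execute, making the result available to successors.
            tokens[node] = op(*(tokens[i] for i in inputs))
        else:
            pending.append(node)       # not yet enabled; retry later
    return tokens

result = run({"a": 5, "b": 3})
```

Note that "sum" and "diff" are both enabled as soon as "a" and "b" arrive, so a machine with two functional units could fire them simultaneously; this is the parallelism the data-driven model exposes for free.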
In both of these systems, as a result of data- (or demand-) activated instruction execution, many instructions can become available for execution at once, and it is possible to exploit all of the parallelism in the program. It is expected that these architectures can efficiently exploit concurrency of computation on a very large scale. A number of such systems are being developed around the world; most notably, the Japanese have chosen dataflow as the underlying architecture for their Fifth Generation machines. Dataflow and reduction architectures hold great promise, but some important problems must be solved before they can be used effectively to provide large-scale parallelism. One major problem of the dataflow architecture is its heavy overhead. To solve this problem, proposals have been made to combine dataflow with control flow and to exploit parallelism at the task level.
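For contrast, the demand-driven (reduction) rule can be sketched over the same kind of graph (again a toy with invented names, not a real reduction machine): a node computes only when its result is demanded, and each demand propagates further demands to operands that are not locally available:

```python
# Same example graph for (a + b) * (a - b).
GRAPH = {
    "sum":  (lambda x, y: x + y, ("a", "b")),
    "diff": (lambda x, y: x - y, ("a", "b")),
    "prod": (lambda x, y: x * y, ("sum", "diff")),
}

def demand(name, tokens, trace):
    """Return the value of `name`, computing it only on demand and
    recording the order in which demands are satisfied."""
    if name not in tokens:                 # operand not locally available:
        op, inputs = GRAPH[name]           # issue further demands for it
        tokens[name] = op(*(demand(i, tokens, trace) for i in inputs))
        trace.append(name)                 # result sent back to the demander
    return tokens[name]

trace = []
value = demand("prod", {"a": 5, "b": 3}, trace)
```

Here only the nodes transitively demanded by "prod" are ever computed; in a larger graph, reduction thereby avoids work whose result nobody asks for, at the cost of the demand-propagation traffic that contributes to the overhead discussed above.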