

Computer Science Foreign-Literature Translation: Scale-Up and Scale-Out - A Study Using Nutch/Lucene


[Overview] Major server vendors continue to ship ever more powerful machines, yet scale-out solutions, in the form of clusters of smaller machines, have recently gained broader acceptance in commercial computing. Such solutions are particularly effective for network-centric, high-throughput applications. In this paper we examine the two opposing approaches, scale-up and scale-out, in the context of an emerging parallel search application. Our results show that a scale-out strategy performs well even inside a scale-up machine, and that scale-out solutions offer better price/performance, although with added management complexity. The technology revolution that the computer industry set off in the early 1980s led it to capture most of the commercial-computing market of the 1990s. In the first phase of this commercial-computing revolution the advantages of scale-up were obvious: it was the only way to increase computing power. In addition, computer manufacturers can now more easily deploy rack-optimized and blade servers. Our study is based on two different systems: a scale-up system built on multithreaded POWER5 processors, and a scale-out cluster of comparable cost, allowing a fair price/performance comparison; the latter approach brings a significant improvement. We focus on the JS21 blade configurations used in our study. Nutch/Lucene is a framework for running search applications.

  

【正文】 pReduce provides a convenient way of addressing an important(though limited)class of reallife mercial applications by hiding parallelism and faulttolerance issues from the programmers, letting them focus on the problem domain. MapReduce was published by Google in 2020 and quickly became a defacto standard for this kind of workloads. Parallel indexing operations in the MapReduce model works as follows. First, the data to be indexed is partitioned into segments of approximately equal size. Each segment is then processed by a mapper task that generates the (key, value) pairs for that segment, where key is an indexing term and value is the set of documents that contain that term (and the location of the term in the document). This corresponds to the map phase, in MapReduce. In the next phase, the reduce phase, each reducer task collects all the pairs for a given key, thus producing a single index table for that key. Once all the keys are processed, we have the plete index for the entire data set. In most search applications, query represents the vast majority of the putation effort. When performing a query, a set of index terms is presented to a query engine, which then retrieves the documents that best match that set of terms. The overall architecture of the Nutch/Lucene parallel query engine is shown in Figure 3. The query engine part consists of one or more front ends, and one or more backends. Each backend is associated with a segment of the plete data set. The driver represents external users and it is also the point at which the performance of the query is measured, in terms of queries per second (qps). A query operation works as follows. The driver submits a particular query (set of index terms) to one of the frontends. The frontend then distributes the query to all the backends. Each backend is responsible for performing the query against its data segment and returning a list with the top documents (typically 10) that better match the query. Each document returned is associated with a score, which quantifies how good that match is. The frontend collects the response from all the backends to produce a single list of the top documents (typically 10 overall best matches). Once the frontend has that list, it contacts the backends to retrieve snippets of text around the index terms. Only snippets for the overall top documents are retrieved. The frontend contacts the backends one at a time, retrieving the snippet from the backend that had the corresponding document in its data segment. 5 Conclusions The first conclusion of our work is that scaleout solutions have an indisputable performance and price/performance advantage over scaleup for search workloads. The highly parallel nature of this workload,bined with a fairly predictable behavior in terms of processor, work and storage scalability, makes search a perfect candidate for scaleout. Furthermore, even within a scaleup system, it was more effective to adopt a “scaleoutina box” approach than a pure scaleup to utilize its processors efficiently. This is not too different from what has been observed in large sharedmemory systems for scientific and technical puting. In those machines, it is often more effective to run an MPI (scaleout) application within the machine than relying on sharedmemory (scaleup) programming. Scaleout systems are still in a significant disadvantage with respect to scaleup when it es to systems management. 
Using the traditional concept of management cost being proportional to the number of images, it is clear that a scaleout solution will have a higher management cost than a scaleup one.
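To make the indexing flow described above concrete, here is a minimal, self-contained Java sketch of the map and reduce phases for building an inverted index. It is not the actual Nutch/Lucene indexing code (in Nutch, indexing runs as parallel tasks on Hadoop's MapReduce implementation, which also handles shuffling and fault tolerance); the class, record, and method names here are illustrative assumptions, and everything runs in memory in a single process.

```java
import java.util.*;

// Minimal in-memory illustration of MapReduce-style index construction.
// Not the Nutch/Lucene implementation; names and structure are illustrative.
public class InvertedIndexSketch {

    // A posting records which document contains the term and where.
    record Posting(String docId, int position) {}

    // Map phase: one "mapper" per data segment emits (term, posting) pairs.
    static List<Map.Entry<String, Posting>> map(Map<String, String> segment) {
        List<Map.Entry<String, Posting>> pairs = new ArrayList<>();
        for (var doc : segment.entrySet()) {
            String[] terms = doc.getValue().toLowerCase().split("\\W+");
            for (int pos = 0; pos < terms.length; pos++) {
                if (!terms[pos].isEmpty()) {
                    pairs.add(Map.entry(terms[pos], new Posting(doc.getKey(), pos)));
                }
            }
        }
        return pairs;
    }

    // Reduce phase: one "reducer" per key (term) collects all postings for that
    // term, producing the index entry for that key.
    static Map<String, List<Posting>> reduce(List<Map.Entry<String, Posting>> pairs) {
        Map<String, List<Posting>> index = new TreeMap<>();
        for (var pair : pairs) {
            index.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
        }
        return index;
    }

    public static void main(String[] args) {
        // Two "segments" of roughly equal size, each handled by its own mapper.
        Map<String, String> segment1 = Map.of("doc1", "scale out beats scale up for search");
        Map<String, String> segment2 = Map.of("doc2", "search workloads scale out well");

        // Mappers run independently (in parallel, in a real MapReduce job).
        List<Map.Entry<String, Posting>> emitted = new ArrayList<>();
        emitted.addAll(map(segment1));
        emitted.addAll(map(segment2));

        // Reducers then build one index table per term; together these tables
        // form the complete index for the whole data set.
        reduce(emitted).forEach((term, postings) -> System.out.println(term + " -> " + postings));
    }
}
```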
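Similarly, the scatter-gather query flow can be sketched as follows: a front-end fans a query out to per-segment back-ends, merges their local top-k lists by score into a global top-k, and then fetches snippets only for the overall winners, one back-end at a time. Again, this is only an in-memory illustration under assumed names (Backend, Hit, a naive term-count score), not the Nutch/Lucene query engine.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative scatter-gather query flow: one front-end, several back-ends,
// each back-end owning one data segment. Not the actual Nutch/Lucene code.
public class QuerySketch {

    record Hit(String docId, double score) {}

    // A back-end scores documents in its own segment and returns its local top-k.
    static class Backend {
        private final Map<String, String> segment;  // docId -> document text

        Backend(Map<String, String> segment) { this.segment = segment; }

        List<Hit> query(Set<String> terms, int k) {
            return segment.entrySet().stream()
                    .map(e -> new Hit(e.getKey(), score(e.getValue(), terms)))
                    .filter(h -> h.score() > 0)
                    .sorted(Comparator.comparingDouble(Hit::score).reversed())
                    .limit(k)
                    .collect(Collectors.toList());
        }

        // Naive score: number of distinct query terms occurring in the document.
        private static double score(String text, Set<String> terms) {
            Set<String> words = new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
            return terms.stream().filter(words::contains).count();
        }

        // Snippet for a document this back-end owns, or null if it is not here.
        String snippet(String docId) {
            String text = segment.get(docId);
            return text == null ? null : text.substring(0, Math.min(40, text.length()));
        }
    }

    public static void main(String[] args) {
        List<Backend> backends = List.of(
                new Backend(Map.of("doc1", "scale out beats scale up for search workloads")),
                new Backend(Map.of("doc2", "systems management cost grows with the number of images")));
        Set<String> query = Set.of("search", "scale");
        int k = 10;

        // Scatter: the front-end sends the query to every back-end.
        // Gather: it merges the per-segment top-k lists into a global top-k by score.
        List<Hit> topDocs = backends.stream()
                .flatMap(b -> b.query(query, k).stream())
                .sorted(Comparator.comparingDouble(Hit::score).reversed())
                .limit(k)
                .collect(Collectors.toList());

        // Snippets are fetched one back-end at a time, only for the overall winners,
        // from the back-end whose segment holds each document.
        for (Hit hit : topDocs) {
            for (Backend b : backends) {
                String s = b.snippet(hit.docId());
                if (s != null) {
                    System.out.println(hit.docId() + " (" + hit.score() + "): " + s);
                    break;
                }
            }
        }
    }
}
```

Fetching snippets only after the global merge keeps the second round of back-end requests proportional to the number of winning documents rather than to the number of back-ends times k, which matches the behavior described in the text.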