全文搜索引擎的設(shè)計與實現(xiàn)-外文翻譯-其他專業(yè)-全文預(yù)覽


The syntax of this command set is similar to other shells (e.g. bash, csh) that users are already familiar with. Here are some sample action/command pairs:

Action                                                   Command
Create a directory named /foodir                         bin/hadoop dfs -mkdir /foodir
Remove a directory named /foodir                         bin/hadoop dfs -rmr /foodir
View the contents of a file named /foodir/myfile.txt     bin/hadoop dfs -cat /foodir/myfile.txt

FS shell is targeted for applications that need a scripting language to interact with the stored data.

DFSAdmin

The DFSAdmin command set is used for administering an HDFS cluster. These are commands that are used only by an HDFS administrator. Here are some sample action/command pairs:

Action                                   Command
Put the cluster in Safemode              bin/hadoop dfsadmin -safemode enter
Generate a list of DataNodes             bin/hadoop dfsadmin -report
Decommission DataNode datanodename       bin/hadoop dfsadmin -decommission datanodename

Browser Interface

A typical HDFS install configures a web server to expose the HDFS namespace through a configurable TCP port. This allows a user to navigate the HDFS namespace and view the contents of its files using a web browser.

Space Reclamation

File Deletes and Undeletes

When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.

A user can undelete a file after deleting it as long as it remains in the /trash directory. To undelete a file, the user can navigate the /trash directory and retrieve it. The /trash directory contains only the latest copy of each deleted file. The /trash directory is just like any other directory, with one special feature: HDFS applies specified policies to automatically delete files from it. The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well-defined interface.

Decrease Replication Factor

When the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks, and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.
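The setReplication call mentioned above belongs to Hadoop's Java FileSystem API. The following is a minimal sketch of how a client might lower a file's replication factor; the file path and target factor are hypothetical, and configuration details vary across Hadoop versions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        // Locate the cluster (the NameNode address) from the standard
        // Hadoop configuration files on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file; ask the NameNode to lower its replication
        // factor to 2. The method returns true if the change is accepted.
        Path file = new Path("/foodir/myfile.txt");
        boolean accepted = fs.setReplication(file, (short) 2);
        System.out.println("Replication change accepted: " + accepted);

        // The call returns once the NameNode records the new factor.
        // Excess replicas are removed later, when Heartbeat replies tell
        // DataNodes to drop blocks, so free space appears after a delay.
        fs.close();
    }
}
```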
Chinese Translation (中文譯本)

Original URL:

I. Introduction

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware.

Jianghan University Graduation Thesis (Design): Foreign Literature Translation
Source text: The Hadoop Distributed File System: Architecture and Design
Chinese title: Hadoop Distributed File System: Architecture and Design
Name: XXXX    Student ID: 202108202137
April 8, 2021

English Original

The Hadoop Distributed File System: Architecture and Design

Source:

Introduction

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is

Assumptions and Goals

Hardware Failure

Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

Streaming Data Access

Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

Large Data Sets

Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Simple Coherency Model

HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model.
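To make the write-once-read-many model concrete, here is a minimal sketch against the same Hadoop FileSystem Java API; the path and data are invented for illustration, and error handling is omitted:

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/foodir/pages.txt"); // hypothetical path

        // Write once: create the file, stream data into it, close it.
        // After close(), the application never modifies the file again,
        // which is what keeps HDFS's coherency model simple.
        try (FSDataOutputStream out = fs.create(file)) {
            out.write("fetched page data\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read many: any number of clients can now stream the file back.
        try (FSDataInputStream in = fs.open(file)) {
            byte[] buf = new byte[128];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```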