

LAN Switch Architecture (Foreign Literature Translation)


This chapter first examines the switch from a Layer 2 perspective. A switch can forward frames in one of three modes:

- Cut-through mode
- Fragment-free mode
- Store-and-forward mode

Fragment-Free Mode
If a switch operates in fragment-free mode, it receives and examines the first 64 bytes of every frame. Why does the switch check the first 64 bytes? Because in a properly designed Ethernet network, collision fragments must be detectable within the first 64 bytes.

Switch Fabric
The switching fabric in a Catalyst switch can be compared to the transmission in a car: in a car, the transmission transfers power from the engine to the wheels; in a Catalyst switch, the fabric transfers frames from an input, or ingress, port to one or more output, or egress, ports. Although the Cisco Catalyst platforms have used a variety of technologies to implement the switching fabric, the following two fabric architectures are the most common:

- Shared bus
- Crossbar

Shared Bus
In a shared bus architecture, all ports receive every transmitted frame simultaneously. Because access to the bus is shared, a line card must wait its turn to communicate, which severely limits aggregate bandwidth. In early shared bus environments, the central arbiter moved traffic between the line cards using round-robin service; if frames are to be serviced in the order in which they are received, round-robin is the simplest approach. Figure 2-2 shows the round-robin service order. Figure 2-3 illustrates the fundamentals of moving a frame from a receiving (ingress) port to a transmitting (egress) port across a shared bus; the information added to the frame in Step 2 is used to determine which ports should transmit the frame.

Crossbar
The term crossbar means different things on different switch platforms, but it generally refers to the ability to use multiple simultaneous data channels, or paths, between line cards. Congestion within the crossbar fabric can also delay frame processing. If most or all ports are connected to high-speed file servers that generate a steady traffic stream, a single line module can exceed the bandwidth of the entire switch fabric; depending on the application, such oversubscribed ports may or may not be a problem. The newer SFM2 (Switch Fabric Module 2) supports the Catalyst 6513 (a 13-slot chassis) and is an architecturally optimized version of the SFM.

Buffering
To reduce congestion and prevent head-of-line blocking, high-speed switching ASICs use shared and per-port buffering. One way to gain the greatest benefit from buffering is to use flexible buffer sizes.

Shared Memory
Some of the earliest Cisco switch products used a shared memory design for port buffering. All ingress frames are stored in a shared memory "pool" until the egress port is ready to transmit them. The Catalyst 4000 series with Supervisor 1 uses 8 MB of SRAM (static random-access memory) as a dynamic frame buffer, and the Catalyst 4500 Supervisor IV uses 16 MB of SRAM for packet buffers.

Port Buffered Memory
With port buffered memory, a switch such as the Catalyst 5500 provides each Ethernet port with a certain amount of high-speed memory that buffers frames until they are transmitted. One important example is the use of per-port buffering: each port maintains a small ingress buffer and a larger egress buffer.
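As a rough illustration of the per-port buffering idea above, the following Python sketch models a port with a small ingress queue and a larger egress queue. The class name, queue depths, and drop policy are invented for illustration; they do not reflect actual Catalyst internals.

```python
from collections import deque

class BufferedPort:
    """Toy model of per-port buffering: a small ingress buffer and a
    larger egress buffer. Queue depths are arbitrary illustrative values."""

    def __init__(self, ingress_depth=4, egress_depth=32):
        self.ingress = deque(maxlen=ingress_depth)  # small input queue
        self.egress = deque(maxlen=egress_depth)    # larger output queue
        self.dropped = 0

    def receive(self, frame):
        # Tail-drop when the small ingress buffer overflows.
        if len(self.ingress) == self.ingress.maxlen:
            self.dropped += 1
        else:
            self.ingress.append(frame)

    def switch_one(self):
        # The fabric moves one frame from ingress to egress buffering.
        if self.ingress:
            frame = self.ingress.popleft()
            if len(self.egress) == self.egress.maxlen:
                self.dropped += 1   # drop on egress congestion
            else:
                self.egress.append(frame)

port = BufferedPort()
for i in range(6):   # a burst of 6 frames into a 4-slot ingress buffer
    port.receive(f"frame-{i}")
print(len(port.ingress), port.dropped)   # 4 2
```

The asymmetry (small ingress, large egress) mirrors the text: under normal load the fabric drains the input queue quickly, while congestion shows up on the output side, where the larger buffer absorbs bursts.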
Larger output buffers (64 KB to 512 KB, shared) allow frames to be queued for transmission during periods of congestion. During normal operation, only a small input queue is necessary because the switching bus services frames at a very high speed. In addition to queuing during congestion, many models of Catalyst switches are capable of separating frames into different input and output queues, providing preferential treatment, or priority queuing, for sensitive traffic such as voice. Chapter 8 discusses queuing in greater detail.

6. Forwarding Data
Regardless of the type of switch fabric, a decision must be made about which ports should forward a frame and which should flush or discard it. This decision can be based only on the information found at Layer 2 (source and destination MAC addresses), or on other factors such as Layer 3 (IP) and Layer 4 (port) information. Each switching platform supports various ASICs responsible for making these intelligent switching decisions. Each Catalyst switch creates a header, or label, for each packet, and forwarding decisions are based on this header or label. Chapter 3 includes a more detailed discussion of how the various platforms make forwarding decisions and ultimately forward data.

7. Summary
Although a wide variety of approaches exist to optimize the switching of data, many of the core concepts are closely related. The Cisco Catalyst line of switches focuses on the use of shared bus switching, crossbar switching, and combinations of the two, depending on the platform, to achieve very high-speed switching solutions. High-speed switching ASICs use shared and per-port buffers to reduce congestion and prevent head-of-line blocking. This chapter has introduced some of the fundamental concepts of LAN switching technology that all switch vendors adhere to.

Congestion and Head-of-Line Blocking
To gain access to the switching bus, each line card's local arbiter waits its turn for the central arbiter to grant access to the switching bus.
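To make the Layer 2 decision above concrete, here is a minimal sketch of the classic learn/forward/flood logic applied per frame. This is a deliberate simplification: real platforms implement it in ASICs using the header or label described above, and the function and table here are illustrative, not a Catalyst API.

```python
def l2_forward(mac_table, in_port, src_mac, dst_mac, all_ports):
    """Return the set of ports that should transmit the frame:
    a minimal learn/forward/flood sketch of a Layer 2 decision."""
    mac_table[src_mac] = in_port            # learn the source address
    out = mac_table.get(dst_mac)
    if dst_mac == "ff:ff:ff:ff:ff:ff" or out is None:
        # Broadcast or unknown unicast: flood everywhere but the ingress port.
        return set(all_ports) - {in_port}
    if out == in_port:
        return set()                        # filter: destination is local
    return {out}                            # known unicast: forward

table = {}
ports = [1, 2, 3, 4]
print(l2_forward(table, 1, "aa", "bb", ports))  # unknown dest -> flood {2, 3, 4}
print(l2_forward(table, 2, "bb", "aa", ports))  # learned dest -> forward {1}
```

Note how the reply frame is forwarded out a single port: the first frame's source address was learned, so flooding is no longer necessary in that direction.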
Once access is granted to the transmitting line card, the central arbiter has to wait for the receiving line card to fully receive the frame before servicing the next request in line. The situation is not much different from needing to make a simple deposit at a bank that has one teller and many lines, while the person currently being helped is conducting a complex transaction. In Figure 2-7, a congestion scenario is created using a traffic generator. Port 1 on the traffic generator is connected to Port 1 on the switch, generating traffic at a 50 percent rate, destined for both Ports 3 and 4. Port 2 on the traffic generator is connected to Port 2 on the switch, generating traffic at a 100 percent rate, destined only for Port 4. This situation creates congestion for traffic to be forwarded by Port 4 on the switch, because traffic equal to 150 percent of that port's forwarding capability is being sent. Without proper buffering and forwarding algorithms, traffic to be transmitted by Port 3 on the switch may have to wait until the congestion on Port 4 clears.

Figure 2-7. Head-of-Line Blocking

Head-of-line blocking can also be experienced with crossbar switch fabrics, because many, if not all, line cards have high-speed connections into the switch fabric. Multiple line cards may attempt to create a connection to the same destination line card at the same time.
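The head-of-line blocking scenario above can be mimicked in a few lines: with a single FIFO input queue, a frame headed for the congested Port 4 sits at the front and delays frames headed for the idle Port 3. The sketch also contrasts this with per-destination queues (virtual output queues, a standard remedy that this excerpt does not describe); the model is deliberately simplified, with each "frame" reduced to its destination port number.

```python
from collections import deque

def drain(queue, busy_port, ticks):
    """Serve a FIFO input queue for `ticks` cycles. Frames destined for
    busy_port cannot be transmitted and stay at the head of the queue,
    so nothing behind them moves either: head-of-line blocking."""
    sent = []
    for _ in range(ticks):
        if not queue:
            break
        if queue[0] == busy_port:   # head frame's egress port is congested
            continue                # the whole queue stalls this cycle
        sent.append(queue.popleft())
    return sent

# One FIFO: a frame for congested Port 4 blocks two frames for idle Port 3.
fifo = deque([4, 3, 3])
print(drain(fifo, busy_port=4, ticks=3))       # [] -- Port 3 frames are stuck

# Per-destination (virtual output) queues: Port 3 traffic is unaffected.
voq = {3: deque([3, 3]), 4: deque([4])}
print(drain(voq[3], busy_port=4, ticks=3))     # [3, 3]
```

In three service cycles the single FIFO delivers nothing, even though Port 3 is idle the entire time, which is exactly the pathology the buffering and forwarding algorithms mentioned above are designed to avoid.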