KB for receive or input buffers, and 168 KB for transmit or output buffers. Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues. (You learn more about head-of-line blocking later in this chapter in the section "Congestion and Head-of-Line Blocking.") In normal operations, the input queue is never used for more than one frame, because the switching bus runs at a high speed. Figure 2-5 illustrates port buffered memory.

Figure 2-5. Port Buffered Memory

Shared Memory

Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports access to that memory at the same time in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory pool until the egress ports are ready to transmit. Switches dynamically allocate the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers for idle ports. The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO).

More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of static RAM (SRAM) as dynamic frame buffers. All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers. Shared memory buffer sizes may vary depending on the platform, but are most often allocated in increments ranging from 64 to 256 bytes. Figure 2-6 illustrates how incoming frames are stored in 64-byte increments in shared memory until switched by the switching engine.

Figure 2-6. Shared Memory Architecture

4. Oversubscribing the Switch Fabric

Switch manufacturers use the term nonblocking to indicate that some or all of the switched ports have connections to the switch fabric equal to their line speed. For example, an 8-port Gigabit Ethernet module would require 8 Gbps of bandwidth into the switch fabric for the ports to be considered nonblocking. All but the highest-end switching platforms and configurations have the potential of oversubscribing access to the switching fabric.

Depending on the application, oversubscribing ports may or may not be an issue. For example, a 48-port 10/100/1000 Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. If many or all ports were connected to high-speed file servers capable of generating consistent streams of traffic, this one line module could outstrip the bandwidth of the entire switching fabric. If the module is connected entirely to end-user workstations with lower bandwidth requirements, a card that oversubscribes the switch fabric may not significantly impact performance. Cisco offers both nonblocking and blocking configurations on various platforms, depending on bandwidth requirements. Check the specifications of each platform and the available line cards to determine the aggregate bandwidth of the connection into the switch fabric.
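To make the oversubscription arithmetic concrete, the short Python sketch below compares a line card's worst-case ingress demand with its connection into the switch fabric. The port count, port speed, fabric bandwidth, and the helper name oversubscription_ratio are illustrative assumptions, not the specifications of any particular Catalyst module.

# Hypothetical oversubscription check for a line card's fabric connection.
# All numbers below are illustrative assumptions, not published Catalyst specs.

def oversubscription_ratio(ports: int, port_speed_gbps: float,
                           fabric_gbps: float) -> float:
    """Return aggregate port demand divided by fabric connection bandwidth."""
    aggregate_demand = ports * port_speed_gbps   # worst case: every port at line rate
    return aggregate_demand / fabric_gbps

# Example: a 48-port Gigabit Ethernet module feeding an assumed 8 Gbps
# connection into the switch fabric.
ratio = oversubscription_ratio(ports=48, port_speed_gbps=1.0, fabric_gbps=8.0)

if ratio <= 1.0:
    print(f"Nonblocking: demand is {ratio:.1f}x the fabric connection")
else:
    print(f"Oversubscribed {ratio:.1f}:1 -- often acceptable for end-user ports, "
          "risky for ports facing busy file servers")

Run as written, the example reports a 6.0:1 oversubscription, which mirrors the 48 Gbps demand described above against a smaller fabric connection.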
5. Congestion and Head-of-Line Blocking

Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents or blocks traffic destined elsewhere from being transmitted. Head-of-line blocking occurs most often when multiple high-speed data sources are sending to the same destination. In the earlier shared bus example, the central arbiter used the round-robin service approach to moving traffic from one line card to another. Ports on each line card request access to transmit via a local arbiter. In turn, each line card's …

This chapter begins by describing how a switch accepts data. Depending on the specific model, the number of frames a switch stores and examines before switching them also varies. In the documentation for some Cisco Catalyst switches, fragment-free switching is also referred to as "fast-forward" mode. Regardless of the specific model, and no matter when a new switching platform is introduced, the documentation treats this "transmission mechanism" as the switch fabric. To give particular traffic flows priority service, current Catalyst platforms such as the Catalyst 6500 support a wide range of QoS (Quality of Service) features. In addition, the hardware responsible for the forwarding decision also receives the frame. The solution that overcomes the limitations of the shared bus architecture is the crossbar switching fabric, shown in Figure 2-4.

3. Data Buffering

Before a frame is transmitted across a shared bus architecture, it must wait for the central arbiter to schedule it. The drawback of port buffered memory is that frames are dropped when a port exhausts its buffers. In a switch with a shared memory architecture, all ports can access that memory at the same time in the form of shared frame or packet buffers. A central processor or ASIC switches the frames and stores them in packet buffers until they are switched. For example, a 48-port 10/100/1000 Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. Ports on each line card request permission to transmit traffic through a local arbiter. Without proper buffering and forwarding algorithms, traffic to be forwarded out switch port 3 would have to wait until the congestion on port 4 clears. In addition to queuing during congestion, many Catalyst switch models can also separate traffic into different input and output queues, so that sensitive traffic such as voice can receive higher-priority treatment or priority queuing.
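As a rough illustration of the port 3/port 4 scenario above, the following Python sketch contrasts a single FIFO input queue, where a frame stuck behind a congested egress port blocks frames bound for an idle port, with simple per-destination (virtual output) queues. The queue model, port numbers, and function names are simplified assumptions for illustration, not a description of how any Catalyst ASIC is implemented.

from collections import deque

# Minimal head-of-line blocking illustration under a simplified model:
# egress port 4 is congested (cannot accept frames), egress port 3 is free.
congested_ports = {4}

def serve_single_fifo(frames):
    """One FIFO input queue: a blocked frame at the head stalls everything behind it."""
    queue, sent = deque(frames), []
    while queue:
        if queue[0] in congested_ports:
            break                      # head of line is blocked -> nothing else moves
        sent.append(queue.popleft())
    return sent, list(queue)

def serve_virtual_output_queues(frames):
    """Per-destination queues: congestion on port 4 does not block traffic for port 3."""
    sent = [dst for dst in frames if dst not in congested_ports]
    stuck = [dst for dst in frames if dst in congested_ports]
    return sent, stuck

arrivals = [4, 3, 3, 4, 3]                    # destination port of each ingress frame
print(serve_single_fifo(arrivals))            # ([], [4, 3, 3, 4, 3]): port 3 traffic waits
print(serve_virtual_output_queues(arrivals))  # ([3, 3, 3], [4, 4]): port 3 traffic flows

In the single-FIFO case nothing is forwarded until port 4 drains, which is exactly the head-of-line blocking that the output-queued buffering described earlier in this chapter is intended to avoid.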