Shenzhen Kai Mo Rui Electronic Technology Co. LTD

News

Six Basic Knowledge Points of Core Switch

Source: Shenzhen Kai Mo Rui Electronic Technology Co. LTD | 2026-05-07

1. Backplane Bandwidth

Also known as switching capacity, backplane bandwidth refers to the maximum data throughput between the switch's interface processor/interface card and the data bus. It can be likened to the total number of lanes on an overpass. Since all communication between ports must cross the backplane, the bandwidth the backplane provides becomes the bottleneck for concurrent communication among ports. The higher the backplane bandwidth, the more bandwidth is available to each port and the faster data is switched; the lower it is, the less bandwidth each port receives and the slower switching becomes. In short, backplane bandwidth determines a switch's data processing capability: the higher the backplane bandwidth, the stronger the data processing performance. To achieve full-duplex non-blocking transmission in a network, a minimum backplane bandwidth requirement must be met.

Calculation formula:

Backplane Bandwidth = Number of Ports × Port Rate × 2

Note: For a Layer 3 switch, only when both the forwarding rate and the backplane bandwidth meet the minimum requirements is the switch qualified; neither is dispensable.

Example: For a switch with 24 Gigabit (1000 Mbps) ports:

\(24 \times 1000 \times 2 \div 1000 = 48\ \text{Gbps}\)

2. Layer 2 & Layer 3 Packet Forwarding Rate

Network data consists of individual packets, and processing each packet consumes system resources. Forwarding rate (also called throughput) refers to the number of packets passing through per unit time without packet loss. Throughput is
equivalent to the traffic flow on an overpass. As one of the most critical parameters of a Layer 3 switch, it indicates the overall performance of the device. Insufficient throughput creates a network bottleneck and degrades the transmission efficiency of the entire network. A qualified switch should support wire-speed switching, meaning its switching speed matches the data transmission speed of the link, eliminating switching bottlenecks to the greatest extent. For a core Layer 3 switch to achieve non-blocking transmission, the actual traffic rate must not exceed the nominal Layer 2 and Layer 3 packet forwarding rates; under that condition, the switch can perform wire-speed switching at both Layer 2 and Layer 3.

Calculation formula for throughput (Mpps):

Throughput (Mpps) = Number of 10G ports × 14.88 Mpps + Number of Gigabit ports × 1.488 Mpps + Number of Fast Ethernet ports × 0.1488 Mpps

If the calculated required throughput is lower than the switch's rated throughput, the device can achieve wire-speed forwarding. Only count the port types that are physically present; omit those not equipped.

Example: A switch with 24 Gigabit ports requires a full-configuration throughput of \(24 \times 1.488 = 35.712\ \text{Mpps}\); this ensures non-blocking packet switching when all ports run at wire speed. Similarly, a switch with a maximum of 176 Gigabit ports needs a throughput of at least \(261.888\ \text{Mpps}\) for a true non-blocking
architecture design.

Origin of 1.488 Mpps

Wire-speed packet forwarding is measured by the number of minimum-size 64-byte packets transmitted per unit time. For Gigabit Ethernet:

\[\frac{1\,000\,000\,000\ \text{bps}}{8\ \text{bits/byte}} \div (64+8+12)\ \text{bytes} \approx 1\,488\,095\ \text{pps}\]

Explanation: On the wire, each 64-byte Ethernet frame carries fixed overhead, including an 8-byte frame header (preamble) and a 12-byte inter-frame gap, for 84 bytes in total. Therefore, the wire-speed forwarding rate of one Gigabit Ethernet port for 64-byte packets is 1.488 Mpps.

Fast Ethernet wire-speed port: 0.1488 Mpps (1/10 of Gigabit)
Gigabit Ethernet wire-speed port: 1.488 Mpps
10G Ethernet wire-speed port: 14.88 Mpps

In practical selection, these standard values can be used directly. To sum up, a core switch that meets both indicators, backplane bandwidth and packet forwarding rate, is regarded as a true wire-speed non-blocking device; generally, only switches satisfying both are qualified. A switch with relatively large backplane bandwidth but small throughput usually either reserves room for upgrades and expansion, or suffers from inefficient software or a flawed dedicated-chip circuit design. A switch with relatively small backplane bandwidth but large throughput delivers higher overall performance. In addition, manufacturers' claimed backplane bandwidth is generally credible, while the rated throughput cannot be fully trusted: it is a theoretical design value that is difficult to test and has limited practical reference significance.

3. Scalability

Scalability mainly includes two aspects:

1. Number of Slots

Slots are used to install various functional modules and interface modules. Since each interface module provides a fixed number of ports, the
number of slots fundamentally determines the maximum port capacity of the switch. Moreover, every functional module (supervisor engine module, IP voice module, extended service module, network monitoring module, security service module, and so on) also occupies one slot, so slot quantity directly defines the switch's scalability.

2. Module Types

Undoubtedly, the more module types a switch supports (e.g., LAN interface modules, WAN interface modules, ATM interface modules, and extended function modules), the stronger its scalability. Taking LAN interface modules as an example, they should cover RJ-45, GBIC, SFP, and 10 Gbps modules to adapt to the complex environments and network applications of medium and large networks.

4. Layer 4 Switching

Layer 4 switching enables fast access to network services. Unlike Layer 2 switching based on MAC addresses or Layer 3 routing based on
source/destination IP addresses, Layer 4 switching also takes TCP/UDP application port numbers as a forwarding basis and is designed for high-speed intranet applications.

Besides load balancing, Layer 4 switches support traffic control based on application type and user ID. Deployed directly in front of servers, a Layer 4 switch identifies application session content and user permissions, making it an ideal solution for preventing unauthorized access to servers.

5. Module Redundancy

Redundancy capability guarantees secure network operation. No manufacturer can guarantee zero equipment failure during operation.
Rapid service failover when a fault occurs depends entirely on device redundancy. For a core switch, key components should be redundant, such as the management (supervisor) module and the power supplies, to ensure maximum network stability.

6. Routing Redundancy

Protocols such as HSRP and VRRP are adopted to realize load sharing and hot backup between core devices. When any core switch or aggregation switch fails, the Layer 3 routing devices and virtual gateways can fail over quickly, implementing dual-link redundant backup and ensuring the stability of the entire network.
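To tie the numbers together, the sizing formulas above (backplane bandwidth, wire-speed packet forwarding rate, and the 1.488 Mpps derivation) can be sketched as a short script. This is a minimal illustration of the article's arithmetic only; the function names are my own and not part of any vendor tool.

```python
# Quick sizing check based on the two formulas in this article.

def backplane_bandwidth_gbps(num_ports: int, port_rate_mbps: int) -> float:
    """Backplane Bandwidth = Number of Ports x Port Rate x 2 (full duplex)."""
    return num_ports * port_rate_mbps * 2 / 1000  # Mbps -> Gbps


def wire_speed_mpps(port_rate_gbps: float) -> float:
    """Wire-speed forwarding rate of one port for 64-byte packets.

    Each 64-byte frame occupies 64 + 8 (frame header/preamble)
    + 12 (inter-frame gap) = 84 bytes on the wire.
    """
    bits_per_second = port_rate_gbps * 1_000_000_000
    return bits_per_second / 8 / (64 + 8 + 12) / 1_000_000  # pps -> Mpps


# The article's 24-port Gigabit example:
print(backplane_bandwidth_gbps(24, 1000))   # 48.0 Gbps
print(round(wire_speed_mpps(1), 3))         # 1.488 Mpps per Gigabit port
# The article rounds 1.488 first (24 x 1.488 = 35.712 Mpps);
# the unrounded value gives about 35.714 Mpps.
print(round(24 * wire_speed_mpps(1), 3))
```

A switch whose rated backplane bandwidth and throughput meet or exceed the values this check produces satisfies the article's criteria for wire-speed non-blocking operation.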

