Data center core switch overview

As information technology develops and the Internet becomes ubiquitous, the demands placed on data center network equipment keep rising, and ordinary switches often cannot meet them. Data center-grade core switches are characterized by high quality-of-service assurance and traffic identification and control capabilities, which ensure stable and reliable data transmission, higher security, simpler networking, and faster service deployment. In addition, network mapping tools can give IT personnel an overview of network operations and help them detect abnormal activity faster.

What is a data center core switch?

A core switch is not a type of switch but rather a switch placed at the core layer (the backbone of the network). It sits at the top of the three-tier network architecture, much like the top manager of a company. Its main role is to forward data from the aggregation layer at high speed and thereby provide a fast, reliable network backbone. Generally speaking, a core switch has many ports and high bandwidth. Compared with access and aggregation switches, it offers higher reliability, redundancy, and throughput, with relatively low latency. For a network with more than 100 computers to run stably and at high speed, a core switch is essential.

What is the difference between a core switch and a regular switch?

  1. Differences in ports

An ordinary switch generally has 24 to 48 ports, most of which are Gigabit or Fast Ethernet (100M) ports. It is mainly used to connect end users or to aggregate traffic from access-layer switches. At most, such a switch can be configured with VLANs, simple routing protocols, and basic SNMP functions, and its backplane bandwidth is relatively small.

A core switch has many more ports and is usually modular, so optical ports and Gigabit Ethernet ports can be combined freely. Core switches are generally Layer 3 switches and support advanced network features such as routing protocols, ACLs, QoS, and load balancing. More importantly, core switches have much higher backplane bandwidth than ordinary switches and usually have separate supervisor/engine modules. For example, to upgrade a network from 10G to 40G, you can use a 40G QSFP+ to 4x10G SFP+ breakout solution: insert four 10G SFP+ optical modules into the 10Gbps SFP+ ports of one switch, insert one 40G QSFP+ optical module into the 40Gbps QSFP+ port of the other switch, and connect them with a breakout fiber patch cable.
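
As a quick sanity check on that breakout example, here is a minimal Python sketch. The port speeds and lane count come from the example above; nothing is assumed about any particular switch model.

```python
# Minimal sketch: check that a 40G QSFP+ port broken out into 4 x 10G SFP+
# lanes preserves aggregate capacity. Values come from the breakout example
# above; they are illustrative, not tied to a specific switch model.

QSFP_PLUS_GBPS = 40   # capacity of one 40G QSFP+ port
SFP_PLUS_GBPS = 10    # capacity of one 10G SFP+ port
BREAKOUT_LANES = 4    # a 40G port splits into four 10G lanes

def breakout_capacity_gbps(lanes: int, lane_gbps: int) -> int:
    """Aggregate capacity of a breakout connection in Gbps."""
    return lanes * lane_gbps

aggregate = breakout_capacity_gbps(BREAKOUT_LANES, SFP_PLUS_GBPS)
print(f"4 x 10G SFP+ = {aggregate} Gbps; 1 x 40G QSFP+ = {QSFP_PLUS_GBPS} Gbps")
assert aggregate == QSFP_PLUS_GBPS  # the breakout matches the 40G port
```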

  2. Difference in user connection or access to the network

Generally, the part of the network that directly faces user connections is called the access layer, and the part between the access layer and the core layer is called the distribution or aggregation layer. The purpose of the access layer is to let end users connect to the network, so access switches are characterized by low cost and high port density. An aggregation switch is the aggregation point for multiple access switches: it must be able to handle all the traffic from the access-layer devices and provide uplinks to the core layer, so it needs higher performance, fewer interfaces, and a higher switching rate.
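
To make the "handle all traffic from the access layer" requirement concrete, the following sketch computes the oversubscription ratio of a hypothetical aggregation switch. The port counts and speeds are made-up illustration values, not figures from the article.

```python
# Minimal sketch: oversubscription ratio of an aggregation switch.
# Port counts and speeds are hypothetical illustration values.

ACCESS_DOWNLINKS = 48        # 48 x 1G ports facing access switches
DOWNLINK_GBPS = 1
CORE_UPLINKS = 4             # 4 x 10G uplinks toward the core layer
UPLINK_GBPS = 10

downstream = ACCESS_DOWNLINKS * DOWNLINK_GBPS   # 48 Gbps of possible demand
upstream = CORE_UPLINKS * UPLINK_GBPS           # 40 Gbps toward the core

ratio = downstream / upstream
print(f"Oversubscription ratio: {ratio:.1f}:1")  # 1.2:1 here; 1:1 would be non-blocking
```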


The network backbone is called the core layer. Its main purpose is to provide an optimized, reliable backbone transmission structure through high-speed forwarding, so core-layer switches must deliver higher reliability, performance, and throughput.

What are the characteristics of data center core switches?

Compared with ordinary switches, data center switches have the following characteristics: large buffers, high capacity, virtualization, FCoE, and Layer 2 TRILL technology.

·Large buffer technology

Data center switches move away from the egress-port buffering of traditional switching systems and adopt a distributed buffering architecture, giving them far more packet buffer than an ordinary switch. The buffer capacity of a core switch can exceed 1 GB, while that of an ordinary switch is typically only 2-4 MB. Core switches also need to be paired with high-performance fiber optic transceivers. Under bursty traffic, the large buffer of a core switch can still keep forwarding with zero packet loss, which suits the large number of servers and heavy traffic bursts found in data centers.
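
The effect of buffer size under bursty traffic can be illustrated with a toy queue simulation. The sketch below is a simplification (fixed-size packets, a single egress port, made-up burst length and line rates) intended only to show why a small buffer drops packets during a burst while a large one does not.

```python
# Toy simulation: a burst arrives at an egress port faster than it can drain.
# All numbers (burst size, rates, packet size) are illustrative assumptions.

PACKET_BYTES = 1500

def drops_during_burst(buffer_bytes: int, burst_packets: int,
                       arrival_gbps: float, drain_gbps: float) -> int:
    """Count packets dropped when a burst arrives at arrival_gbps and the
    port drains at drain_gbps, with at most buffer_bytes of queue space."""
    queue = 0.0        # bytes currently buffered
    dropped = 0
    drain_per_packet = PACKET_BYTES * (drain_gbps / arrival_gbps)
    for _ in range(burst_packets):
        queue = max(0.0, queue - drain_per_packet)  # drain while one packet arrives
        if queue + PACKET_BYTES <= buffer_bytes:
            queue += PACKET_BYTES                   # enqueue the arriving packet
        else:
            dropped += 1                            # buffer full: tail drop
    return dropped

# Burst of 100,000 packets (~150 MB) arriving at 40G into a 10G egress port.
for name, size in [("2 MB buffer", 2 * 1024**2), ("1 GB buffer", 1024**3)]:
    print(name, "drops:", drops_during_burst(size, 100_000, 40, 10))
```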

·High-capacity equipment

Network traffic in data centers is characterized by high-density application scheduling and burst traffic that must be absorbed, whereas ordinary switches are designed mainly for basic interconnection and interoperability: they cannot precisely identify and control services, nor respond quickly with zero packet loss under heavy load to ensure business continuity. Ordinary switches therefore cannot meet data center needs, and data center switches must offer high-capacity forwarding. They must support high-density 10GbE boards, i.e. 48-port 10GbE line cards, and to forward at full wire speed on such boards they have to adopt a distributed Clos switching architecture. In addition, as 40G and 100G become widespread, 8-port 40G boards and 4-port 100G boards are becoming commercially available to meet the demand for high-density applications in data centers. Looking further ahead, bringing 400G networks into widespread use will depend on data center core switches and 400G optical transceivers.
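
A rough capacity calculation shows why full wire-speed forwarding on dense 10GbE line cards needs so much fabric bandwidth. The sketch below uses the 48-port 10GbE figure from the text; the number of line cards per chassis is an assumed illustration value.

```python
# Rough sketch: fabric capacity needed for full wire-speed forwarding.
# The 48 x 10GbE line card comes from the text; the card count is assumed.

PORTS_PER_CARD = 48
PORT_GBPS = 10
LINE_CARDS = 8          # assumed chassis size for illustration

# Full duplex: each port can send and receive at line rate simultaneously,
# so the switching fabric must carry 2x the sum of the port speeds.
per_card_gbps = PORTS_PER_CARD * PORT_GBPS      # 480 Gbps per card
fabric_gbps = LINE_CARDS * per_card_gbps * 2    # 7,680 Gbps (7.68 Tbps)

print(f"Per-card wire speed: {per_card_gbps} Gbps")
print(f"Non-blocking fabric needed: {fabric_gbps} Gbps (~{fabric_gbps/1000:.2f} Tbps)")
```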

·Virtualization Technology

Data center network equipment needs to be highly manageable, secure, and reliable, so data center switches also need to support virtualization. Virtualization turns physical resources into logically manageable resources, breaking down the barriers between physical structures. With virtualization, multiple network devices can be managed as a single unit, or services on a single device can be completely isolated from one another; this can reduce data center management costs by about 40% and increase IT utilization by roughly 25%.
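
As a concept sketch only (the class and device names are made up and do not reflect any real switch API), the snippet below shows the two directions virtualization takes: several physical switches managed as one logical device, and one physical switch partitioned into isolated logical contexts.

```python
# Concept sketch only: "many-to-one" and "one-to-many" switch virtualization.
# Class and device names are hypothetical; no real switch API is implied.

class LogicalSwitch:
    """Several physical switches managed as one logical device (many-to-one)."""
    def __init__(self, name: str, members: list[str]):
        self.name = name
        self.members = members          # physical chassis that form the stack/fabric

class VirtualContext:
    """An isolated logical partition carved out of one physical switch (one-to-many)."""
    def __init__(self, name: str, vlans: list[int]):
        self.name = name
        self.vlans = vlans              # resources dedicated to this tenant/service

core = LogicalSwitch("dc-core", members=["chassis-1", "chassis-2"])
tenants = [VirtualContext("prod", vlans=[10, 20]), VirtualContext("dev", vlans=[30])]
print(core.members, [t.name for t in tenants])
```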

·TRILL (Transparent Interconnection of Lots of Links) Technology

For building a Layer 2 network, the original data center standard was the STP protocol, but it has some defects. STP works by blocking ports, so redundant links forward no data, which wastes bandwidth. STP also builds only one spanning tree for the whole network, and data frames must pass through the root bridge, which hurts forwarding efficiency across the network. STP is therefore no longer suitable for the expansion of very large data centers.
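
The bandwidth waste caused by STP's port blocking can be seen with a small graph example. The sketch below uses a made-up four-switch topology (not from the article): it builds a spanning tree over the links and reports how many redundant links end up blocked and carrying no traffic.

```python
# Small sketch: on a redundant topology, a spanning tree keeps only N-1 links
# active and blocks the rest, so the blocked links carry no traffic under STP.
# The four-switch topology below is a made-up illustration.

def spanning_tree(nodes, links):
    """Pick a loop-free subset of links (union-find); return (active, blocked)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    active, blocked = [], []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:
            blocked.append((a, b))      # adding this link would create a loop
        else:
            parent[ra] = rb
            active.append((a, b))
    return active, blocked

switches = ["A", "B", "C", "D"]
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
active, blocked = spanning_tree(switches, links)
print("Forwarding links:", active)
print("Blocked (wasted) links:", blocked)   # 2 of the 5 links carry no traffic
```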

TRILL was created to avoid these defects of STP and is a technology designed for data center applications. The TRILL protocol combines the plug-and-play simplicity and flexibility of Layer 2 with the routing capability of Layer 3, so the whole network can forward without loops and with little or no configuration. TRILL support is a basic characteristic of data center core switches that ordinary switches do not possess.

·FCoE (Fibre Channel over Ethernet) Technology

Traditional data centers often run a separate data network and storage network, but the new generation of data centers shows a clear trend toward network convergence, and FCoE is what makes that convergence possible. FCoE is a technology that encapsulates Fibre Channel (storage network) frames inside Ethernet frames for forwarding. This convergence has to be implemented on the data center switches themselves, and ordinary switches generally do not support FCoE.
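
As a highly simplified sketch of the encapsulation idea (the MAC addresses and Fibre Channel payload are dummy values, and the real FCoE header defined in FC-BB-5 has version/SOF/EOF fields omitted here), the snippet below wraps an FC frame inside an Ethernet frame using the FCoE EtherType 0x8906.

```python
# Highly simplified sketch of FCoE encapsulation: a Fibre Channel frame is
# carried as the payload of an Ethernet frame with EtherType 0x8906.
# Real FCoE (FC-BB-5) adds version/SOF/EOF fields that are omitted here;
# all addresses and payload bytes below are dummy illustration values.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Build a (simplified) Ethernet frame carrying an FC frame as payload."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame        # FCS and FCoE header fields omitted

fc_frame = bytes(36)                    # placeholder FC frame (dummy bytes)
frame = encapsulate_fcoe(b"\x0e\xfc\x00\x00\x00\x01",   # dummy destination MAC
                         b"\x00\x11\x22\x33\x44\x55",   # dummy source MAC
                         fc_frame)
print("Ethernet frame length:", len(frame), "bytes; EtherType 0x%04x" % FCOE_ETHERTYPE)
```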

The technologies above are the main technologies of data center core switches, and ordinary switches do not possess them. They serve the new generation of data centers and even cloud data centers, and with them data centers have developed rapidly.
