Why Are 25G Ethernet Switches Still Necessary?

It has been about five years since 25G Ethernet arrived in 2014, and in that time the 25G Ethernet market has seen its share of ups and downs. Faced with the broad adoption of 100G Ethernet and the upcoming 200G/400G connection speeds, the continued usefulness of 25G devices, such as 25G switches, has been called into question.

A Review of 25G Ethernet

25G Ethernet is one of the standards for Ethernet connectivity in the data center environment, developed by the IEEE 802.3 task force P802.3by. Before 25G Ethernet was proposed, the next speed upgrade for data centers was expected to be 40G Ethernet (using four lanes of 10G), with a path to 100G defined as ten lanes of 10G. With the 25G Ethernet standard, four 25 Gbps lanes can instead be combined to reach 100G Ethernet, which is why 25G is often said to have paved the road to 100G.


Figure 1: 25G Ethernet VS 40G Ethernet

25G Ethernet Switch

After the 25G standard was completed, matching equipment such as 25G SFP28 transceivers, DAC cables, 25G adapters, and 25GbE switches reached the market in 2016. Among these devices, the 25G Ethernet switch is the most representative. Today the 25G switch market is led mainly by branded vendors such as Dell, Cisco, Juniper, Arista, and Mellanox, and the 48-port 25G switch is the most popular type. Most 25G switches offer two 25GbE interface form factors: QSFP28, which supports 4x25 Gbps, and SFP28, which supports 1x25 Gbps. Whether used as a ToR (Top of Rack) switch or to build a spine-leaf architecture, 25GbE switches are a good choice.

Figure 2: FS 25G Ethernet Switch N8500-48B6C

Why Do We Still Need 25G Ethernet Switches?

Switch Compatibility

The majority of 25G Ethernet switches on the market are backward compatible because most of their matching optical transceivers are SFP28. SFP28 is the enhanced version of SFP+ designed for 25G signal transmission: it uses the same form factor as SFP+, but its electrical interface is upgraded to handle 25 Gbps per lane. Because the form factors match, SFP28 transceivers can plug into SFP+ ports and SFP+ transceivers can plug into SFP28 ports, and SFP28 works with existing data center fiber cabling. You can therefore greatly reduce the cost of re-architecting the data center and gain flexibility in adding bandwidth during migration, which translates into both CapEx and OpEx savings.

Port and System Density

25G technology is similar to 10G, but it delivers 2.5 times the performance, significantly reducing power and cost per gigabit. 25G Ethernet also provides higher port and system density: for example, four 25 Gb/s data streams can be combined into a 100G path over copper or fiber cable within a compact form factor. This approach saves energy and requires fewer top-of-rack (ToR) switches and cables, which ultimately cuts operational expenditure for data center operators.
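
To make the lane math concrete, here is a minimal sketch comparing how many parallel lanes (and therefore cables and ports) it takes to build a 100G path from 10G lanes versus 25G lanes. The per-lane power figures are hypothetical placeholders for illustration, not vendor specifications.

```python
# Lane-count math behind the density argument.
# Per-lane power figures are hypothetical placeholders, not vendor data.

def lanes_for_100g(lane_speed_gbps: int) -> int:
    """Number of parallel lanes (and cables) needed to aggregate 100 Gbps."""
    return -(-100 // lane_speed_gbps)  # ceiling division

for lane_speed, watts_per_lane in [(10, 1.0), (25, 1.5)]:
    lanes = lanes_for_100g(lane_speed)
    total_power = lanes * watts_per_lane
    print(f"{lane_speed}G lanes: {lanes} lanes for a 100G path, "
          f"~{total_power:.1f} W total, {total_power / 100:.3f} W per Gbps")
```

With these placeholder numbers, a 100G path needs ten 10G lanes but only four 25G lanes, which is where the cabling, ToR port, and power savings come from.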

Price and Performance

Figure 3: Price Comparison by Connection Speed

As the Crehan forecast in Figure 3 shows, 25G delivers on both fronts: price and performance. Although 25G Ethernet is priced slightly above 10G, it is much cheaper per Gbit/s of bandwidth.

In fact, 25GbE pricing is very competitive, carrying only a 30%-40% premium over 10GbE, and this premium is expected to come down over time. Achieving these pricing levels requires devices that are optimized to support 25GbE, which is why deploying 25G devices such as 25G Ethernet switches is necessary.
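
As a quick sanity check on the per-Gbit/s claim, the sketch below applies the 30%-40% premium quoted above to a hypothetical 10GbE port price (a placeholder figure, not an actual market quote) and compares the cost per Gbps.

```python
# Price-per-Gbps comparison using the 30%-40% premium from the text.
# The 10G port price is a hypothetical placeholder, not real market pricing.

price_10g_port = 100.0                   # assumed cost of one 10GbE port
for premium in (0.30, 0.40):
    price_25g_port = price_10g_port * (1 + premium)
    per_gbps_10g = price_10g_port / 10   # cost per Gbps on 10GbE
    per_gbps_25g = price_25g_port / 25   # cost per Gbps on 25GbE
    saving = 1 - per_gbps_25g / per_gbps_10g
    print(f"premium {premium:.0%}: 25GbE is {saving:.0%} cheaper per Gbps")
```

Even at the top of the quoted premium range, 25GbE works out roughly 40%-50% cheaper per Gbps than 10GbE in this simple model.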

Summary

As the trend toward higher Ethernet bandwidth continues, demand for 10G Ethernet has been declining. Until 200G/400G Ethernet matures, 25G offers unique strengths that make it a sensible choice for preparing for the upcoming migrations. In that context, 25G devices such as 25G Ethernet switches continue to play indispensable roles.

What Are the Commonalities of Switches Supporting Cumulus Linux?

As the first full-featured Linux-based network operating system (OS), Cumulus Linux has brought great possibilities and new vitality to the networking field over the past two years. Thanks to its contribution to open networking, Cumulus Linux has become one of the three leading open network OSs on the market, the other two being IP Infusion OcNOS and Pica8 PICOS. Recently, FS and Cumulus Networks announced a collaboration: FS N-series open switches will be delivered to customers with the latest Cumulus Linux OS pre-installed. Against the backdrop of this joint effort toward open networking, this post analyzes the features that open switches supporting Cumulus Linux have in common.


Figure 1: FS Collaborates with Cumulus Networks

An Overview of Cumulus Linux

Cumulus Linux is a flexible open network operating system that can be installed on various open switches, including both layer 2 and layer 3 switches. The code used to build it is free and available for users to view and edit. It allows users to automate, customize, and scale their networks using web-scale principles, much as the world's largest data center operators do. Once Cumulus Linux is installed, an open switch can act as a Linux server.


Figure 2: Cumulus Linux

Similarities of Open Switches Supporting Cumulus Linux

Backed by a broad partner ecosystem, Cumulus Linux gives customers more options and flexibility in data center networking with regard to switch type, CPU, chip type, and supported transceivers.

Switch Type

Generally, open switches that support Cumulus Linux are bare metal switches shipped with the Open Network Install Environment (ONIE). Whether you have a brite box switch such as a Cisco switch or a white box switch such as an FS switch, Cumulus Linux can be installed on it. In today's open switch market, 32-port and 48-port switches with 40G/100G speeds are the ones most commonly adopted by enterprise users. Given their high-density and agility requirements, these open switches are mostly layer 3 switches, which makes spine-leaf or overlay architectures possible.

CPU

CPUs in open switches that support Cumulus Linux usually come in three types: ARMv7, PowerPC, and x86_64. Among these, x86_64 is the most popular, adopted by most vendors, including Dell, HPE, Mellanox, and FS.

Chip Type

Figure 3: Chips of Open Switches

Currently, Broadcom and Mellanox chips play the major roles among switch chips. Mellanox silicon is usually used by Mellanox itself or by Penguin, so the Broadcom type holds the largest share of the switch chip market and is installed by most brand vendors and third-party suppliers.

Supported Transceivers

Since most open switches support high-speed transmission, the matching transceivers are QSFP28, QSFP+, and SFP28; only some 10G and 1G open switches need SFP+ and SFP transceivers. Looking at the trend, 25G Ethernet has been deployed by many enterprise users in recent years to meet growing bandwidth needs, so the 25G open switch has become a more economical and efficient choice than 1G or 10G switches. The 25G switch is also well positioned to pave the road to the upcoming 100G/400G Ethernet.

Summary

True to the agility and simplicity it advocates, Cumulus Linux brings users a genuinely economical and open network environment. With so many choices of open switch type, CPU, chip, and supported transceivers, it frees up the selection of open switches and ultimately fosters an open networking market.

DWDM Network over Long Distance Transmission

With the ever-increasing need for higher bandwidth, DWDM technology has become one of the most favored optical transport network (OTN) applications. In this post, we will present FS.COM DWDM-based network solutions over various transmission distances, along with some suggestions for deploying DWDM networks.

DWDM Networks Basics

As usual, let's review some basics of DWDM networks. In this part, we will answer two questions: what is DWDM, and what are the components of a DWDM network?

DWDM Technology

Figure 1: DWDM Networks

DWDM (Dense Wavelength Division Multiplexing) is an extension of optical networking. It combines data signals from different sources onto a single optical fiber pair, with each signal carried simultaneously on its own separate light wavelength. With DWDM, up to 160 separate wavelengths, or channels of data, spaced 0.8 nm or 0.4 nm apart (the 100 GHz or 50 GHz grid), can be transmitted over a single optical fiber.
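
To give a feel for where these channels sit, here is a minimal sketch that derives a few channel wavelengths from the ITU-T DWDM frequency grid anchored at 193.1 THz. It is a small illustrative calculation, not a complete channel plan.

```python
# Derive a few DWDM channel wavelengths from the ITU-T frequency grid.
# Channels sit on a fixed grid anchored at 193.1 THz; wavelength = c / f.

C = 299_792_458.0  # speed of light, m/s

def channel_wavelength_nm(n: int, spacing_ghz: float = 100.0) -> float:
    """Wavelength in nm of the channel n grid steps from the 193.1 THz anchor."""
    freq_ghz = 193_100 + n * spacing_ghz
    return C / (freq_ghz * 1e9) * 1e9

for n in range(-2, 3):
    print(f"offset {n:+d} x 100 GHz -> {channel_wavelength_nm(n):.2f} nm")
```

Adjacent 100 GHz channels land roughly 0.8 nm apart around 1550 nm; halving the spacing to 50 GHz gives roughly 0.4 nm and doubles the number of channels, which is where the 0.8/0.4 nm figures above come from.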

DWDM Networks Components

Conventionally, the four kinds of devices listed below are commonly used in DWDM networks:

  • Optical transmitters/receivers
  • DWDM mux/demux filters
  • Optical add/drop multiplexers (OADMs)
  • Optical amplifiers and transponders (wavelength converters)

DWDM Networks Over Long Distance Transmission Solutions

Scenario 1: 40 km Transmission

Figure 2: 40km DWDM Network

For this case, 80km DWDM SFP+ modules and 40-channel DWDM Mux/Demux units are recommended. Since the 80km DWDM SFP+ modules can support 10G transmission over 40 km on their own, no additional device is needed in this scenario.

Scenario 2: 80 km Transmission

Figure 3: 80km DWDM Network

For this 80 km DWDM network, we still use 80km DWDM SFP+ modules and 40-channel DWDM Mux/Demux units. However, the light from the 80km DWDM SFP+ modules may not survive such a long span on its own, since optical loss accumulates during transmission. In this case, a pre-amplifier (PA) is usually deployed in front of location A and location B to improve receiver sensitivity and extend the DWDM transmission distance. Meanwhile, a dispersion compensation module (DCM) can be added to the link to handle the accumulated chromatic dispersion without dropping and regenerating the wavelengths. The diagram above shows how this 80km DWDM network is deployed.

Scenario 3: 100 km Transmission

Figure 4: 100km DWDM Network

In this scenario, the devices used in scenario 2 are still required. Since the transmission distance is longer, the optical power arriving at the far end drops accordingly, so a booster EDFA (BA) is also needed to amplify the signal launched by the 80km DWDM SFP+ modules.

By the way, if you want to extend the DWDM transmission distance further, you can read this post for solutions: Extend DWDM Network Transmission Distance With Multi-Service Transport Platform.

Factors to Consider in Deploying DWDM Networks

1. Being compatible with the existing fiber plant. Some types of older fiber are not suitable for DWDM use. Currently, standard single-mode fiber (G.652) accounts for the majority of installed fiber and supports DWDM in the metropolitan area.

2. Having an overall migration and provisioning strategy. Because DWDM is capable of supporting massive growth in bandwidth demands over time without forklift upgrades, it represents a long-term investment. Your deployment should allow for flexible addition of nodes, such as OADMs, to meet the changing demands of customers and applications.

3. Network management tools. A comprehensive network management tool is needed for provisioning, performance monitoring, fault identification and isolation, and remedial action. Such a tool should be standards-based (SNMP, for example) and able to interoperate with the existing operating system. For example, the FMT DWDM solutions from FS.COM support various kinds of network management, including an NMU line card, online monitoring, a simple management tool, and SNMP.

4. Interoperability issues. Because DWDM uses specific wavelengths for transmission, the DWDM wavelengths used must be the same on either end of any given connection. Moreover, other interoperability issues also need to be considered, including power levels, inter- and intra-channel isolation, PMD (polarization mode dispersion) tolerances, and fiber types. All these contribute to the challenges of transmission between different systems at Layer 1.

5. Strategy for protection and restoration. There can be hard failures (equipment failures, such as a laser or photodetector, and fiber breaks) and soft failures such as signal degradation (for example, an unacceptable BER). Therefore, you need a protection strategy when deploying a DWDM network.

6. Optical power budget or link loss budget. Since optical signal loss accumulates over a long-distance transmission, it is critical to work out a link loss budget in advance, as illustrated in the sketch below.
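
As a rough example of such a budget, the sketch below totals the span losses and compares them with a module's power budget. All of the figures (fiber attenuation, connector and mux/demux losses, transmitter power, receiver sensitivity) are hypothetical placeholders, not the datasheet values of any particular 80km DWDM SFP+ module.

```python
# Rough link loss budget sketch. All figures are hypothetical placeholders,
# not the datasheet values of any specific DWDM module.

span_km = 80
fiber_loss_db_per_km = 0.25      # assumed attenuation near 1550 nm
connector_loss_db = 0.5 * 2      # assumed loss, one connector pair per end
mux_demux_loss_db = 3.5 * 2      # assumed insertion loss of mux + demux
safety_margin_db = 3.0           # assumed aging / repair margin

total_loss_db = (span_km * fiber_loss_db_per_km
                 + connector_loss_db + mux_demux_loss_db + safety_margin_db)

tx_power_dbm = 0.0               # assumed transmitter launch power
rx_sensitivity_dbm = -24.0       # assumed receiver sensitivity
power_budget_db = tx_power_dbm - rx_sensitivity_dbm

print(f"total link loss: {total_loss_db:.1f} dB, budget: {power_budget_db:.1f} dB")
if total_loss_db > power_budget_db:
    print("budget exceeded -> add amplification (pre-amp and/or booster)")
else:
    print(f"remaining margin: {power_budget_db - total_loss_db:.1f} dB")
```

With these placeholder numbers the 80 km span overshoots the budget, which is consistent with why scenario 2 above adds a pre-amplifier to the link.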

Summary

Bringing great scalability and flexibility to fiber networks, DWDM network solutions clearly offer plenty of strengths and have proven to be future-proof. In this post, we have described DWDM-based networks over long transmission distances and shared some tips for deploying a DWDM network for your reference.

Open Switch—One Contributor to Open Source Network

With ever-higher demands for network agility and scalability, traditional networking is no longer satisfying, and the open source network has become an urgent need. To meet this new trend, here comes the open switch, a great contributor to open networking.

What Is Open Switch?

Open switches are switches in which the hardware and software are separate components that can be changed independently of each other, which gives you more flexibility to tailor your own network switch. Conventionally, open source switches on the market can be classified into bare metal switches, white box switches, and brite box switches.


Figure 1: Open Switch

Open Switch Hardware

Open hardware means that the hardware of an open switch can support multiple operating systems (OSs). This is in contrast to closed switches, in which hardware and software are always purchased together: if you buy a Juniper EX or MX you also buy JUNOS, and if you buy a Cisco Catalyst switch you buy IOS. With open switches, things are different; whichever type of open source switch you use, it can support many operating systems rather than a single proprietary one. The hardware manufacturers of open switches are primarily Taiwanese, including Accton, Quanta QCT, Alpha Networks, and Delta Computer, and these same companies are original design manufacturers (ODMs) for many of the mainstream switch vendors.


Figure 2: Open Hardware

Open Switch Software

Open software means that an OS can run on multiple hardware configurations. As mentioned before, you don't need to buy the OS from the original brand of your switch hardware: if you have Cumulus Linux, you can buy a layer 3 switch without a brand label and the two will still work well together. In the past, most people had no choice but to use switches that integrated the OS and hardware of branded suppliers. Now, with open switch software, the range of choices and the economic efficiency are greatly expanded and improved. Generally, there are three popular open network OSs on the market: Cumulus Linux, IP Infusion OcNOS, and Pica8 PICOS.


Figure 3: Cumulus Linux Software

Why Choose Open Switch?

  • With an open source switch, you gain more flexibility and options. There is no need to configure your switch as in the past or to wait for vendors to release new software or hardware.
  • It brings the open source network to operators, enterprises, third-party vendors, and network users, accelerates the innovation of new network services and functions, and takes users closer to SDN (software-defined networking) and NFV (network functions virtualization).
  • Network simplicity and reliability are improved through automated, centralized network device management, unified deployment strategies, and fewer configuration errors.
  • Network flexibility and scalability are greatly increased, which also saves cost and time for IT staff and enterprises.

Summary

In this post, we have explored the open switch. From the introduction to its hardware, software, and benefits, we can see why the open switch has become a great facilitator of open networking.

An Overview of EVPN and LNV

With assorted network applications and protocols proliferating, the technologies and solutions for delivering network virtualization have been greatly enriched in recent years. Among them, VXLAN (Virtual Extensible LAN) is the key network virtualization technology: it enables layer 2 segments to be extended over an IP core (the underlay). The initial definition of VXLAN (RFC 7348) relied only on a flood-and-learn approach for MAC address learning. Now, MAC learning can instead be handled by a controller or by a technology such as EVPN or LNV in Cumulus Linux. In this post, we will explore these two techniques: LNV and EVPN.


Figure 1: VXLAN

What Is EVPN

EVPN, short for Ethernet VPN, is widely regarded as the unified control plane solution for controller-less VXLAN, allowing VXLANs to be built and deployed at scale. EVPN relies on multi-protocol BGP (MP-BGP) to transport both layer 2 MAC and layer 3 IP information at the same time, enabling a separation between the data plane and the control plane. With the combined set of MAC and IP information available for forwarding decisions, optimized routing and switching within the network becomes feasible, and the need for flood-and-learn behavior is minimized or even eliminated.
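
To make the "MAC plus IP in one control plane" point more tangible, here is a toy Python sketch of how a VTEP could populate its forwarding state from received EVPN type-2 (MAC/IP advertisement) routes instead of relying on flood-and-learn. The route fields shown are a simplification for illustration, not the full BGP EVPN NLRI format.

```python
# Toy model of EVPN type-2 (MAC/IP advertisement) routes. The fields are a
# simplification for illustration, not the full BGP EVPN NLRI encoding.
from dataclasses import dataclass

@dataclass
class Type2Route:
    vni: int          # VXLAN network identifier
    mac: str          # host MAC address
    ip: str           # host IP address
    vtep: str         # IP of the remote VTEP that advertised the host

def build_tables(routes):
    """Build MAC and ARP-style tables from received type-2 routes."""
    mac_table, ip_table = {}, {}
    for r in routes:
        mac_table[(r.vni, r.mac)] = r.vtep   # layer 2 forwarding entry
        ip_table[r.ip] = (r.mac, r.vtep)     # layer 3 / ARP suppression entry
    return mac_table, ip_table

routes = [
    Type2Route(vni=10, mac="00:00:5e:00:53:01", ip="10.0.10.1", vtep="10.2.1.1"),
    Type2Route(vni=10, mac="00:00:5e:00:53:02", ip="10.0.10.2", vtep="10.2.1.2"),
]
mac_table, ip_table = build_tables(routes)
# With both tables pre-populated by BGP, a leaf can forward (and even answer
# ARP locally) without flooding unknown unicast traffic to learn addresses.
print(mac_table)
print(ip_table)
```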

What Is LNV

LNV is short for lightweight network virtualization. It is a technique for deploying VXLANs on bare metal switches without a central controller; the VXLAN service and registration daemons run on Cumulus Linux itself. The data path between bridge entities is established on top of a layer 3 fabric by means of a simple service node coupled with traditional MAC address learning.

The Relationship Between EVPN and LNV

From the overviews of EVPN and LNV above, it is easy to see that both technologies are applications of VXLAN. LNV can be used to deploy VXLAN without an external controller or software suite on bare-metal layer 2/3 switches running the Cumulus Linux network operating system (NOS). EVPN, by contrast, is a standards-based control plane for VXLAN that can be used on ordinary bare-metal devices such as network switches and routers. Typically, you cannot apply LNV and EVPN at the same time.

Apart from that, EVPN and LNV are also deployed differently. Below, we outline a configuration model for each to help you visualize the difference.

EVPN Configuration Case

 


Figure 2: EVPN

In the EVPN-VXLAN network segments shown in Figure 2 (Before), hosts A and B need to exchange traffic. When host A sends a packet to host B, or vice versa, the packet must traverse switch A, a VXLAN tunnel, and switch B. By default, routing traffic between a VXLAN and a Layer 3 logical interface is disabled. While this functionality is disabled, the pure Layer 3 logical interface on switch A drops Layer 3 traffic from host A and VXLAN-encapsulated traffic from switch B. To prevent this interface from dropping the traffic, you can reconfigure the pure Layer 3 logical interface as a Layer 2 logical interface, as shown in Figure 2 (After). You then associate this interface with a dummy VLAN and a dummy VXLAN network identifier (VNI), and finally create an integrated routing and bridging (IRB) interface that provides Layer 3 functionality within the dummy VLAN.

LNV Configuration Case

 


Figure 3: LNV

In the figure above, the two layer 3 switches act as leaf 1 and leaf 2. They run Cumulus Linux and have been configured as bridges. Each bridge contains the physical switch port interfaces facing the servers as well as the logical VXLAN interface associated with the bridge. Once a logical VXLAN interface is created on both leaf switches, the switches become VTEPs (virtual tunnel end points). The IP address associated with each VTEP is most commonly its loopback address; in the image above, the loopback address is 10.2.1.1 for leaf 1 and 10.2.1.2 for leaf 2.

Summary

In this post, we have introduced two network virtualization techniques: EVPN and LNV. These two approaches to delivering network virtualization share some similarities but also differ in many ways. Thanks to the simplicity, agility, and scalability it brings to the network, EVPN has become a popular choice in the market.