Tag Archives: 400G Network

400ZR: Enable 400G for Next-Generation DCI

To cope with large-scale cloud services and other growing storage and processing needs, data center systems have become increasingly distributed and difficult to manage. Applications such as artificial intelligence (AI) also demand low-latency, high-bandwidth network architectures to carry the heavy machine-to-machine input/output (I/O) traffic generated between servers. To maintain acceptable performance for these applications, the maximum fiber distance between distributed data centers must be limited to about 100 km, so these data centers must be connected in distributed clusters. To deliver high-bandwidth, high-density data center interconnection under these constraints, 400G ZR came into being. In this post, we will explain what 400ZR is, how it works, and the impact it will have.

What Is 400ZR?

400ZR, or 400G ZR, is a standard that enables the transmission of multiple 400GE payloads over Data Center Interconnect (DCI) links of up to 80 km using dense wavelength division multiplexing (DWDM) and higher-order modulation. It aims to ensure an affordable, long-lived implementation based on single-carrier 400G using dual-polarization 16QAM (16-state quadrature amplitude modulation) at approximately 60 gigabaud (Gbaud). Developed by the Optical Internetworking Forum (OIF), the 400ZR project is essential to reducing the cost and complexity of high-bandwidth data center interconnects and to promoting interoperability among optical module manufacturers.
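The headline numbers can be sanity-checked with a quick calculation: a dual-polarization coherent carrier delivers baud rate × bits per symbol × 2 polarizations. The sketch below (plain Python, illustrative only) shows how roughly 60 Gbaud DP-16QAM yields enough raw capacity for a 400G payload plus overhead, and how the lower-order QPSK mode mentioned later in this post yields a 200G payload:

```python
def coherent_line_rate_gbps(baud_gbaud, bits_per_symbol, polarizations=2):
    """Raw capacity of a coherent optical carrier in Gb/s."""
    return baud_gbaud * bits_per_symbol * polarizations

# 400ZR mode: DP-16QAM (4 bits/symbol) at roughly 60 Gbaud
print(coherent_line_rate_gbps(60, 4))   # 480 Gb/s raw, carrying a 400G payload

# Lower-order QPSK (2 bits/symbol) at 64 Gbaud, for longer, lossier spans
print(coherent_line_rate_gbps(64, 2))   # 256 Gb/s raw, carrying a 200G payload
```

The gap between the raw symbol-rate capacity and the payload rate is consumed by FEC and framing overhead.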

400G ZR

Figure 1: 400G ZR Transceiver in DCI Switch or Router

How Does 400ZR Work?

400G ZR proposes a technology-driven solution for high-capacity data transmission that can be matched to a 400GE switch port. It applies advanced coherent optical technology in small, pluggable form factor modules. Although the IA (Implementation Agreement) does not specify a product form factor, the companies and groups contributing to 400ZR have targeted compact pluggables for the solution. These form factors, defined separately by Multi-Source Agreement (MSA) bodies, include QSFP-DD and OSFP, which are connectorized and plug into a compatible socket in a system platform. Because the OIF and the form factor MSAs are industry-wide organizations, compliant 400ZR solutions that come to market will also be interoperable, offering the dual benefit of simplified supply chain management and deployment.

400ZR+ for Longer-reach Optical Transmission

Like other 400G transceivers, the pluggable coherent 400ZR solution can support 400G Ethernet interconnection and multi-vendor interoperability. However, it is not suitable for next-generation metro-regional networks that need transmission over 80 km with a line capacity of 400 Gb/s. Under such circumstances, 400ZR+, or 400G ZR+ is proposed. The 400ZR+ is expected to further enhance modularity by supporting multiple different channel capacities based on coverage requirements and compatibility with installed metro optical infrastructure. With 400ZR+, both the transmission distance and line capacity could be assured.

What Influences Will 400ZR Bring About?

Although 400ZR technology is still in its infancy, once rolled out it will have a significant impact on many sectors, notably the following three: hyperscale data centers, distributed campuses and metropolitan areas, and telecommunications providers.

400ZR Helps Cloud and Hyperscale Data Centers Adapt to the Growing Demand for Higher Bandwidth

The development of DCI and 400ZR could help cloud and hyperscale data centers adapt to the growing demand for higher bandwidth on the network and deal with the exponential growth of applications such as cloud services, IoT devices, and streaming video. Over time, 400G ZR will contribute even more as applications and users continue to grow across the network.

400ZR Will Support Interconnects in Distributed Data Centers

As is mentioned above, 400ZR technology will support the necessary high-bandwidth interconnects to connect distributed data centers. With this connection, distributed data centers can communicate with each other, share data, balance workloads, provide backup, and expand data center capacity when needed.

400ZR Allows Telecommunications Companies to Backhaul Residential Traffic

The 400G ZR standard will allow telecommunications companies to backhaul residential traffic. When running at 200 Gb/s using 64 Gbaud QPSK modulation, 400ZR can extend its reach across high-loss spans. For 5G networks, 400G ZR provides mobile backhaul by aggregating multiple 25 Gb/s streams, helping to promote emerging 5G applications and markets.

400ZR+/400ZR- Will Provide Greater Convenience Based on 400ZR

In addition to the interoperable 400G mode, the 400ZR transceiver is also expected to support other modes to increase the range of addressable applications. These modes are called 400ZR+ and 400ZR-. "+" indicates that the module's power consumption exceeds the 15 W budget of the IA and of some pluggable form factors, enabling more powerful signal processing and transmission over distances of hundreds of kilometers. "-" indicates that the module supports lower-speed modes, such as 300G, 200G, and 100G, which give network operators more flexibility.

Will 400ZR Stay Popular In the Next Few Years?

According to data from LightCounting shown below, 400ZR will lead the growth of optical module sales in 2021-2024. The figure shows shipments of high-speed (100G and above) and low-speed (10G and below) DWDM modules sold on the market. Modules used in cloud or DCI applications show a clear upward trend over 2021-2024, meaning 400ZR will lead annual growth from 2021.

Source: LightCounting

In addition, with the first 100 Gbps SerDes implementations in switching chips expected in 2021, the required optics interface data rate will move to 800 Gbps within the next one to two years. The OSFP form factor has already been defined to allow an 8x 100GE interface without changing the transceiver definition. In parallel, coherent optics on the line side will transition to 128 Gbaud 16QAM within a similar time frame, making it easy to migrate from today's 400ZR to next-generation 800ZR. 400ZR is therefore crucial to both current and future network development.

Article Source

https://community.fs.com/blog/400zr-enable-400g-for-next-generation-dci.html

Related Articles

https://community.fs.com/blog/400g-qsfp-dd-transceiver-types-overview.html

https://community.fs.com/blog/400g-osfp-transceiver-types-overview.html

400G Data Center Deployment Challenges and Solutions

As technology advances, industry applications such as video streaming, AI, and data analytics are pushing for ever higher data speeds and massive bandwidth. 400G technology, with its next-gen optical transceivers, brings a new user experience with innovative services that allow faster processing of more data at a time.

Large data centers and enterprises struggling with data traffic issues embrace 400G solutions to improve operational workflows and ensure better economics. Below is a quick overview of the rise of 400G, the challenges of deploying this technology, and the possible solutions.

The Rise of 400G Data Centers

The rapid transition to 400G in several data centers is changing how networks are designed and built. Key drivers of this next-gen technology are cloud computing, video streaming, AI, and 5G, all of which demand high-speed, high-bandwidth, and highly scalable solutions. The large amounts of data generated by smart devices, the Internet of Things, social media, and various as-a-service models are also accelerating this 400G transformation.

The major benefits of upgrading to a 400G data center are the increased data capacity and network capabilities required for high-end deployments. This technology also delivers more power, efficiency, speed, and cost savings. A single 400G port is considerably cheaper than four individual 100G ports. Similarly, the increased data speeds allow for convenient scale-up and scale-out by providing high-density, reliable, and low-cost-per-bit deployments.

How 400G Works

Before we look at the deployment challenges and solutions, let's first understand how 400G works. First, the actual line rate or data transmission speed of a 400G Ethernet link is 425 Gbps. The extra 25 Gbps carries forward error correction (FEC) overhead, which detects and corrects transmission errors.
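The 425 Gbps figure falls out of the 400GbE encoding chain defined in IEEE 802.3bs: 256B/257B transcoding followed by RS(544,514) "KP4" FEC. A quick check, using exact fractions so the arithmetic is unambiguous:

```python
from fractions import Fraction

payload = Fraction(400)                    # 400GbE MAC rate in Gb/s
encoded = payload * Fraction(257, 256)     # 256B/257B transcoding
line_rate = encoded * Fraction(544, 514)   # RS(544,514) FEC overhead

print(float(line_rate))       # 425.0 Gb/s on the wire
print(float(line_rate / 8))   # 53.125 Gb/s per lane across 8 lanes
```

The per-lane figure matches the 8 x 50G PAM4 electrical lanes discussed below.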

400G adopts 4-level pulse amplitude modulation (PAM4), which carries two bits per symbol and thus doubles the data rate of Non-Return to Zero (NRZ) signaling at the same baud rate. With PAM4, operators can implement four lanes of 100G or eight lanes of 50G for different form factors (i.e., OSFP and QSFP-DD). This optical transceiver architecture supports transmission of up to 400 Gbit/s over either parallel fibers or multiple wavelengths.
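The NRZ-to-PAM4 step is easy to picture in code. The sketch below (illustrative only, using one common Gray mapping) packs two bits into each of four amplitude levels, so the same symbol rate carries twice the bits of NRZ:

```python
# Gray-coded PAM4 mapping: adjacent levels differ by one bit,
# so a single misread level causes at most a single bit error.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Pack a flat bit sequence into PAM4 symbols, 2 bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 needs an even number of bits"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# 8 bits fit in 4 PAM4 symbol periods; NRZ would need 8 periods.
print(pam4_encode([0, 0, 1, 0, 1, 1, 0, 1]))   # [-3, 3, 1, -1]
```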

PAM4 Signaling

Deployment Challenges & Solutions

Interoperability Between Devices

The PAM4 signaling introduced with 400G deployments creates interoperability issues between the 400G ports and legacy networking gear. That is, the existing NRZ switch ports and transceivers aren’t interoperable with PAM4. This challenge is widely experienced when deploying network breakout connections between servers, storage, and other appliances in the network.

A 400G transceiver transmits and receives over 4 lanes of 100G or 8 lanes of 50G, with PAM4 signaling on both the electrical and optical interfaces. Legacy 100G transceivers, however, are designed around 4 lanes of 25G NRZ signaling on both sides. The two are simply not interoperable, which calls for a transceiver-based solution.

One such solution is a 100G transceiver that supports 100G PAM4 on the optical side and 4x 25G NRZ on the electrical side, performing the re-timing between NRZ and PAM4 modulation in its internal gearbox. Examples include the QSFP28 DR and FR, which are fully interoperable with legacy 100G network gear, and the QSFP-DD DR4 & DR4+ breakout transceivers. The latter are parallel single-mode modules that accept an MPO-12 connector with breakouts to LC connectors to interface with FR or DR transceivers.
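The interoperability rule above boils down to matching per-lane rate and modulation on the optical side. A minimal sketch (the module entries are simplified, illustrative descriptions, not vendor specs):

```python
# (lanes, Gb/s per optical lane, optical modulation) -- illustrative values
MODULES = {
    "QSFP-DD 400G DR4":      (4, 100, "PAM4"),
    "QSFP28 100G DR (PAM4)": (1, 100, "PAM4"),
    "QSFP28 100G (legacy)":  (4, 25,  "NRZ"),
}

def lanes_match(a, b):
    """A breakout lane links up only if per-lane rate and modulation agree."""
    _, rate_a, mod_a = MODULES[a]
    _, rate_b, mod_b = MODULES[b]
    return rate_a == rate_b and mod_a == mod_b

print(lanes_match("QSFP-DD 400G DR4", "QSFP28 100G DR (PAM4)"))  # True
print(lanes_match("QSFP-DD 400G DR4", "QSFP28 100G (legacy)"))   # False
```

This is why a DR4 port breaks out cleanly to four 100G PAM4 modules but not to legacy 4x 25G NRZ gear without a gearbox in between.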

NRZ & PAM4: Interoperability Between Devices

Excessive Link Flaps

Link flaps are faults that occur during data transmission due to a series of errors or failures on the optical connection. When this occurs, both transceivers must perform auto-negotiation and link training (AN-LT) before data can flow again. If link flaps frequently occur, i.e., several times per minute, it can negatively affect throughput.

And while link flaps are rare with mature optical technologies, they still occur, often caused by configuration errors, a bad cable, or defective transceivers. With 400GbE, link flaps may occur due to heat and design issues with transceiver modules or switches. Careful selection of transceivers, switches, and cables can help solve this link-flap problem.

Transceiver Reliability

Some optical transceiver manufacturers struggle to stay within the devices' power budget. The result is heat issues, which cause fiber alignment problems, packet loss, and optical distortion. Transceiver reliability problems often occur when old QSFP form factors designed for 40GbE are used at 400GbE.

Similar challenges are also witnessed with newer modules used in 400GbE systems, such as the QSFP-DD and CFP8 form factors. A solution is to stress test transceivers before deploying them in highly demanding environments. It’s also advisable to prioritize transceiver design during the selection process.

Deploying 400G in Your Data Center

Keeping pace with the ever-increasing number of devices, users, and applications in a network calls for a faster, high-capacity, and more scalable data infrastructure. 400G meets these demands and is the optimal solution for data centers and large enterprises facing network capacity and efficiency issues. The successful deployment of 400G technology in your data center or organization depends on how well you have articulated your data and networking needs.

Upgrading your network infrastructure can help relieve bottlenecks from speed and bandwidth challenges to cost constraints. However, making the most of your network upgrades depends on the deployment procedures and processes. This could mean solving the common challenges and seeking help whenever necessary.

A rule of thumb is to enlist the professional help of an IT expert who will guide you through the 400G upgrade process. The IT expert will help you choose the best transceivers, cables, routers, and switches to use and even conduct a thorough risk analysis on your entire network. That way, you’ll upgrade appropriately based on your network needs and client demands.
Article Source: 400G Data Center Deployment Challenges and Solutions
Related Articles:

NRZ vs. PAM4 Modulation Techniques
400G Multimode Fiber: 400G SR4.2 vs 400G SR8
Importance of FEC for 400G

Silicon Photonics: Next Revolution for 400G Data Center


With the explosion of 5G applications and cloud services, traditional technologies are hitting fundamental limits of power consumption and transmission capacity, driving continual development in optical and silicon technology. Silicon photonics is an evolutionary technology that enables the major improvements in density, performance, and economics required by 400G data center applications, and it drives next-generation optical communication networks. What is silicon photonics? How does it promote the revolution of 400G applications in data centers? Keep reading to find out.

What Is Silicon Photonics Technology?

Silicon photonics (SiPh) is a material platform from which photonic integrated circuits (PICs) can be made. It uses silicon as the main fabrication element. PICs consume less power and generate less heat than conventional electronic circuits, offering the promise of energy-efficient bandwidth scaling.

It drives the miniaturization and integration of complex optical subsystems into silicon photonics chips, dramatically improving performance, footprint, and power efficiency.

Conventional Optics vs Silicon Photonics Optics

Here is a technology comparison between conventional optics and silicon photonics optics, taking a QSFP-DD DR4 400G module and a QDD DR4 400G Si module as examples.

The difference between a standard 400GBASE-DR4 QSFP-DD PAM4 optical transceiver and its silicon photonic counterpart lies in the 400G silicon photonic chip, which breaks the bottleneck of mega-scale data exchange and offers great advantages in low power consumption, small footprint, relatively low cost, and ease of large-volume integration.

Silicon photonic integrated circuits provide an ideal solution to realize the monolithic integration of photonic chips and electronic chips. Adopting silicon photonic design, a QDD-DR4-400G-Si module combines high-density & low-consumption, which largely reduces the cost of optical modules, thereby saving data center construction and operating expenses.

Why Adopt Silicon Photonics in Data Centers?

To Solve I/O Bottlenecks

The world's growing data demand is exhausting the bandwidth and computing resources in data centers. Chips keep getting faster to meet the growing demand for data consumption, but the optical signal arriving from the fiber must still be converted into an electrical signal to communicate with a chip sitting on a board deep in the data center. Because that electrical signal must travel some distance from the optical transceiver, where it was converted from light, to the processing and routing electronics, we have reached a point where the chip can process information faster than the electrical signal can get in and out of it.

To Reduce Power Consumption

Heat and power dissipation are enormous challenges for the computing industry, and power consumption translates directly into heat. So what drives power dissipation? Mainly, data transmission. Data centers are estimated to consume 200 TWh each year, more than the national energy consumption of some countries. That is why some of the world's largest data centers, including those of Amazon, Google, and Microsoft, are sited in cold-climate regions where the weather assists cooling.

To Save Operation Budget

At present, a typical ultra-large data center has more than 100,000 servers and over 50,000 switches. Connecting them requires more than 1 million optical modules at a total cost of around US$150-250 million, which accounts for 60% of the cost of the data center network, exceeding the combined cost of equipment such as switches, NICs, and cables. The high cost forces the industry to reduce the unit price of optical modules through technological upgrades, and fiber optic modules adopting silicon photonics technology are expected to solve this problem.
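The cost figure above is simple arithmetic over the module count and unit price range the article cites:

```python
modules = 1_000_000            # optical modules in an ultra-large data center
unit_price_usd = (150, 250)    # per-module price range cited above, in US$

total = tuple(modules * p for p in unit_price_usd)
print(total)   # (150000000, 250000000) -> US$150M to US$250M
```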

Silicon Photonics Applications in Communication

Silicon photonics has proven to be a compelling platform for enabling next-generation coherent optical communications and intra-data center interconnects. This technology can support a wide range of applications, from short-reach interconnects to long-haul communications, making a great contribution to next-generation networks.

  • 100G/400G Datacom: data centers and campus applications (to 10km)
  • Telecom: metro and long-haul applications (to 100 and 400 km)
  • Ultra short-reach optical interconnects and switches within routers, computers, HPC
  • Functional passive optical elements including AWGs, optical filters, couplers, and splitters
  • 400G transceiver products including embedded 400G optical modules, 400G DAC breakout cables, transmitters/receivers, active optical cables (AOCs), as well as 400G DACs.

Now & Future of Silicon Photonics

Yole predicted that the silicon optical module market would grow from approximately US$455 million in 2018 to around US$4 billion in 2024, a CAGR of 44.5%. According to LightCounting, the overall data communication high-speed optical module market will reach US$6.5 billion by 2024, with silicon optical modules accounting for 60% of it (up from 3.3% in 2020).

Intel, one of the leading silicon photonics companies, holds a 60% market share in silicon photonic transceivers for datacom. Intel has already shipped more than 3 million units of its 100G pluggable transceivers in just a few years and continues to expand its silicon photonics product offerings. Cisco, for its part, acquired Acacia for US$2.6 billion and Luxtera for US$660 million. Other companies, such as Inphi and NeoPhotonics, are proposing silicon photonic transceivers backed by strong technologies.

Original Source: Silicon Photonics: Next Revolution for 400G Data Center

400G Optics in Hyperscale Data Centers

Since their advent, data centers have been striving to address rising bandwidth requirements. The stats reveal that 3.04 exabytes of data are generated daily. For a hyperscale data center, the bandwidth requirements are massive, as the relevant applications are scalable and demand a preemptive approach. The introduction of 400G has taken data transfer speed to a whole new level and brought significant convenience in addressing various areas of concern. In this article, we will dig a little deeper and try to answer the following questions:

  • What are the driving factors of 400G development?
  • What are the reasons behind the use of 400G optics in hyperscale data centers?
  • What are the trends in 400G devices in large-scale data centers?

What Are the Driving Factors For 400G Development?

The driving factors for 400G development fall into two groups, video streaming services and video conferencing services, both of which require very high data transfer speeds to function smoothly across the globe.

Video Streaming Services

Video streaming services were already straining bandwidth. The COVID-19 pandemic then forced a large population to stay and work from home, which automatically increased the usage of video streaming platforms. The stats show that a medium-quality Netflix stream consumes 0.8 GB per hour, multiplied across more than 209 million subscribers. As traveling costs came down, the savings went toward higher-quality streams such as HD and 4K, and what stood at 0.8 GB per hour rose to 3 and even 7 GB per hour. This drove the need for 400G development.

Video Conferencing Services

As COVID-19 made working from home the new norm, video conferencing services also saw a major boost. By 2021, 20.56 million people were reported to be working from home in the US alone. As video conferencing took center stage, Zoom, which consumes 500 MB per hour, saw a huge increase in its user base. This also puts great pressure on data transfer needs.

What Makes 400G Optics the Ideal Choice For Hyperscale Data Centers?

Significant Decrease in Energy and Carbon Footprint

To put it simply, 400G quadruples the data transfer speed. Facilitating 400GbE with a single 400G port rather than a 4 x 100G breakout solution reduces the cost of the 100G ports used as breakouts, and a single node at the output minimizes the risk of failure as well as lowering the energy requirement. This brings down the ESG footprint, which has become a KPI for organizations going forward.

Reduced Operational Cost

As mentioned earlier, a 400G solution requires a single 400G port, whereas meeting the same requirement with 100G hardware requires four 100G ports. On a router, four ports cost considerably more than a single port that can carry the same traffic, and the same is true of power. Together, these two factors bring operational cost down considerably.

400G Optics
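Using hypothetical relative prices (illustrative assumptions, not vendor figures), the port-count argument looks like this:

```python
# Hypothetical relative costs, for illustration only
cost_100g_port = 1.0     # one 100G router port
cost_400g_port = 2.5     # one 400G router port, assumed < 4x the 100G price

cost_4x100 = 4 * cost_100g_port   # 4.0 units for 400G of capacity
cost_1x400 = cost_400g_port       # 2.5 units for the same capacity

savings = 1 - cost_1x400 / cost_4x100
print(f"{savings:.0%}")           # 38% cheaper under these assumptions
```

The conclusion holds whenever a 400G port costs less than four 100G ports; the exact savings depend on real list prices and power draw.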

Trends of 400G Optics in Large-Scale Data Centers—Quick Adoption

The introduction of 400G solutions in large-scale data centers has reshaped the entire sector, owing to the enormous increase in data transfer speeds. According to research, 400G is expected to replace 100G and 200G deployments far faster than its predecessors did. Since its introduction, more and more vendors have been upgrading to network devices that support 400G. The following image depicts the technology adoption rate.

Trends of 400G Optics

Challenges Ahead

Lack of Advancement in the 400G Optical Transceivers Sector

Although the shift towards such network devices is rapid, there are a number of implementation challenges, because not only the devices but also the infrastructure needs to be upgraded. Vendors are trying to stay ahead of the curve, but the development cost and maturity of 400G optical transceivers are not yet at the expected benchmark, and the same is true of their price and reliability. As optical transceivers are a critical element, this is a major challenge in deploying 400G solutions.

Latency Measurement

In addition, the introduction of this solution has also made network testing and monitoring more important than ever. Latency measurement has always been a key indicator when evaluating performance. Data throughput combined with jitter and frame loss also comes as a major concern in this regard.

Investment in Network Layers

Lastly, the creation of a plug-and-play environment for this solution also needs to be more realistic. This will require a greater investment in the physical, higher level, and network-IP components layers.

Conclusion

Rapid technological advancements have led to concepts like the Internet of Things. These implementations require greater data transfer speeds. That, combined with the world going to remote work, has exponentially increased the traffic. Hyperscale data centers were already feeling the pressure and the introduction of 400G data centers is a step in the right direction. It is a preemptive approach to address the growing global population and the increasing number of internet users.

Article Source: 400G Optics in Hyperscale Data Centers

Related Articles:

How Many 400G Transceiver Types Are in the Market?

Global Optical Transceiver Market: Striding to High-Speed 400G Transceivers

400G QSFP Transceiver Types and Fiber Connections

400G QSFP has become one of the most popular form factors in next-generation networks, and different types of modules have appeared in the 400G optical transceiver market. What are the 400G QSFP-DD transceiver types? What fiber cables can be used with these 400G optical modules? And what are the answers to frequently asked questions about 400G QSFP? This post will cover all of these thoroughly.

400G QSFP Transceiver Types

400G QSFP transceivers are introduced in the following table according to the two transmission types they support: over multimode fiber and over single-mode fiber.

| Transmission Type | QSFP-DD Product | Reach | Optical Connector | Wavelength | Optical Modulation | Protocol |
|---|---|---|---|---|---|---|
| Multimode fiber | 400G QSFP-DD SR8 | up to 100m over OM4/OM5; up to 70m over OM3 | MTP-16/MPO-16 | 850nm | 50G PAM4 | IEEE 802.3cm / IEEE 802.3cd |
| Single-mode fiber | 400G QSFP-DD DR4 | up to 500m over parallel SMF | MTP-12/MPO-12 | 1310nm | 100G PAM4 | IEEE 802.3bs |
| Single-mode fiber | 400G QSFP-DD XDR4/DR4+ | up to 2km over parallel SMF | MTP-12/MPO-12 | 1310nm | 100G PAM4 | / |
| Single-mode fiber | 400G QSFP-DD FR4 | up to 2km over duplex SMF | LC | CWDM4 wavelengths | 100G PAM4 | 100G Lambda MSA |
| Single-mode fiber | 400G QSFP-DD 2FR4 | up to 2km over duplex SMF | CS | CWDM4 wavelengths | 50G PAM4 | IEEE 802.3bs |
| Single-mode fiber | 400G QSFP-DD LR4 | up to 10km over duplex SMF | LC | CWDM4 wavelengths | 100G PAM4 | 100G Lambda MSA |
| Single-mode fiber | 400G QSFP-DD LR8 | up to 10km over duplex SMF | LC | CWDM4 wavelengths | 50G PAM4 | IEEE 802.3bs |
| Single-mode fiber | 400G QSFP-DD ER8 | up to 40km over duplex SMF | LC | 1310nm | 50G PAM4 | IEEE 802.3cn |

Fiber Connections for 400G QSFP Transceivers

QSFP 400G SR8

  • A QSFP-DD SR8 interops with another QSFP-DD SR8 over an MTP-16/MPO-16 cable; this direct connection is the most popular.
  • 400G QSFP-DD SR8 breaks out to 2× 200G SR4.
  • QSFP-DD SR8 interops with 8× 50G SR over MPO-16 to 8× LC duplex fiber cables.

QSFP 400G DR4

  • QSFP-DD DR4 interops with QSFP-DD DR4 over an MPO-12 trunk cable.
  • 400G QSFP-DD DR4 interops with 4× 100G DR over an MPO-12 to 4× LC duplex breakout cable.

QSFP-DD DR4 to 4x 100G Breakout Connection

QSFP 400G XDR4/DR4+

  • QSFP-DD XDR4/DR4+ interops with QSFP-DD XDR4/DR4+ over an MPO-12 trunk cable.
  • 400G QSFP-DD XDR4 interops with 4× 100G FR modules over an MPO-12 to 4× duplex LC cable.

QSFP 400G FR4

QSFP-DD FR4 interops with QSFP-DD FR4 over a duplex LC cable.

QSFP-DD FR4 Connection

QSFP 400G 2FR4

QSFP-DD 2FR4 interops with 2× 200G FR4 over a 2× CS to 2× LC duplex cable.

QSFP-DD 2FR4 Connection

QSFP 400G LR4

QSFP-DD LR4 interops with QSFP-DD LR4 over an LC duplex cable.

QSFP-DD LR4 Connection

QSFP 400G LR8

QSFP-DD LR8 interops with QSFP-DD LR8 over an LC duplex cable.

QSFP-DD LR8 Connection

QSFP 400G ER8

QSFP-DD ER8 interops with QSFP-DD ER8 over an LC duplex cable.

QSFP-DD ER8 Connection

400G QSFP Transceivers: Q&A

Q: What do “SR8”, “DR4”, “XDR4”, “FR4”, “LR4”, and “LR8” mean in QSFP 400G modules?

A: “SR” refers to short reach, and “8” implies there are 8 optical channels. “DR” refers to 500m reach using single-mode fiber, and “4” implies there are 4 optical channels. “XDR4” is short for “eXtended reach DR4”. “FR” refers to 2km reach using single-mode fiber, and “LR” refers to 10km reach using single-mode fiber.
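The naming convention is regular enough to decode mechanically. A small sketch (the reach strings are simplified summaries of the convention described above):

```python
# Reach class -> nominal reach, per the naming convention (simplified)
REACH = {
    "SR": "short reach (multimode fiber)",
    "DR": "500 m (single-mode fiber)",
    "XDR": "2 km (single-mode fiber)",
    "FR": "2 km (single-mode fiber)",
    "LR": "10 km (single-mode fiber)",
    "ER": "40 km (single-mode fiber)",
}

def decode_suffix(name):
    """Split e.g. 'DR4' into its reach class and optical lane count."""
    prefix = name.rstrip("0123456789")
    lanes = int(name[len(prefix):])
    return REACH[prefix], lanes

print(decode_suffix("DR4"))   # ('500 m (single-mode fiber)', 4)
print(decode_suffix("SR8"))   # ('short reach (multimode fiber)', 8)
```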

Q: Can I plug a QSFP-DD transceiver module into an OSFP port?

A: No. QSFP-DD and OSFP are totally different form factors. For more information about OSFP transceivers, you can refer to the 400G OSFP Transceiver Types Overview. You can only use one kind of form factor in the corresponding system. For example, if you have a QSFP-DD 400G system, QSFP-DD transceivers and cables must be used.

Q: Can I plug a 100G QSFP28 module into a 400G QSFP-DD port?

A: Yes. A QSFP28 module can be inserted into a QSFP-DD port (without a mechanical adapter). When using a QSFP28 module in a QSFP-DD port, the port must be configured for a data rate of 100G instead of 400G.

Q: What other breakout options are possible apart from using the 400G QSFP-DD modules mentioned above?

A: 400G QSFP-DD DACs & AOCs are possible for breakout 400G connections. See 400G Direct Attach Cables (DAC & AOC) Overview for more information about 400G DACs & AOCs.

Article Source:

https://community.fs.com/blog/400g-qsfp-dd-transceiver-types-overview.html

Related Articles:

https://community.fs.com/blog/optical-transceiver-market-200g-400g.html

https://community.fs.com/news/400g-qsfp-dd-solution-for-400g-data-center-interconnect.html