
Things You Must Know: 200G vs. 400G Ethernet in Data Centers

With the rise of high-data-rate applications such as 5G and cloud computing, 200G and 400G Ethernet are drawing considerable attention in data centers. In most data center scenarios, 400G Ethernet is the more competitive choice. This post examines how 400G Ethernet outperforms 200G Ethernet in several respects.

400G Ethernet vs 200G Ethernet: More Comprehensive Standardization

In the evolution of IEEE Ethernet standards, 400G actually preceded 200G. Work on a 400G standard began in the IEEE 802.3 Working Group in 2013 and was approved in 2017 as the IEEE 802.3bs 400G Ethernet standard, while the 200G effort was proposed in 2015 and approved in 2018. The 200G single-mode specifications are largely derived from the 400G single-mode specifications, with rates halved. Driven by market demand, 400G technology and products have iterated rapidly, so the 400G standards ecosystem is more comprehensive and mature than that of 200G.

Common Use of 100G Servers Promotes More 400G Ethernet Applications

Network switch speed has always been driven by server uplink speed. Both in the past and at present, a one-to-four breakout structure is commonly used to connect switches to servers and increase switch port density, and this structure is likely to persist. The choice between 200G and 400G Ethernet therefore depends mainly on the server uplink speed in use.
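The port arithmetic behind the one-to-four structure can be sketched quickly. The function below is illustrative only (the server counts and breakout ratios are made-up examples, not vendor data): a 400G port breaks out into 4 x 100G server uplinks, while a 200G port can only offer 4 x 50G, so serving 100G uplinks from 200G ports forces a 2:1 breakout.

```python
def switch_ports_needed(servers: int, breakout: int = 4) -> int:
    """Switch ports required to attach `servers` uplinks when each
    port is broken out into `breakout` server-facing lanes."""
    return -(-servers // breakout)  # ceiling division

# 32 servers with 100G uplinks on 400G ports (4 x 100G per port):
print(switch_ports_needed(32))     # -> 8 x 400G ports
# The same servers on 200G ports (only 2 x 100G per port):
print(switch_ports_needed(32, 2))  # -> 16 x 200G ports
```

Under these assumptions, 400G ports halve the switch-port count for the same number of 100G servers, which is the density advantage the paragraph above describes.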

[Figure: How to Connect Servers in Data Centers]

According to Crehan research and forecasts, 100G server adoption has outpaced 50G since 2020, meaning most network operators are choosing 100G rather than 50G server connections, and 100G servers are expected to become mainstream over 2020-2023. In other words, with 100G servers deployed, one can skip 200G and move directly to 400G.

[Figure: 50G vs 100G Server Adoption Rates]

Optical Transceiver Market Drives 400G Ethernet

Two main factors make 400G Ethernet more popular than 200G Ethernet in the optical transceiver market: module supply and cost.

400G Optical Transceivers Gain More Market Supplies and Acceptance

Early 400G adoption largely supported the rise of 200G long-haul for aggressive DCI network builds; it made 400G feasible in metro networks and supported three times the reach of 200G wavelengths. As the technology matured, 400G transceivers became the favorite among manufacturers, and many suppliers now focus on 400G rather than 200G. For example, Senko's CS connector is specifically designed for 400G data center optimization. There is a simple economic reason: even if a 200G transceiver and a 400G transceiver cost the same, the 400G module's doubled bandwidth halves its cost and power consumption per bit. More importantly, total revenue figures across 100G, 200G, and 400G show that 400G is far ahead of 200G in the overall market.
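The per-bit argument above is simple arithmetic, sketched below with a made-up placeholder price (the point is the ratio, not the absolute numbers): at equal module price, doubling the rate halves the cost per Gb/s.

```python
def cost_per_gbps(module_price: float, rate_gbps: int) -> float:
    """Cost per Gb/s of capacity for a transceiver module."""
    return module_price / rate_gbps

price = 1000.0  # assumed equal price for the 200G and 400G module
print(cost_per_gbps(price, 200))  # -> 5.0 per Gb/s
print(cost_per_gbps(price, 400))  # -> 2.5 per Gb/s, half the 200G figure
```

The same halving applies to power per bit whenever the 400G module's wattage is no more than that of two 200G modules.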

[Figure: Total Revenue for 100G, 200G and 400G Transceivers]

According to shipment data from the top 8 suppliers gathered by Omdia, the 400G transceiver market is far more active than the 200G market, giving users more deployment options. Although all top 8 suppliers offer both 200G and 400G transceivers, 200G is limited to SR4 (100m) and FR4 (2km), while 400G offers SR8 (100m), DR4 (500m), FR4 (2km), LR4 (10km), ER8 (40km), and more. In addition, 400G products such as 400G DAC and 400G DAC breakout cables and solutions are more mature than their 200G counterparts because of their earlier release.

[Table: 200G (SR4, FR4) and 400G (SR8, SR4.2, DR4, FR4, ZR) transceiver support by supplier — Finisar, Innolight, FIT, Lumentum, Accelink, Source Photonics, AOI, Hisense]
400G Optical Modules Support More Applications at Lower Cost

Compared with 200G transceivers, 400G transceivers support more applications, including DCI and 200G use cases, and they double the traffic-carrying capacity of 100G/200G solutions. With 400G, fewer transponders are needed, lowering transport and operating costs and in turn making the 400G market even more active.

400G Ethernet Is More Suitable for Future Network Upgrades

200G optical modules come mainly in two form factors, QSFP-DD and CFP2, while 400G optical transceivers mainly use QSFP-DD and OSFP packages. Since OSFP is expected to offer a better path to 800G and higher transmission rates, 400G transceivers are better positioned for future network migration.

Conclusion

From the analysis and evidence above, 400G Ethernet is more competitive than 200G Ethernet in standardization, 100G server connectivity, the optical transceiver market, and future network upgrades. There is little reason to hesitate between the two: choosing 400G Ethernet and its products is a wise decision both now and for the long term.

Article Source

https://community.fs.com/blog/200g-vs-400g-ethernet.html

Related Articles

https://community.fs.com/blog/optical-transceiver-market-200g-400g.html

https://community.fs.com/blog/400g-etherent-market.html

Data Center Layout

Data center layout design is a challenging task requiring expertise, time, and effort. Done properly, however, the resulting data center can accommodate in-house servers and other IT equipment for years. When designing such a facility for your company or for cloud-service providers, getting everything right is crucial.

As such, data center designers should develop a thorough data center layout. A data center layout comes in handy during construction as it outlines the best possible placement of physical hardware and other resources in the center.

What Is Included in a Data Center Floor Plan?

The floor plan is an important part of the data center layout. A well-designed floor plan boosts the data center's cooling performance, simplifies installation, and reduces energy needs. Unfortunately, most data center floor plans grow through incremental deployment rather than following a central plan. A data center floor plan influences the following:

  • The power density of the data center
  • The complexity of power and cooling distribution networks
  • Achievable power density
  • Electrical power usage of the data center

Below are a few tips to consider when designing a data center floor plan:

Balance Density with Capacity

“The more, the better” isn’t an applicable phrase when designing a data center. You should remember the tradeoff between space and power in data centers and consider your options keenly. If you are thinking of a dense server, ensure that you have enough budget. Note that a dense server requires more power and advanced cooling infrastructure. Designing a good floor plan allows you to figure this out beforehand.

Consider Unique Layouts

There is no specific rule that you should use old floor layouts. Your floor design should be based on specific organizational needs. If your company is growing exponentially, your data center needs will keep changing too. As such, old layouts may not be applicable. Browse through multiple layouts and find one that perfectly suits your facility.

Think About the Future

A data center design should be based on specific organizational needs. Therefore, while you may not need to install or replace some equipment yet, you might have to do so after a few years due to changing facility needs. Simply put, your data center should accommodate company needs several years in the future. This will ease expansion.

Floor Planning Sequence

A floor or system planning sequence outlines the flow of activity that transforms the initial idea into an installation plan. The floor planning sequence involves the following five tasks:

Determining IT Parameters

The floor plan begins with a general idea that prompts the company to change or expand its IT capabilities. From this idea, the data center's capacity, growth plan, and criticality are determined. Note that these three factors characterize the data center's IT function, not the physical infrastructure supporting it. Since the infrastructure is the ultimate outcome of the planning sequence, these parameters guide development and dictate the data center's physical infrastructure requirements.

Developing System Concept

This step uses the IT parameters as a foundation to formulate the general concept of data center physical infrastructure. The main goal is to develop a reference design that embodies the desired capacity, criticality, and scalability that supports future growth plans. However, with the diverse nature of these parameters, more than a thousand physical infrastructure systems can be drawn. Designers should pick a few “good” designs from this library.

Determining User Requirements

User requirements should include organizational needs that are specific to the project. This phase should collect and evaluate organizational needs to determine if they are valid or need some adjustments to avoid problems and reduce costs. User requirements can include key features, prevailing IT constraints, logistical constraints, target capacity, etc.

Generating Specifications

This step takes user requirements and translates them into detailed data center design. Specifications provide a baseline for rules that should be followed in the last step, creating a detailed design. Specifications can be:

  • Standard specifications – these don’t vary from one project to another. They include regulatory compliance, workmanship, best practices, safety, etc.
  • User specifications – define user-specific details of the project.

Generating a Detailed Design

This is the last step of the floor planning sequence that highlights:

  • A detailed list of the components
  • Exact floor plan with racks, including power and cooling systems
  • Clear installation instructions
  • Project schedule

If the complete specifications are clear and robust enough, much of the detailed design can be generated almost automatically, though it still requires input from professional engineers.

Principles of Equipment Layout

Data center infrastructure is the core of the entire IT architecture. Unfortunately, despite this importance, more than 70% of network downtime stems from physical layer problems, particularly cabling. Planning an effective physical infrastructure is crucial to the data center's performance, scalability, and resiliency.

Nonetheless, keep the following principles in mind when designing equipment layout.

Control Airflow Using Hot-aisle/Cold-aisle Rack Layout

The principle of controlling airflow with a hot-aisle/cold-aisle rack layout is well documented, including in the ASHRAE TC9.9 Mission Critical Facilities guidance. The aim is to maximize the separation between IT equipment exhaust air and fresh intake air by placing cold aisles where intakes face and hot aisles where exhaust is released, reducing the amount of hot air drawn back through equipment intakes. This allows data centers to operate at up to 100% of their rated power density.

Provide Safe and Convenient Access Ways

Besides being a legal requirement, providing safe and convenient access ways around data center equipment is common sense. Row layouts must double as aisles and access ways, so designers should factor in the impact of building columns. A column that falls within a row of racks can consume three or more rack positions, obstruct the aisle, or even force the elimination of the entire row.

Align Equipment With Floor and Ceiling Tile Systems

Floor and ceiling tiling systems also play a role in air distribution systems. The floor grille should align with racks, especially in data centers with raised floor plans. Misaligning floor grids and racks can compromise airflow significantly.

You should also align the ceiling tile grid to the floor grid. As such, you shouldn’t design or install the floor until the equipment layout has been established.


Plan the Layout in Advance

The first stages of deploying data center equipment heavily determine subsequent stages and final equipment installation. Therefore, it is better to plan the entire data center floor layout beforehand.

How to Plan a Server Rack Installation

Server racks should be designed to allow easy and secure access to IT servers and networking devices. Whether you are installing new server racks or thinking of expanding, consider the following:

Rack Location

When choosing a rack for your data center, consider its location in the room. Leave enough space at the sides, front, rear, and top for easy access and airflow. As a rule of thumb, a server rack should occupy at least six standard floor tiles. Don't install server racks and cabinets below or near air conditioners, to protect them from water damage in case of leakage.

Rack Layout

Rack density should be considered when determining the rack layout. More free space within server racks allows more airflow, so leave enough vertical space between servers and IT devices to improve cooling. Since hot air rises, place heat-sensitive devices such as UPS batteries at the bottom of server racks; heavy devices should also go at the bottom.

Cable Layout

A well-planned cable layout matters as much as the rack layout. An excellent cable layout leverages cable labeling and management techniques to ease identification of power and network cables: mark cables at both ends (not in the middle) for easy identification, and make sure the cable management system leaves room for future additions or removals.

Conclusion

Designing a data center layout is challenging for both small and established IT facilities, and building or upgrading a data center is often perceived as intimidating and difficult. A detailed data center layout, however, can ease everything. Remember that small plan changes during installation can lead to costly consequences downstream.

Article Source: Data Center Layout

Related Articles:

How to Build a Data Center?

The Most Common Data Center Design Missteps

Why Green Data Center Matters

Background

Green data centers have emerged in enterprise construction due to the continuous growth of data storage requirements and rising environmental awareness. Newly retained data must be protected, cooled, and transferred efficiently, so the huge energy demands of data centers present cost and sustainability challenges, and enterprises are increasingly concerned about them. Sustainable and renewable energy resources have thus become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market research, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026, indicating that the transition to renewable energy sources is accelerating along with the growth of green data centers.

As growing demand for data storage drives data center modernization, it also places higher demands on power and cooling systems. On one hand, data centers that generate electricity from non-renewable sources face rising electricity costs; on the other, some enterprises consume large amounts of water for cooling facilities and server cleaning. Both are ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, global companies' data storage needs increase as well. These enterprises analyze enormous amounts of data to understand potential customers, and that processing requires a great deal of energy. Building green data centers has therefore become an urgent need for enterprises, and it brings them further benefits besides.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power. Sustainable or renewable energy is an abundant and reliable source that can significantly reduce power usage effectiveness (PUE), and a lower PUE means an enterprise uses electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.
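PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. The figures below are illustrative placeholders, not measurements from any real facility.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    A value of 1.0 would mean zero cooling/power-delivery overhead."""
    return total_facility_kw / it_load_kw

print(pue(1500.0, 1000.0))  # -> 1.5: 0.5 kW of overhead per kW of IT load
print(pue(1100.0, 1000.0))  # -> 1.1: a far more efficient, "greener" facility
```

Lowering PUE from 1.5 to 1.1 in this example cuts non-IT overhead from 500 kW to 100 kW for the same compute, which is where the cost savings described above come from.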

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers reduce the environmental impact of computing hardware, creating data center sustainability. Ongoing technological development brings new equipment and techniques into modern data centers, and newer server hardware and virtualization technologies consume less power, which is environmentally sustainable and brings economic benefits to data center operators.


Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems, and green data center services help businesses address these issues quickly without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers that meet the compliance and regulatory requirements of their regions, enterprises also enhance their public image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers let enterprises make better use of resources such as electricity, physical space, and heat by integrating the data center's internal facilities. This promotes efficient operation while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you build one? Here is a series of green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology, and run multiple applications and operating systems through fewer servers, thereby realizing the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Using an alternating current (AC) UPS is one way to switch to eco mode. This setup can significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions out of the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM and BMS software can help data center managers identify and document more efficient ways to use energy, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy/water consumption and carbon emissions to offset increased computing and mobile device usage to keep business running smoothly. The development of green data centers has become an imperative development trend, and it also caters to the green goals of global environmental protection. As a beneficiary, enterprises can not only save operating costs, but also effectively reduce energy consumption. This is also an important reason for the construction of green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

What Is Data Center Architecture?

What Is Data Center Architecture?

A data center is a physical facility that supports an enterprise's computing activities, providing centralized processing, storage, transmission, exchange, and management of information. Data center architecture, the architectural design that establishes connections between switches and servers, is typically created during the data center design and construction phases. It designates how servers, storage networking, racks, and other data center resources will be placed, and addresses the interconnection of these devices.

Types of Data Center Architecture

Typically, there are four kinds of data center architecture: mesh network, three-tier or multi-tier model, mesh point of delivery (PoD), and super spine mesh.

Mesh Network

The mesh network architecture, often regarded as the network fabric, describes a topology in which components pass data to each other through interconnecting switches. With predictable capacity and lower latency, it can support common cloud services, and its distributed design makes any-to-any connectivity easy and network deployment more cost-effective.

Three-tier or Multi-tier Model

The multi-tier architecture has been the most commonly deployed model of data center architecture used in the enterprise data center, consisting of core, aggregation, and access layers.

  • Data center core layer: It provides a fabric for high-speed packet switching between multiple aggregation modules and connectivity to multiple aggregation modules.
  • Data center aggregation layer: It supports functions like service module integration, layer 2 domain definitions, spanning tree processing, and default gateway redundancy.
  • Data center access layer: It provides the physical-level attachment to server resources and operates in layer 2 or layer 3 modes. It also plays an important role in meeting particular server requirements such as NIC teaming, clustering, and broadcast containment.

Mesh Point of Delivery

The mesh point of delivery (PoD) architecture contains multiple leaf switches interconnected within PoDs. It is a repeatable design pattern whose components maximize the modularity, scalability, and manageability of data centers. This architecture also enables efficient connections between multiple PoDs and a super-spine tier, so data center managers can easily add new PoDs to an existing three-tier topology for the low-latency data flows of new cloud applications.

Super Spine Mesh

Just as its name suggests, the super spine architecture suits large-scale or campus-style data centers, serving the huge volumes of east-west traffic passing between data halls.

Typical Composition of Data Center Architecture

The data center performs many functions and supports different types of services, such as data computing, storage, and processing, resulting in a variety of data center architectures.

The data center architecture mainly consists of three parts: network, security, and computing architecture. Beyond these, there are others, such as data center physical architecture and information architecture. The following sections interpret the three typical parts.

Data Center Network Architecture

Data Center Network (DCN) is an arrangement of network devices that interconnect all data center resources together, which has always been a key research area for Internet companies and large cloud computing companies. Therefore, data center network architecture plays a vital role in data center architecture.

It usually consists of switches and routers in a two- or three-level hierarchy, with variants such as the three-tier DCN, fat-tree DCN, and DCell. The scale, scalability, robustness, and reliability of the data center network have always been focal points of DCN design.
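A quick sketch of why the fat-tree DCN mentioned above scales well: the standard fat-tree sizing result is that a fat tree built entirely from k-port switches supports k^3/4 hosts, all with full bisection bandwidth. The port counts below are illustrative.

```python
def fat_tree_hosts(k: int) -> int:
    """Hosts supported by a fat tree built from k-port switches.
    Standard result: (k/2)^2 core paths, k pods, k^3/4 hosts."""
    if k % 2 != 0:
        raise ValueError("k must be even")
    return k ** 3 // 4

for k in (4, 16, 48):
    print(k, "-port switches ->", fat_tree_hosts(k), "hosts")
# 4 -> 16 hosts, 16 -> 1024 hosts, 48 -> 27648 hosts
```

This cubic growth from a single commodity switch size is what makes fat-tree fabrics attractive for large cloud data centers.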

Data Center Security Architecture

Data center security refers to the physical practices and virtual technologies for protecting data centers from threats, attacks, and unauthorized access. Lack of efficient data center security protections may result in data breaches, where a company’s information may get attacked by hackers.

Data center security architecture mainly includes two aspects: physical security and software security. Enterprises can protect data centers from attack by setting up a strong firewall between the external traffic and the internal network.

Data Center Computing Architecture

In the data center computing model, computer resources can be moved to the edge where the data resides, thus reducing the latency and bandwidth issues in transmission.

Data center computing architecture is one of the most important parts of data center architecture, which plays a huge role in the high-efficient use of resources, reducing capital expenditure (CAPEX) costs, and rapid deployment and scalability.

Data Center Architecture Evolution

With the continuous development of technology, data center architecture keeps evolving. Most modern data center architecture has moved from on-premises physical servers to virtualized infrastructure that supports networks, applications, and workloads across multiple private and public clouds. This evolution has affected how data centers are architected, because components of a data center may now need to reach each other across the public Internet.

Article Source: What Is Data Center Architecture? | FS Community

Related Articles:

Infographic – What Is a Data Center? | FS Community

What Are Data Center Tiers? | FS Community

What Is a Data Center? | FS Community

SFP Connector vs SFP+ Connector vs SFP28 Connector

The SFP (Small Form-factor Pluggable) module connector, available at various data rates, is one of the major optical transceiver interfaces used in data communication. With ever-increasing demand for faster speeds and higher density, SFP connectors have gone through several generations of signal-rate and port-density upgrades, from the original SFP to SFP+ and then to the newer SFP28. Compatibility between these ports is a pain point for many users in data communication. So what are the similarities and differences between them, and are these module connectors interchangeable when plugged into switches? SFP28 vs SFP+ vs SFP connector: which one should you choose? This article gives you the answer.

What Is SFP Connector?

Specified by a multi-source agreement (MSA), the SFP connector was first introduced in early 2000 and designed to replace the earlier gigabit interface converter (GBIC) connector in fiber optic and Ethernet high-speed networking systems. Based on IEEE 802.3 and the SFF-8472 specification, SFP module connectors can handle up to 4.25Gb/s with greater port density than GBIC, which is why SFP is also known as mini GBIC. This let it quickly become the connector of choice for system administrators looking to significantly increase output per rack. SFP connectors support Gigabit Ethernet, Fibre Channel, Synchronous Optical Network (SONET), and other communication standards.

What Is SFP+ Connector?

To cater to the need for faster transmission, SFP+ (or SFP10) was introduced in 2006 as an extension of the SFP connector. Based on the IEEE 802.3ae, SFF-8431, and SFF-8432 specifications, SFP+ supports data rates up to 10Gb/s. Compared with its predecessor, SFP+ supports Fibre Channel, 10GbE, SONET, OTN, and other communication standards. SFP+ is similar in size to SFP; the primary difference between them is transmission speed. Notably, both SFP and SFP+ come in copper and optical versions.


SFP28 Connector–The Third Generation of SFP Connector

As the third generation of SFP interconnect systems, SFP28 (Small Form-factor Pluggable 28) is designed for 25G performance as specified by IEEE 802.3by. The SFP28 connector delivers increased bandwidth and superior impedance control with less crosstalk than SFP10. Common SFP28 optics include SFP-25G-SR for short reach (up to 100m over MMF) and SFP-25G-LR for long reach (up to 10km over SMF). Utilizing 25GbE SFP28 yields a single-lane connection similar to existing 10GbE technology but carrying 2.5 times the data, enabling network bandwidth to be scaled cost-effectively in support of next-generation server and storage solutions.

Are the SFP, SFP+ and SFP28 Products Backward Compatible?

In most cases the connector and cable assembly are backward compatible: an SFP+ connector is a direct replacement for an SFP connector, ensuring a simple upgrade path for customer systems. As these are standard products, cable assemblies are also compatible between systems; an SFP copper cable assembly can be inserted into an SFP+ cage and mate with an SFP+ connector on the board.

Then what about the new SFP28 products? Since transceivers with various SFP connector types have become an important part of data communication networks, the compatibility of SFP28 and SFP+ is a common question among users. Here is a typical topic from Reddit: “For a project we’re looking to purchase some nexus 93180YC-EX ToRs for 25Gb+ down to the compute nodes. Cisco states that the downlink 25Gb ports are also 10Gb capable, but one can only really assume that means that the port is compatible with SFP+ optics too. Cisco’s SFP+ compatibility matrix appears to support that claim, however just curious if any of you have any SFP28 experience yet to confirm?”

The answer is definitely “yes”. SFP28 uses the same form factor as SFP+, just running at 25Gb/s instead of 10Gb/s, offering better performance and higher speed. Besides, the pinouts of SFP28 and SFP+ connectors are mating-compatible, so the SFP28 connector is backward compatible with SFP+ ports. That is, an SFP28 can be plugged into an SFP+ port and vice versa, but plugging an SFP+ into an SFP28 port will not get you 25Gb/s data rates.
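The compatibility rule above can be summarized as "the link runs at the lower of the two rates." The sketch below models that rule only; real switches read the module's EEPROM and negotiate accordingly, so this is a simplification, not actual switch behavior.

```python
# Nominal rates in Gb/s for each generation of the SFP family.
RATES = {"SFP": 1, "SFP+": 10, "SFP28": 25}

def link_rate(port: str, module: str) -> int:
    """Effective rate when `module` is plugged into `port`:
    the slower side of the pair sets the speed."""
    return min(RATES[port], RATES[module])

print(link_rate("SFP28", "SFP+"))   # -> 10: SFP+ optic in a 25G port runs at 10G
print(link_rate("SFP+", "SFP28"))   # -> 10: SFP28 in a 10G port cannot exceed 10G
print(link_rate("SFP28", "SFP28"))  # -> 25: matched parts give the full rate
```

Either mixed combination tops out at 10Gb/s, which is exactly why upgrading both ends is needed to realize 25G.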

Conclusion

SFP28 vs SFP+ vs SFP connector: have you decided which to choose? Whether to choose SFP or SFP+ depends on your switch type. If your switch port only supports 1G, you can only choose a 1000BASE SFP (e.g. MGBSX1). If it is a 10G switch, it depends on the speed and distance you require. When choosing between SFP28 and SFP+, it all comes down to the transmission data rates you need. SFP28 targets 25GbE networks that let equipment designers significantly reduce the required number of switches and cables, so when factoring in reduced facility costs for space, power, and cooling, SFP28 is the optimal choice.

Related Articles:

FS.COM QSFP28 Optical Modules Solution

SFP Compatibility Guide and How to Use a Compatible SFP Module?