
Data Centers: Say Hello to White Box Switch

Today, nearly all mainstream organizations use traditional (integrated) switches from vendors like Cisco, HP, Arista and Juniper. However, hyperscale companies such as Google, Amazon and Facebook are taking the lead in deploying white box switches in portions of their networks and operating them in a different manner. So what is the magic behind this? Are these OTTs the only customers for white box switches? You may find some hints in this article.

White Box Switch

What Makes White Box Switch Special?
White box switches consist of generic, inexpensive hardware and a network operating system (NOS) that can be purchased and installed separately. Often the hardware and software come from different vendors. This is in contrast to a traditional switch, which comes as one package including both the hardware and the software. For example, when you buy a Catalyst switch from Cisco, you are obliged to use Cisco IOS as its operating system. With a white box switch, you are free to buy the hardware and software separately.

Besides offering increased software flexibility/programmability and reduced vendor lock-in, white box switches give users multiple choices of hardware, network operating system (NOS) and applications. The impact is profound when it comes to network orchestration, routing, automation, monitoring and network overlays.
White Box Switch NOS

What About the Target Market of White Box Switch?
White box switches were initially designed for data centers. Companies operating mega data centers especially prefer them for at least two reasons: these companies generally demand massive deployments of switches, and the port density of each switch needs to be high. White boxes are cheaper while offering high-density ports, which makes them an attractive alternative. Beyond CAPEX savings, these large-scale companies also value the flexibility and openness of the switch platform. As an open platform offering broader flexibility, the white box switch frees them from traditional L2/L3 protocol stacks, opening up more possibilities to develop and support SDN-based networking.

So, are these large-scale OTTs the only target market for the white box switch? Definitely not!

Any small or medium-sized cloud provider, or the data center arm of a service provider, can consider deploying white box switches in its data centers, given the cost savings and enhanced flexibility compared with traditional switches, as well as the familiar IT tools and commands their technicians are used to. However, white box switches are not yet ready to offer every feature and service a service provider needs, nor are they ready for deployment in non-data-center environments.

The Potential of White Box Switch
Built on an open platform, the white box switch offers greater possibilities for innovation compared with traditional networking gear. As the number of vendors specializing in switch software soars, customers can choose from a range of software solutions with added functionality at lower prices.

White box switches become even more popular in this age of SDN. In traditional switches, software and hardware are integrated into one package, which greatly limits network innovation. SDN decouples the software from the hardware, helping speed shifts in networking, and that aligns with the standpoint of white box switching. Moreover, the advent of SDN also drives white boxes forward: when combined with SDN-centric designs, these deployments have resulted in dramatic improvements in automation, operational simplification and faster innovation. These benefits are now being realized by enterprises of all sizes via commercially available SDN solutions.
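As a minimal sketch of what this decoupling looks like in practice, the snippet below pushes a simple Layer 2 forwarding rule to a switch through a controller's northbound REST API. The controller URL, switch identifier, endpoint path and JSON schema are hypothetical placeholders for illustration only, not the API of any particular NOS or controller.

```python
import json
import urllib.request

# Hypothetical controller endpoint and switch identifier; adjust for your own SDN controller.
CONTROLLER = "http://sdn-controller.example.com:8181"
SWITCH_ID = "openflow:1"

def push_l2_rule(dst_mac: str, out_port: int) -> None:
    """Install a forwarding rule: frames destined to dst_mac leave via out_port."""
    rule = {
        "switch": SWITCH_ID,
        "match": {"eth_dst": dst_mac},
        "actions": [{"output": out_port}],
        "priority": 100,
    }
    req = urllib.request.Request(
        f"{CONTROLLER}/flows",                      # placeholder path
        data=json.dumps(rule).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:       # the controller programs the switch hardware
        print("Controller responded with HTTP", resp.status)

if __name__ == "__main__":
    try:
        push_l2_rule("00:11:22:33:44:55", out_port=3)
    except OSError as exc:                          # no controller reachable in this sketch
        print("Controller not reachable:", exc)
```

The point of the sketch is only that forwarding behavior is expressed as data sent to software, not as a feature baked into one vendor's integrated switch image.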

Conclusion
Despite the fact that white box switches cannot yet be applied in non-data-center environments, they are meeting the requirements of their target market successfully. The potential of the white box switch should not be underestimated; it is an alternative worth serious consideration, at least for data center applications.

How to Take Full Advantage of Switches in the Data Center: A Case Study of the IBM G8264 Switch

During a data center upgrade or migration to higher data rates like 40G/100G, the network designer is always pursuing flexibility, because devices and cabling components with great flexibility not only decrease the cost of the upgrade but also open up more possibilities for the data center in the future. The switch has always been the most important device in the data center, so a flexible switch should support a variety of transmission media and data rates, which has a significant positive influence on cabling and costs during an upgrade. The IBM G8264 is such a switch: designed specifically for the data center and intended for use at Layer 2 or Layer 3, it provides non-blocking, line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data. However, to make full use of such a switch, you should select the proper connection components and cabling plan. This post takes the IBM G8264 switch as an example to illustrate how to take full advantage of switches in the data center.

Understand Your Switch—IBM G8264 Switch

The first step to making full use of a switch is to fully understand the switch you are using. There are many ways to do this, but the most direct is to look at its ports, and this works for the IBM G8264 as well. As shown in the following picture of the IBM G8264 front panel, the most prominent feature is the block of 48 SFP/SFP+ ports, which occupies most of the front panel and supports data rates of 1G/10G. Four QSFP+ ports for 40G sit beside these SFP/SFP+ ports. There are three other ports on the front panel: one 10/100/1000 Ethernet RJ45 port for out-of-band management, one USB port for connecting a mass storage device and one mini-USB console port for serial access.

IBM G8264 switch port information

IBM G8264 Connection in Data Center

It is clear that the IBM G8264 switch supports data rates of 1G, 10G and 40G. The following sections illustrate in detail how to connect the IBM G8264 with target devices in 1G, 10G and 40G networks. During data center cabling, distance is always a factor that cannot be ignored; the required transmission distance largely determines the selection of cabling components.

1G Connection of IBM G8264 Switch

There are several ways to accomplish a 1G connection between the IBM G8264 switch and target devices, depending on the transmission distance and transmission media (fiber optic or copper). For distances up to 100 meters, RJ45 1000BASE-T SFP transceivers with Cat5 UTP cables are suggested, because they are copper based and cheaper than fiber optic components. However, if you need to reach a longer distance with good transmission quality, it is better to use fiber optic cable and optical transceivers. Using 1000BASE-SX SFP optical transceivers with multimode fiber, the transmission distance is up to 220 meters (62.5 µm multimode fiber) or 550 meters (50 µm multimode fiber). For long-distance transmission, single-mode fiber is suggested with 1000BASE-LX SFP optical transceivers, which can connect the IBM G8264 switch to target devices up to 10 kilometers away. The following chart lists the detailed product options for IBM G8264 1G connections.

Transmission Media | Module | Cable & Connector | Distance
Copper cable | BN-CKM-S-T: SFP 1000BASE-T copper transceiver | RJ45, Cat5 cable | 100 m
Fiber optic cable | BN-CKM-S-SX: SFP 1000BASE-SX optical transceiver | LC duplex, MMF | 220 m (62.5 µm MMF) / 550 m (50 µm MMF)
Fiber optic cable | BN-CKM-S-LX: SFP 1000BASE-LX optical transceiver | LC duplex, SMF | 10 km
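As a quick sanity check when planning 1G links, the short helper below encodes the table above and suggests an option for a given link length. The part numbers come from the table; the function itself is only an illustrative sketch, not an official selection tool.

```python
def pick_1g_module(distance_m: float, fiber: str = "50um") -> str:
    """Suggest a 1G option for the IBM G8264 based on link length.

    distance_m: required link length in meters.
    fiber: multimode core size if MMF is used ("50um" or "62.5um").
    """
    if distance_m <= 100:
        return "BN-CKM-S-T (1000BASE-T SFP) over Cat5 UTP"
    sx_limit = 550 if fiber == "50um" else 220        # 1000BASE-SX reach per fiber type
    if distance_m <= sx_limit:
        return "BN-CKM-S-SX (1000BASE-SX SFP) over multimode fiber"
    if distance_m <= 10_000:
        return "BN-CKM-S-LX (1000BASE-LX SFP) over single-mode fiber"
    raise ValueError("Beyond 10 km: consider an extended-reach optic")

# Example: a 300 m run over 50 um multimode fiber
print(pick_1g_module(300))   # -> 1000BASE-SX suggestion
```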

10G Connection of IBM G8264 Switch

As mentioned, the IBM G8264 switch supports 10G connections. For 10G there are mainly two methods: using DACs (direct attach cables), or using transceivers and patch cords. The beauty of a DAC is that it eliminates transceivers and reduces cost; however, its transmission distance is limited to 7 meters. If longer distances are required, a 10GBASE-SR transceiver used with OM3 multimode fiber supports distances up to 300 meters, and up to 400 meters with OM4 fiber. Using a 10GBASE-LR transceiver with single-mode fiber, the IBM G8264 switch can be connected to target devices up to 10 kilometers away.

IBM G8264 switch and 40GBASE QSFP+ transceiver

If the number of 10G ports cannot satisfy the requirements, a QSFP+ port on the IBM G8264 can be split into four 10G ports, either with QSFP+ DAC breakout cables for distances up to 5 meters, or with optical MTP-to-LC breakout cables and a 40GBASE-SR4 transceiver for distances up to 100 meters. The following table lists the 10G cabling component options for the IBM G8264 switch.

Data Rate | Modules | Cable & Connector | Distance
10G-10G connection | BN-SP-CBL-1M / BN-SP-CBL-3M / BN-SP-CBL-5M: SFP+ copper direct attach cables (1 m / 3 m / 5 m) | N/A | 0.5-7 m
10G-10G connection | BN-CKM-SP-SR: SFP+ 10GBASE-SR short range transceiver | LC duplex, MMF | 300 m (OM3) / 400 m (OM4)
10G-10G connection | BN-CKM-SP-LR: SFP+ 10GBASE-LR long range transceiver | LC duplex, SMF | 10 km
40G-10G connection | BN-QS-SP-CBL-1M / BN-QS-SP-CBL-3M / BN-QS-SP-CBL-5M: QSFP+ DAC breakout cables (1 m / 3 m / 5 m) | N/A | up to 5 m
40G-10G connection | BN-CKM-QS-SR: QSFP+ 40GBASE-SR transceiver | MTP-to-LC breakout cable, MMF | 100 m
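If you are weighing DAC breakouts against SR4 optical breakouts for extra 10G ports, the sketch below works out how many QSFP+ ports a given number of additional 10G links would consume and which breakout option the link length allows. It is an illustrative helper based only on the distances in the table above, not vendor tooling.

```python
import math

def plan_10g_breakout(extra_10g_links: int, max_link_m: float) -> dict:
    """Plan 4x10G breakout on the G8264's QSFP+ ports (illustrative only)."""
    qsfp_ports_needed = math.ceil(extra_10g_links / 4)   # each QSFP+ port yields four 10G lanes
    if max_link_m <= 5:
        media = "QSFP+ DAC breakout cable (BN-QS-SP-CBL-xM)"
    elif max_link_m <= 100:
        media = "BN-CKM-QS-SR transceiver + MTP-to-LC breakout over multimode fiber"
    else:
        media = "breakout not suitable at this distance; use dedicated 10G LR optics instead"
    return {"qsfp_ports_needed": qsfp_ports_needed, "media": media}

# Example: ten additional 10G links, longest run 30 m
print(plan_10g_breakout(10, 30))   # -> 3 QSFP+ ports, SR4 + MTP-to-LC breakout
```

Remember that the G8264 has only four QSFP+ ports, so any plan that needs more than four must fall back to the native SFP+ ports.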

40G Connection of IBM G8264 Switch

For 40G connections, both fiber optic and copper links can be built with different components. A 40G QSFP+ to QSFP+ DAC can connect the IBM G8264 to target devices up to 7 meters away. With multimode fiber, distances up to 100 meters (OM3) and 150 meters (OM4) can be reached when using 40GBASE-SR4 QSFP+ transceivers. For long-distance 40G transmission, a 40GBASE-LR4 QSFP+ transceiver and single-mode fiber with LC connectors are suggested. The related components for the IBM G8264 switch are summarized in the following chart.

Modules | Cable & Connector | Distance
49Y7884: QSFP+ 40GBASE-SR transceiver | MTP connector, MMF | 100 m (OM3) / 150 m (OM4)
00D6222: QSFP+ 40GBASE-LR4 transceiver | LC connector, SMF | 10 km
BN-QS-QS-CBL-1M / BN-QS-QS-CBL-3M: QSFP-to-QSFP cables (1 m / 3 m) | N/A | 1-7 m
Conclusion

To make full use of switches in the data center with great flexibility, both the selection of the switch and the cabling solution are very important. The IBM G8264, a switch with great flexibility, is an ideal choice for data centers upgrading to 40G. The modules and cables mentioned above are all provided by FS.COM; they are IBM G8264 compatible and fully tested on IBM G8264 switches. Kindly contact sales@fs.com for more details if you are interested.

Clean the Dirt and Dust In Data Center

dirt and dust in data center

If you are in a data center, wipe your finger along a distribution cabinet or a patch panel, then look at it. Do you see something like the scene shown in the following picture? Your finger is covered with dust or dirt. This situation is familiar to most telecom engineers working in data centers, but how many people really care about it? You might recognize that the data center needs cleaning, yet never get beyond thinking about it. And this is only the contamination you can see or feel directly; what about the dust and dirt inside the equipment? Over time, without timely cleaning, the accumulation of dirt and dust will lead to problems like overheating and various network failures. That is just the start of the trouble; more problems will follow if no proper action is taken.

Why Clean the Data Center?

What would happen if there were no regular cleaning in the data center? As mentioned, the most direct result of contamination is overheating. Why? Dust and pollutants in the data center are usually lightweight, so wherever there is airflow, dust and dirt move with it. Data center cooling depends largely on server fans, which draw dust and dirt into the cooling system. The accumulation of these contaminants can cause fan failure or static discharge inside equipment; heat dissipation takes longer and heat removal efficiency drops. The following picture, which shows contamination at a server fan air intake, answers this question intuitively.

dirt and dust at server

With the cooling system affected by dust and dirt, the risk to the data center increases greatly. Contaminants do not stop at the cooling system; they settle in every place they can reach. In addition, today's data center depends heavily on electronic equipment and fiber optic components, such as fiber optic connectors, which are very sensitive to contaminants. Problems like power failures, data loss and short circuits can occur if contaminants are not cleaned away. Worse, a short circuit can start a fire in the data center, which could lead to irreparable damage (as shown in the following picture).

fire damage in a data center caused by a short circuit

Dust and dirt can also greatly affect the life span of data center equipment as well as its operation. Cleanliness and uptime usually go hand in hand: the uptime of a data center will suffer if there are too many contaminants. Compared with the cost of restarting the data center and repairing or replacing equipment, cleaning the data center regularly is a good deal; it reduces downtime and extends the life span of the data center infrastructure.

Last but not least, data center cleaning offers the aesthetic appeal of a clean, dust-free environment. Although this is not the main purpose, a clean data center presents a more pleasant working environment for telecom engineers, especially those who need to install cable under a raised floor or work overhead above racks and cabinets. No one would object to a clean data center.

Contaminants Sources of Data Center

It is clear that data center cleaning is necessary. But how do you keep a data center clean? Before taking action, consider where the contaminants come from. Generally there are two main sources: inside the data center and outside it. Internal contaminants are usually particles from air conditioning unit fan belt wear, toner dust, packaging and construction materials, human hair and clothing, and zinc whiskers from electroplated steel floor plates. External sources of contamination include cars, electricity generation, sea salt, natural and artificial fibers, plant pollen and wind-blown dust.

Data Center Cleaning and Contaminants Prevention

Knowing where the dust and dirt come from, here are some suggestions and tips to reduce contamination.

  • Reduce data center access. Limiting access to necessary personnel only is an effective way to keep external contaminants out.
  • Use sticky mats at the entrances to the raised floor; they remove most of the contamination carried in on shoes.
  • Never unpack new equipment inside the data center; establish a staging area outside it for unpacking and assembling equipment.
  • No food, drink or smoking in the data center.
  • Most sites are required to supply fresh make-up air to the data center; remember to replace its filters on a regular basis.
  • Cleaning frequency depends on activity in the data center; vacuum the floor more often as traffic increases.
  • Inspect and clean fiber optic components regularly, especially fiber optic connectors and the interfaces of switches and transceivers.
  • Clean both the inside and outside of racks and cabinets.
Conclusion

The data center is today's information factory, handling enormous amounts of information and data, so data center cleaning is necessary. On one hand, if the "factory" is polluted by dust and dirt, how can it provide reliable, high-quality services? On the other hand, data center cleaning extends the life span of equipment and saves cost on both cooling and maintenance.

Source: http://www.fs.com/blog/clean-the-dirt-and-dust-in-data-center.html

Do You Know Virtual Data Center?

virtual data center

Data centers are important to our daily life and work. However, in traditional data centers engineers struggle with the need to use multiple tools for management tasks such as provisioning, managing and monitoring servers, which is complex, expensive, inefficient and labor-intensive. Is there a better way to solve these problems and improve the data center? The answer is virtualization. This modern, virtualized data center is considered the future of the data center and is known as the virtual data center.

What Is Virtual Data Center?

A traditional data center grows by adding more compute, more storage and more networking. A virtual data center, by contrast, operates using virtualization technology, which partitions a single physical server into multiple operating systems and applications, thereby emulating multiple servers known as virtual machines (VMs). In other words, in a virtual data center all the infrastructure is virtualized and the control of the hardware configuration is automated through an intelligent software system. This is the largest difference between traditional and virtual data centers, as well as one of the outstanding advantages of the virtual data center.

Benefits of Virtual Data Center

As mentioned, data center virtualization is considered the trend of the future data center. But what can a virtual data center provide, and what are its advantages compared with a traditional one? The main capabilities and advantages of the virtual data center are illustrated in this part.

Cost Savings: hardware is usually the biggest cost in a data center, and the growth of a traditional data center depends on hardware and storage. Beyond the cost of the hardware itself, its management and maintenance depend largely on manual labor, the cost of which can go well beyond that of the hardware. With a virtual data center, these costs can be cut significantly.

Easier Management and Maintenance: in a traditional data center, things like heat dissipation, physical servers, data backup and testing all have to be considered, and a lot of labor and money goes into this daily management and maintenance. Virtualizing servers uses less physical hardware, so less heat is generated. Backup is much easier in a virtual data center operated by software: both full backups and snapshots of virtual servers and virtual machines can be taken, and VMs can be moved from one server to another more easily and quickly. As for testing, a virtual data center lets test environments stay isolated from end users while remaining online; when you have perfected your work, you deploy it as live.
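As a small illustration of how routine such operations become once servers are virtualized, the sketch below takes a snapshot of a running VM using the libvirt Python bindings. The hypervisor URI and the VM name "web01" are assumptions chosen for the example; any KVM/QEMU host managed by libvirt would work similarly.

```python
import libvirt  # pip install libvirt-python

# Connect to the local KVM/QEMU hypervisor (URI is an assumption for this example).
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")          # "web01" is a hypothetical VM name

# Describe and create a snapshot; it can later serve as a restore point or backup source.
snapshot_xml = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>Quick restore point before maintenance work</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)
print("Created snapshot:", snap.getName())

conn.close()
```

Compare this with imaging and restoring a physical server: the snapshot takes seconds and can be reverted just as quickly.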

closer-to-cloud

Faster Recovery: if a physical server dies, redeployment can take a long time. With virtualization, redeployment can happen within minutes and a few clicks. Facing a disaster, if the data center is virtualized and you have up-to-date snapshots of your VMs, you can quickly get back up and running. With a virtual data center, a host of issues simply go away.

Closer to Cloud: with virtual machines, you are closer to enjoying a cloud environment. You may even reach the point where you can deploy VMs to and from your data center to create a powerful cloud-based infrastructure. Beyond the virtual machines themselves, virtualization technology moves you toward a cloud-based mindset, making the eventual migration all the easier.

The First Step to Achieve Virtual Data Center

These outstanding benefits are the main factors driving the trend toward data center virtualization. However, to take full advantage of the virtual data center, many challenges lie ahead, such as security, resource allocation and maximizing investments in infrastructure. One of the most conspicuous challenges is bandwidth. As mentioned, a virtual data center is automated through an intelligent software system, which needs the support of high bandwidth. 10 Gigabit Ethernet (GbE) is now widely adopted and can be a good choice for a storage access network, but it may become a bottleneck on the journey to the virtual data center. To fully realize the benefits of a virtualized environment, implementing 40/100GbE is recommended as the first step toward virtualization. More and more vendors are supplying 40/100GbE solutions, and its cost is expected to keep falling in the foreseeable future. The increased traction of 40/100GbE has earned it best-practice status as the standard for the next generation of high-bandwidth virtualized applications.

Conclusion

The trend toward data center virtualization is clear, given the great benefits and potential applications a virtual data center can bring. 40/100GbE is just the first step on the journey to the virtual data center; barriers such as security and resource allocation still have to be broken down.

Source:http://www.fs.com/blog/do-you-know-virtual-data-center.html

Migrating to 40/100G With OM3/OM4 Fiber

To meet continuously increasing requirements, data center migration to 40/100G is underway. A data center infrastructure built for 40/100G must meet requirements such as high speed, reliability, manageability and flexibility. To meet them, product solutions and the infrastructure topology, including cabling, must be considered in unison. Cable deployment plays an important part: the cable used in the data center must be selected to support the data rate applications not only of today but also of the future. Two types of multimode fiber, OM3 and OM4 (usually aqua colored), have gradually become the media of choice for data centers during the 40/100G migration. This article illustrates OM3/OM4 multimode fiber in the 40/100G migration in detail.

Data Center and Multimode Fibers

Multimode fiber is widely used in data centers. You might ask: why not single-mode fiber? The answer is cost. Single-mode links are generally more expensive than multimode ones, and multimode fiber offers a significant value proposition because it uses low-cost 850 nm transceivers for serial and parallel transmission. If money were no object, you could simply run single-mode fiber, which has all the bandwidth you need and plenty of reach, but that perfect situation would cost a great deal. Thus most data centers choose multimode fiber. OM1, OM2, OM3 and OM4 are the most common multimode fibers, but OM3 and OM4 are gradually taking the place of OM1 and OM2 in data centers.

OM

OM stands for optical multimode. OM3 and OM4 are both laser-optimized multimode fibers with a 50/125 µm core, designed for use with 850 nm VCSELs (vertical-cavity surface-emitting lasers) and developed to accommodate faster networks such as 10, 40 and 100 Gbps. Compared with OM1 (62.5/125 µm core) and OM2 (50/125 µm core), OM3 and OM4 can transport data at higher rates and over longer distances. The following table of 850 nm Ethernet distances shows the main differences between these four multimode fiber types and helps explain why OM3 and OM4 are now more popular in data centers.

850 nm Ethernet Distance
Fiber Type | 1G | 10G | 40/100G
OM1 | 300 m | 36 m | N/A
OM2 | 500 m | 86 m | N/A
OM3 | 1 km | 300 m | 100 m
OM4 | 1 km | 550 m | 150 m
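If you want to check a planned link against these reach figures programmatically, the small lookup below encodes the table above. It is only a sketch of the numbers quoted here; real designs should be verified against the relevant IEEE 802.3 clauses and cable datasheets.

```python
# Reach figures (in meters) at 850 nm, as quoted in the table above; None means unsupported.
REACH_850NM = {
    "OM1": {"1G": 300,  "10G": 36,  "40/100G": None},
    "OM2": {"1G": 500,  "10G": 86,  "40/100G": None},
    "OM3": {"1G": 1000, "10G": 300, "40/100G": 100},
    "OM4": {"1G": 1000, "10G": 550, "40/100G": 150},
}

def link_supported(fiber: str, rate: str, length_m: float) -> bool:
    """Return True if the quoted reach covers the requested link length."""
    reach = REACH_850NM[fiber][rate]
    return reach is not None and length_m <= reach

# Example: can a 120 m run carry 40/100G over OM3? Over OM4?
print(link_supported("OM3", "40/100G", 120))   # False -> OM3 tops out at 100 m
print(link_supported("OM4", "40/100G", 120))   # True  -> OM4 reaches 150 m
```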

 

Why Use OM3 and OM4 in 40/100G Migration

The Institute of Electrical and Electronics Engineers (IEEE) 802.3ba 40/100G Ethernet standard was ratified in June 2010. The standard provides specific guidance for 40/100G transmission over multimode and single-mode fiber, and OM3 and OM4 are the only multimode fibers included in it. The reason OM3 and OM4 are used in the 40/100G migration is that they can meet the cabling performance requirements of the migration.

Bandwidth, total connector insertion loss and transmission distance are the three main factors to consider when evaluating the performance needed for the cabling infrastructure to meet the requirements of 40/100G. These factors determine whether the cabling infrastructure can meet the standard's distances of at least 100 meters over OM3 fiber and 150 meters over OM4 fiber. The following sections explain why OM3/OM4 are the fibers of choice for the 40/100G migration.

Get Higher Bandwidth With OM3/OM4

Bandwidth is the first reason OM3 and OM4 are used for the 40/100G migration. OM3 and OM4 are optimized for 850 nm transmission and have minimum effective modal bandwidths (EMB) of 2000 MHz·km and 4700 MHz·km respectively. Compared with OM1 and OM2, whose modal bandwidth tops out at 500 MHz·km, the advantage of OM3 and OM4 is obvious. With a connectivity solution using OM3 and OM4 fibers whose minimum effective modal bandwidth has been verified by measurement, the optical infrastructure deployed in the data center will meet the bandwidth performance criteria set forth by the IEEE.

Get Longer Transmission Distance With OM3/OM4

The transmission distance of fiber optic cables influences data center cabling: manageability and flexibility increase with cables that support longer distances. OM3 and OM4 support longer transmission distances than older multimode fibers: generally OM3 runs 40/100 Gigabit Ethernet to 100 meters and OM4 to 150 meters. These data rates and distances cannot be achieved with traditional multimode fibers like OM1 and OM2, which is why OM3 or OM4 fiber is required for the 40/100G migration.

Get Lower Insertion Loss With OM3/OM4

Insertion loss has always been an important factor to consider during data center cabling, because the total connector loss within a channel affects the ability to operate over the maximum supportable distance for a given data rate: as total connector loss increases, the supportable distance at that data rate decreases. In the 40/100G standard, OM3 fiber is specified to a 100 m distance with a maximum channel loss of 1.9 dB, which includes a 1.5 dB total connector loss budget, while OM4 fiber is specified to a 150 m distance with a maximum channel loss of 1.5 dB, including a total connector loss budget of 1.0 dB. With low-loss OM3 and OM4 connectivity, maximum flexibility can be achieved: multiple connector matings can be introduced into the link, and longer supportable transmission distances can be reached.
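To see how these budgets constrain a design, the short calculation below adds up fiber attenuation and connector losses for a hypothetical channel and compares the total with the budgets quoted above. The 3.5 dB/km attenuation figure and the 0.5 dB per mated connector pair are illustrative assumptions; use your cable and connector datasheets for real planning.

```python
def channel_loss_db(length_m: float, connector_pairs: int,
                    fiber_atten_db_per_km: float = 3.5,
                    loss_per_pair_db: float = 0.5) -> float:
    """Estimate total channel loss: fiber attenuation plus mated connector pairs."""
    return (length_m / 1000.0) * fiber_atten_db_per_km + connector_pairs * loss_per_pair_db

# Budgets quoted above for 40/100G: OM3 -> 1.9 dB at 100 m, OM4 -> 1.5 dB at 150 m
BUDGET = {"OM3": (100, 1.9), "OM4": (150, 1.5)}

for fiber, (length_m, budget_db) in BUDGET.items():
    # Example channel: two mated connector pairs (e.g. a patch panel at each end)
    loss = channel_loss_db(length_m, connector_pairs=2)
    verdict = "OK" if loss <= budget_db else "over budget, use low-loss connectors"
    print(f"{fiber}: {loss:.2f} dB estimated vs {budget_db} dB budget -> {verdict}")
```

With these assumed figures the OM4 case lands just over its 1.5 dB budget, which is exactly why low-loss connectors are recommended when multiple matings are needed at 150 m.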

OM3 or OM4?

Choosing OM3/OM4 is a wise and necessary choice for the data center 40/100G migration. But which is better, OM3 or OM4? Numerous factors affect the choice, but the applications and the total cost are the main ones to weigh when deciding whether OM3 or OM4 is needed.

First, the connectors and connector termination for OM3 and OM4 fibers are the same, and OM3 is fully compatible with OM4. The difference lies in the construction of the fiber itself, which gives OM4 better attenuation and lets it carry higher bandwidth over a longer distance than OM3; accordingly, OM4 costs more. Since roughly 90 percent of all data centers have runs under 100 meters, choosing between them often comes down to cost. Looking to the future, however, as demand increases the price will come down, and OM4 may become the most viable product fairly soon.

Whether you choose OM3 or OM4, the migration is underway. With strong performance in data rate, transmission distance and insertion loss, OM3/OM4 fiber is a must for data center migration to 40/100G.

Source: http://www.fs.com/blog/om3-and-om4-in-40-100g-migration.html