Tag Archives: data center

Cat6 Cables for Data Center Applications

The trend in networking has always leaned toward higher bandwidth. Upgrading to a Cat6 (including Cat6a) cabling system can ensure the transmission speed and sustained performance that modern processing demands. For data centers in particular, investing in a higher-grade system increases the network's capacity and performance.

Cat6 Cables Overview

Conforming to TIA/EIA and IEEE standards, a Cat6 cabling system includes patch cables, pre-terminated trunk cables, and bulk cables. Most first-class suppliers ship Cat6 Ethernet cables, including Cat6a, that have passed the Fluke test and come with a test report. Generally, Cat6 network cables use oxygen-free copper conductors with high electrical conductivity and low signal attenuation. Backward compatible with all previous categories, both UTP Cat6 and SFTP Cat6 cables can support 10 Gigabit Ethernet (Cat6 over shorter runs of up to about 55 m, Cat6a over the full 100 m) and operate at up to 250 MHz (Cat6a at 500 MHz).

Cat6 Patch Cables

 

Cat6 Ethernet patch cables come in Cat6, Cat6a, and slim Cat6 variants, and can be found on the market in different lengths (such as 100ft), colors, jacket materials, and shielding types. Usually, Cat6a and shielded Cat6 cables use 26AWG conductors, unshielded Cat6 uses 24AWG, and slim Cat6 uses 28AWG. With a transmission distance of up to 100 m, Cat6 patch cables are widely used in data centers, network cabinets, and offices to connect data transmission equipment such as PoE switches.


Figure 1: Cat6 Patch Cables

Cat6 Pre-terminated Trunk Cables

 

Pre-terminated trunk cables are available in UTP Cat6 and SFTP Cat6a, in both plug-to-plug and jack-to-jack types. Generally, jack-to-jack cables use 23AWG conductors, while plug-to-plug cables use 26AWG. In terms of price per meter, plug-to-plug Cat6 and Cat6a cables cost considerably more than their jack-to-jack counterparts. As for applications, Cat6 pre-terminated trunk cable assemblies improve efficiency and reduce labor cost and waste in large infrastructures with high-density cross-connect and patching systems.


Figure 2: Cat6 Pre-terminated Trunk Cables

Cat6 Bulk Cables

 

Compliant with IEEE 802.3af and IEEE 802.3at for PoE applications, most Cat6 bulk cables (including Cat6a) come in 1000ft (305 m) lengths on spools, with 23AWG conductors. This cable type is premium cabling designed for Cat6 or Cat6a installations, such as connecting an Ethernet wall jack to a router, patch panel, or switch. With fast transmission and excellent signal quality, it ensures peak performance throughout your LAN.


Figure 3: Cat6 Bulk Cable
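Since Cat6 bulk cable is often pulled for PoE drops, a quick power-budget check helps confirm a switch can drive every planned device. A minimal sketch follows; the per-port wattage limits are the IEEE 802.3af/at figures, while the device counts and the 370 W supply are made-up examples, not a quoted spec:

```python
# Rough PoE budget check: can a switch's PoE supply drive the planned devices?
# Per-port PSE output limits come from the IEEE standards:
#   802.3af (Type 1): 15.4 W per port; 802.3at (Type 2): 30 W per port.
POE_WATTS = {"802.3af": 15.4, "802.3at": 30.0}

def poe_budget_ok(devices, supply_watts):
    """devices: list of (standard, count) tuples; supply_watts: total PoE budget."""
    needed = sum(POE_WATTS[std] * count for std, count in devices)
    return needed, needed <= supply_watts

# Hypothetical plan: 16 802.3af phones and 4 802.3at access points
# on a switch with a 370 W PoE budget (an example figure).
needed, ok = poe_budget_ok([("802.3af", 16), ("802.3at", 4)], 370.0)
print(f"Worst-case draw: {needed:.1f} W -> {'OK' if ok else 'over budget'}")
```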

Cat6 Cabling Application in Data Center

As mentioned in the previous part, Cat6 cables comprise patch cables, pre-terminated trunk cables, and bulk cables. Each type has its own features and suits different scenarios. Here, we take the integrated cabling of Cat6 pre-terminated trunk cables and Cat6 Ethernet patch cables as a case to demonstrate their application in a data center.

Cat6 Cabling Solution Case:

 


Figure 4: Cat6 Cables Data Center Application

As the Cat6 wiring diagram above shows, two racks in this data center need cabling, with one FS S3900-24T4S switch on each side. The first things to consider are how to run the Cat6 wiring and what the Cat6 wire order should be. For the switch connections, regular Cat6 patch cables are the obvious choice; for connecting the two racks, jack-to-jack trunk cable is suggested for the cross-connection. After that, cable managers and cable ties are recommended to keep the cables organized. A rough bill-of-materials tally like the sketch below can help size the order; for a suggested products list, refer to the chart that follows.
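The quantities and lengths in this sketch are placeholders chosen for illustration, not part of any FS solution; the point is simply that tallying links per rack before ordering avoids under- or over-buying.

```python
# Hypothetical bill-of-materials tally for the two-rack scenario above:
# patch cables for the in-rack switch connections, and jack-to-jack
# trunk cable for the rack-to-rack cross-connect. Quantities are examples.
from collections import Counter

links = [
    # (item, quantity)
    ("Cat6 patch cable, 1 m", 24),             # server-to-switch links, rack A
    ("Cat6 patch cable, 1 m", 24),             # server-to-switch links, rack B
    ("Cat6 jack-to-jack trunk, 6 x 10 m", 2),  # cross-connect between racks
    ("Horizontal cable manager, 1U", 2),
    ("Cable ties (pack of 100)", 1),
]

bom = Counter()
for item, qty in links:
    bom[item] += qty

for item, qty in sorted(bom.items()):
    print(f"{qty:>3} x {item}")
```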

Suggested Products:

 


Figure 5: Products List

Conclusion

As a cost-effective solution, Cat6 cables are widely applied in all kinds of 1G/10G networks, especially in data centers. Having a flexible and economical cabling system matters a lot to data center users. We hope this article gives you some inspiration.

Data Centers: Say Hello to White Box Switch

Today, nearly all mainstream organizations use traditional (integrated) switches from vendors like Cisco, HP, Arista, and Juniper. However, hyperscale players such as Google, Amazon, and Facebook are taking the lead in using white box switches in portions of their networks, operating their systems in a different manner. So what is the magic behind that? And are these OTTs the only customers for white box switches? You may find some hints in this article.

White Box Switch

What Makes White Box Switch Special?
White box switches consist of generic, inexpensive hardware and a network operating system (NOS) that can be purchased and installed separately; often the hardware and software come from different vendors. This contrasts with a traditional switch, which comes as one package including both hardware and software. For example, when you buy a Catalyst switch from Cisco, you are obliged to use Cisco IOS as its operating system. With a white box switch, you can buy the hardware and software separately.
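One way to picture this decoupling is as an interface boundary: any NOS that implements the interface can drive any box that exposes it. The sketch below is purely illustrative; the class and method names are invented for this post, not a real vendor API.

```python
# Illustrative only: a white box switch exposes generic hardware,
# and any NOS implementing the same interface can manage it.
from abc import ABC, abstractmethod

class NetworkOS(ABC):
    @abstractmethod
    def configure_port(self, port: int, speed_gbps: int) -> None: ...

class VendorANOS(NetworkOS):           # hypothetical third-party NOS
    def configure_port(self, port, speed_gbps):
        print(f"[VendorA NOS] port {port} -> {speed_gbps}G")

class WhiteBoxSwitch:
    """Generic hardware: the NOS is supplied at install time, not bundled."""
    def __init__(self, nos: NetworkOS):
        self.nos = nos                 # swap the NOS without replacing hardware

switch = WhiteBoxSwitch(VendorANOS())
switch.nos.configure_port(1, 10)
```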

Besides offering increased software flexibility/programmability and reduced vendor lock-in, white box switches give users multiple choices of hardware, NOS, and applications. The impact of this is profound when it comes to network orchestration, routing, automation, monitoring, and network overlays.
White Box Switch NOS

What About the Target Market of White Box Switch?
White box switches were initially designed for data centers. Companies operating mega data centers especially prefer them, for at least two reasons: these companies demand massive deployments of switches, and the port density of each switch needs to be high. White boxes are cheaper while offering high-density ports, which makes them an optimal alternative. Beyond CAPEX savings, these large-scale companies also value the flexibility and openness of the switch platform. As an open platform offering broader flexibility, white box switches free them from traditional L2/L3 protocols, enabling more possibilities to develop and support SDN-based networking.

So, are these large-scale OTTs the only target market for the cheap white box switch? Definitely not!

Any small or medium-sized cloud-based provider, or the data center arm of a service provider, can consider deploying white box switches in its data centers, given the cost savings and enhanced flexibility compared with traditional switches, and because their technicians can keep using familiar IT tools and commands. However, white box switches are not yet ready to offer all the features and services a service provider needs, nor are they ready for deployment in non-data-center environments.

The Potential of White Box Switch
Based on an open platform, white box switches offer greater possibilities for innovation than traditional networking gear. As the number of vendors specializing in software development soars, customers can choose from a range of software solutions with added functionality at reduced prices.

White box switches have become even more popular in the age of SDN. In traditional switches, software and hardware are integrated into one package, which greatly limits network innovation. SDN decouples the software from the hardware, helping speed shifts in networking, and it resembles the standpoint of white box switching. Moreover, the advent of SDN also drives white boxes forward: combined with SDN-centric designs, these deployments have produced dramatic improvements in automation, operational simplification, and faster innovation. These benefits are now being realized by enterprises of all sizes via commercially available SDN solutions.
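In practice, SDN's decoupling often takes the form of a match-action flow table that an external controller programs. The toy sketch below shows only the idea; real controllers and protocols such as OpenFlow are far richer, and none of these function names come from a real API.

```python
# Toy match-action flow table: a "controller" installs rules, the
# "switch" merely matches packets against them. Illustrative only.
flow_table = []  # list of (match_dict, action) pairs, in priority order

def install_rule(match, action):
    """Called by the controller to program the data plane."""
    flow_table.append((match, action))

def handle_packet(packet):
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send-to-controller"       # table miss: punt to the control plane

install_rule({"dst_ip": "10.0.0.5"}, "forward:port2")
install_rule({"vlan": 100}, "drop")

print(handle_packet({"dst_ip": "10.0.0.5", "vlan": 1}))    # forward:port2
print(handle_packet({"dst_ip": "10.0.0.9", "vlan": 100}))  # drop
print(handle_packet({"dst_ip": "10.0.0.9"}))               # send-to-controller
```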

Conclusion
Despite the fact that white box switches cannot yet be applied in non-data-center environments, they are meeting their target market's requirements successfully. The potential of the white box switch should not be underestimated; it is an ideal alternative that is worth serious consideration, at least for data center applications.

Related Article: Unveil the Myths About SDN Switches

How to Take Full Advantage of Switches in the Data Center: A Case Study of the IBM G8264 Switch

During a data center upgrade or migration to higher data rates like 40G/100G, network designers always pursue flexibility, because devices and cabling components with great flexibility not only decrease the cost of upgrading but also open more possibilities for the data center's future. The switch has always been the most important device in the data center, so a flexible switch should support a variety of transmission media and data rates, which has a significant positive influence on cabling and costs during an upgrade. The IBM G8264 is such a switch: specially designed for the data center, suggested for use at Layer 2 or Layer 3, and providing non-blocking line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying data. However, to make full use of such a switch, you should select the proper connection components and cabling plans. This post takes the IBM G8264 switch as an example to illustrate how to take full advantage of switches in the data center.

Understand Your Switch—IBM G8264 Switch

The first step in making full use of a switch is to understand the switch you are using. There are many ways to do so, and the most direct is to examine its ports. This works for the IBM G8264 as well. As shown in the following picture of the IBM G8264's front panel, the most prominent feature is the 48 SFP/SFP+ ports, which occupy most of the front panel and support data rates of 1G/10G. Beside them are four QSFP+ ports for 40G. Three further ports on the front panel serve other uses: one 10/100/1000 Ethernet RJ45 port for out-of-band management, one USB port for mass storage devices, and one mini-USB console port for serial access.

IBM G8264 switch port information
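From those port counts alone you can work out the switch's aggregate front-panel capacity, a useful sanity check when planning oversubscription. A quick calculation, assuming every port runs at its top rate:

```python
# Aggregate front-panel capacity of the G8264 at top rates:
# 48 SFP+ ports at 10G plus 4 QSFP+ ports at 40G.
sfp_plus = 48 * 10      # Gbps
qsfp_plus = 4 * 40      # Gbps
total = sfp_plus + qsfp_plus
print(f"{total} Gbps aggregate ({total * 2} Gbps full duplex)")
# -> 640 Gbps aggregate (1280 Gbps full duplex)
```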

IBM G8264 Connection in Data Center

Clearly, the IBM G8264 switch supports data rates of 1G, 10G, and 40G. The following sections illustrate in detail how to connect the IBM G8264 to target devices in 1G, 10G, and 40G networks. During data center cabling, distance is a factor that can never be ignored; the required transmission distance largely decides the cabling component selection.

1G Connection of IBM G8264 Switch

To connect the IBM G8264 switch to target devices at 1G, there are several methods depending on transmission distance and transmission media (fiber optic or copper). For distances up to 100 meters, RJ45 1000BASE-T SFP transceivers with UTP Cat5 cables are suggested, since they are copper based and cheaper than fiber optic components. To reach a longer distance with good transmission quality, however, it is better to use fiber optic cable and optical transceivers. With 1000BASE-SX SFP optical transceivers over multimode fiber, the transmission distance is up to 220 meters (62.5 μm multimode fiber) or 550 meters (50 μm multimode fiber). For long-distance transmission, single-mode fiber with 1000BASE-LX SFP optical transceivers can connect the IBM G8264 switch to target devices up to 10 kilometers away. The following chart details the product solutions for IBM G8264 1G connections.

Transmission Media | Module                                            | Cable & Connector | Distance
Copper cable       | BN-CKM-S-T: SFP 1000BASE-T copper transceiver     | RJ45, Cat5 cable  | 100 m
Fiber optic cable  | BN-CKM-S-SX: SFP 1000BASE-SX optical transceiver  | LC duplex, MMF    | 220 m (62.5 μm MMF) / 550 m (50 μm MMF)
Fiber optic cable  | BN-CKM-S-LX: SFP 1000BASE-LX optical transceiver  | LC duplex, SMF    | 10 km
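The 1G options above reduce to a simple distance-and-media decision. Here is a minimal sketch of that logic; the module names are the IBM part descriptions from the table, while the function itself is only an illustration:

```python
def pick_1g_module(distance_m, media="fiber", mmf_core_um=50):
    """Suggest a 1G option for the G8264 based on the table above."""
    if media == "copper" and distance_m <= 100:
        return "BN-CKM-S-T (1000BASE-T SFP) + Cat5 UTP"
    if mmf_core_um == 62.5 and distance_m <= 220:
        return "BN-CKM-S-SX (1000BASE-SX SFP) + 62.5um MMF"
    if mmf_core_um == 50 and distance_m <= 550:
        return "BN-CKM-S-SX (1000BASE-SX SFP) + 50um MMF"
    if distance_m <= 10_000:
        return "BN-CKM-S-LX (1000BASE-LX SFP) + SMF"
    return "out of 1G reach: consider other optics"

print(pick_1g_module(80, media="copper"))   # copper is fine in-rack
print(pick_1g_module(400))                  # needs 50um MMF or single-mode
```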

10G Connection of IBM G8264 Switch

As mentioned, the IBM G8264 switch supports 10G. For 10G, there are two main methods: using DACs (direct attach cables), or using transceivers with patch cords. The beauty of a DAC is that it eliminates the transceivers and reduces cost; however, it limits the transmission distance to 7 meters. If longer distances are required, a 10GBASE-SR transceiver with OM3 multimode fiber supports up to 300 meters, or up to 400 meters with OM4. With a 10GBASE-LR transceiver and single-mode fiber, the IBM G8264 switch can connect to target devices up to 10 kilometers away.

IBM G8264 switch and 40GBASE QSFP+ transceiver

If the number of 10G ports cannot satisfy your requirements, each QSFP+ port on the IBM G8264 can be split into four 10G ports by using QSFP+ DAC breakout cables for distances up to 5 meters. For distances up to 100 meters, optical MTP-to-LC breakout cables can be used with a 40GBASE-SR4 transceiver. Kindly check the following table for IBM G8264 10G cabling component solutions.

Connection | Module                                                        | Cable & Connector             | Distance
10G-10G    | BN-SP-CBL-1M/3M/5M: SFP+ copper direct attach cable (1/3/5 m) | integrated twinax             | 0.5-7 m
10G-10G    | BN-CKM-SP-SR: SFP+ 10GBASE-SR short range transceiver         | LC duplex, MMF                | 300 m (OM3) / 400 m (OM4)
10G-10G    | BN-CKM-SP-LR: SFP+ 10GBASE-LR long range transceiver          | LC duplex, SMF                | 10 km
40G-10G    | BN-QS-SP-CBL-1M/3M/5M: QSFP+ DAC breakout cable (1/3/5 m)     | integrated twinax             | up to 5 m
40G-10G    | BN-CKM-QS-SR: QSFP+ 40GBASE-SR transceiver                    | MTP-to-LC breakout cable, MMF | 100 m
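The same kind of lookup works for 10G, including the 40G-to-10G breakout case. The reach figures below come from the table above; the data-driven structure is an illustrative sketch, not a vendor tool:

```python
# Data-driven selection for 10G attachment, including 4x10G breakout.
# Each entry: (max distance in meters, description).
OPTIONS_10G = [
    (7,      "SFP+ DAC (BN-SP-CBL-xM), 0.5-7 m"),
    (300,    "BN-CKM-SP-SR 10GBASE-SR + OM3 MMF"),
    (400,    "BN-CKM-SP-SR 10GBASE-SR + OM4 MMF"),
    (10_000, "BN-CKM-SP-LR 10GBASE-LR + SMF"),
]
OPTIONS_40G_TO_10G = [
    (5,   "QSFP+ DAC breakout (BN-QS-SP-CBL-xM), 4x10G"),
    (100, "BN-CKM-QS-SR 40GBASE-SR4 + MTP-to-LC breakout"),
]

def pick(options, distance_m):
    for reach, desc in options:
        if distance_m <= reach:
            return desc
    return "no listed option reaches that far"

print(pick(OPTIONS_10G, 250))          # 10GBASE-SR over OM3
print(pick(OPTIONS_40G_TO_10G, 60))    # SR4 with MTP-to-LC breakout
```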

40G Connection of IBM G8264 Switch

For 40G connections, both fiber optic and copper links can be built with different components. A 40GBASE QSFP+ to QSFP+ DAC can connect the IBM G8264 to target devices up to 7 meters away. With multimode fiber and 40GBASE-SR4 QSFP+ transceivers, distances up to 100 meters (OM3) or 150 meters (OM4) can be reached. For long-distance 40G transmission, a 40GBASE-LR4 QSFP+ transceiver with single-mode fiber and LC connectors is suggested. Related components for the IBM G8264 switch are summarized in the following chart.

Module                                        | Cable & Connector  | Distance
49Y7884: QSFP+ 40GBASE-SR transceiver         | MTP connector, MMF | 100 m (OM3) / 150 m (OM4)
00D6222: 40GBASE-LR4 QSFP+ transceiver        | LC connector, SMF  | 10 km
BN-QS-QS-CBL-1M/3M: QSFP-to-QSFP DAC (1/3 m)  | integrated twinax  | 1-7 m
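One practical consequence of these tables: multimode reach shrinks as the data rate rises, so fiber runs that work at 10G may be too long at 40G. Below is a small check of existing link lengths against both rates; the reach figures come from the tables above, while the link names and lengths are made-up examples:

```python
# Will existing multimode runs survive an upgrade from 10G (SR) to 40G (SR4)?
REACH_M = {("OM3", 10): 300, ("OM4", 10): 400,
           ("OM3", 40): 100, ("OM4", 40): 150}

links = [("row1-row2", "OM3", 90), ("row1-core", "OM4", 140),
         ("row2-core", "OM3", 120)]          # hypothetical link lengths

for name, fiber, length in links:
    ok10 = length <= REACH_M[(fiber, 10)]
    ok40 = length <= REACH_M[(fiber, 40)]
    print(f"{name}: {fiber} {length} m -> 10G {'OK' if ok10 else 'NO'}, "
          f"40G {'OK' if ok40 else 'NO'}")
# row2-core works at 10G but exceeds OM3 reach at 40G.
```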
Conclusion

To make full use of switches in the data center with great flexibility, both the selection of the switch and the cabling solutions are very important. The IBM G8264, as a highly flexible switch, is an ideal solution for data centers upgrading to 40G. The modules and cables mentioned above are all provided by FS.COM; they are IBM G8264 compatible and fully tested on IBM G8264 switches. Kindly contact sales@fs.com for more details if you are interested.

Clean the Dirt and Dust In Data Center

dirt and dust in data center

If you are in a data center, wipe your finger along a distribution cabinet or a patch panel, then look at it: can you see the scene shown in the following picture? Your finger comes away with dust or dirt. The situation is familiar to most telecom engineers working in data centers, yet how many people really care about it? You might recognize that the data center needs cleaning, but only ever think about it. And this is just the contamination you can see or touch directly; what about the dust and dirt inside the equipment? Over time, without timely cleaning, the accumulation of dirt and dust leads to problems like overheating and various network failures. And that is just the start of the trouble; more follows if no proper action is taken.

Why Clean the Data Center?

What would happen if there were no regular cleaning in a data center? As mentioned, the most direct result of contamination is overheating. Why? Dust and pollutants in the data center are usually lightweight, so wherever there is airflow, they move with it. The data center's cooling depends largely on server fans, which draw dust and dirt into the cooling system. The accumulation of these contaminants can cause fan failure or static discharge inside equipment; heat dissipation takes longer and heat-removal efficiency drops. The following picture, which shows contamination at a server fan air intake, answers the question intuitively.

dirt and dust at server

With the cooling system affected by dust and dirt, the risks to the data center increase greatly. Contaminants do not stop at the cooling system; they settle in every place they can reach. In addition, today's data centers depend heavily on electronic equipment and fiber optic components such as fiber optic connectors, which are very sensitive to contamination. Problems like power failures, loss of data, and short circuits can occur if contaminants are not cleaned away. Worse, a short circuit can cause a fire in the data center, leading to irreparable damage (shown in the following picture).

Fire damage in data center equipment

Dust and dirt also greatly affect the life span of data center equipment as well as its operation. Cleanliness and uptime usually go hand in hand: the uptime of a data center drops when there are too many contaminants. Compared with the cost of restarting the data center and repairing or replacing equipment, cleaning the data center regularly is a good deal, reducing downtime and extending the life span of the infrastructure.

Last but not least, data center cleaning offers the aesthetic appeal of a clean, dust-free environment. Although this is not the main purpose, a clean data center presents a more desirable working environment for telecom engineers, especially those who need to install cable under a raised floor or work overhead on racks and cabinets. No one would object to a clean data center.

Contaminant Sources in the Data Center

It is clear that data center cleaning is necessary. But how do you keep a data center clean? Before taking action, consider where the contaminants come from. Generally, there are two main sources: inside the data center and outside it. Internal contaminants are usually particles from air-conditioning-unit fan belt wear, toner dust, packaging and construction materials, human hair and clothing, and zinc whiskers from electroplated steel floor plates. External sources include cars, electricity generation, sea salt, natural and artificial fibers, plant pollen, and wind-blown dust.

Data Center Cleaning and Contaminant Prevention

Knowing where the dust and dirt come from, here are some suggestions and tips for reducing contaminants.

  • Reduce data center access. Limiting entry to necessary personnel only reduces external contaminants.
  • Use sticky mats at entrances to the raised floor; they remove most contaminants from shoes.
  • Never unpack new equipment inside the data center; establish a staging area outside for unpacking and assembling equipment.
  • No food, drink, or smoking in the data center.
  • Most sites are required to supply fresh-air make-up to the data center; remember to replace the filters on a regular basis.
  • Cleaning frequency depends on activity in the data center; vacuum the floor more often as traffic increases.
  • Inspect and clean fiber optic components regularly, especially fiber optic connectors and the interfaces of switches and transceivers.
  • Clean both the inside and outside of racks and cabinets regularly.
Conclusion

The data center is today's information factory, handling vast amounts of information and data, and cleaning it is necessary. On one hand, if the "factory" is polluted by dust and dirt, how can it provide reliable, high-quality services? On the other hand, data center cleaning extends the life span of equipment and saves cost on both cooling and maintenance.

Source: http://www.fs.com/blog/clean-the-dirt-and-dust-in-data-center.html

Do You Know Virtual Data Center?

virtual data center

Data centers are important to our daily life and work. In traditional data centers, however, engineers struggle with using multiple tools for provisioning, managing, and monitoring servers, which is complex, expensive, inefficient, and labor-intensive. Is there a better way to solve those problems and improve the data center? The answer is virtualization. A modern, virtualized data center is considered the future of the data center and is known as a virtual data center.

What Is Virtual Data Center?

A traditional data center grows by adding more compute, more storage, and more networking. A virtual data center, by contrast, operates on virtualization technology that partitions a single physical server into multiple operating systems and applications, thereby emulating multiple servers known as virtual machines (VMs). In other words, in a virtual data center all the infrastructure is virtualized, and control of the hardware configuration is automated through an intelligent software system. This is the largest difference between traditional and virtual data centers, as well as one of the outstanding advantages of the virtual data center.

Benefits of Virtual Data Center

As mentioned, data center virtualization is considered the future of the data center. But what can a virtual data center provide, and what advantages does it have over a traditional one? Its main capabilities and advantages are illustrated in this part.

Cost Saving: hardware is usually the highest cost in a data center, and the development of a traditional data center depends on hardware and storage. Beyond the cost of the hardware itself, its management and maintenance depend largely on manual labor, whose cost can go well beyond that of the hardware. With a virtual data center, these costs are cut substantially.

Easier Management and Maintenance: in a traditional data center, heat dissipation, physical servers, data backup, and testing all demand attention, and a great deal of labor and money goes into this daily management and maintenance. Virtualizing the servers onto less physical hardware generates less heat. Backup is also much easier in a software-operated virtual data center: both full backups and snapshots of virtual servers and virtual machines can be taken, and VMs can be moved from one server to another faster and more easily. As for testing, a virtual data center lets you isolate test environments from end users while keeping them online; when you have perfected your work, you deploy it as live.
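To make the snapshot-and-migrate idea concrete, here is a toy model of the workflow. It is not a real hypervisor API (tools such as libvirt or vSphere expose equivalent operations); every name in it is invented for illustration.

```python
# Toy model of VM snapshot and migration bookkeeping.
# Illustrative only; real hypervisors expose equivalent operations.
import copy, datetime

class VM:
    def __init__(self, name, host):
        self.name, self.host = name, host
        self.state = {"disk": "...", "memory": "..."}
        self.snapshots = []

    def snapshot(self):
        """Capture a point-in-time copy of the VM's state."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        self.snapshots.append((stamp, copy.deepcopy(self.state)))
        return stamp

    def migrate(self, new_host):
        """Move the VM to another physical server without rebuilding it."""
        print(f"{self.name}: {self.host} -> {new_host}")
        self.host = new_host

    def restore(self):
        """Roll back to the latest snapshot, e.g. after a failure."""
        stamp, state = self.snapshots[-1]
        self.state = copy.deepcopy(state)
        print(f"{self.name}: restored snapshot {stamp}")

vm = VM("web01", host="server-a")
vm.snapshot()            # back up before maintenance
vm.migrate("server-b")   # free server-a for maintenance
vm.restore()             # recover quickly if something goes wrong
```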

closer-to-cloud

Faster Recovery: if a physical server dies, redeployment can take a long time, but with virtualization it can happen within minutes, with just a few clicks. Facing a disaster, a virtualized data center with up-to-date snapshots of your VMs lets you get back up and running quickly. With a virtual data center, a host of issues simply goes away.

Closer to Cloud: with virtual machines, you are closer to enjoying a cloud environment. You may even reach the point where you can deploy VMs to and from your data center to create a powerful cloud-based infrastructure. Beyond the virtual machines themselves, virtualization technology moves you toward a cloud-based mindset, making an eventual migration all the easier.

The First Step to Achieving a Virtual Data Center

These outstanding benefits are the main factors driving data centers toward virtualization. However, to take full advantage of a virtual data center, many challenges lie ahead, such as security, resource allocation, and maximizing infrastructure investments. One of the most conspicuous is bandwidth. As mentioned, a virtual data center is automated through intelligent software, which needs high bandwidth to work well. 10 Gigabit Ethernet (10GbE) is now widely adopted and can be a good choice for a storage access network, but it can become a bottleneck on the journey to a virtual data center. To fully realize the benefits of a virtualized environment, implementing 40/100GbE is recommended as the first step toward virtualization: more vendors are supplying 40/100GbE solutions, its cost keeps falling, and its growing traction has made it the standard for the next generation of high-bandwidth virtualized applications.
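A back-of-the-envelope calculation shows why uplink bandwidth becomes the bottleneck as hosts get denser. The consolidation ratio and per-VM rate below are assumptions chosen for illustration, not measurements:

```python
# Rough uplink check: consolidated hosts multiply traffic per link.
vms_per_host = 40          # assumed consolidation ratio
gbps_per_vm = 0.5          # assumed average per-VM demand, Gbps

demand = vms_per_host * gbps_per_vm
for link in (10, 40, 100):  # GbE uplink options
    verdict = "fits" if demand <= link else "oversubscribed"
    print(f"{demand:.0f} Gbps demand on {link}GbE uplink: {verdict}")
# 20 Gbps of demand oversubscribes a 10GbE uplink but fits in 40/100GbE.
```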

Conclusion

The trend toward data center virtualization is clear, given the great benefits and potential applications a virtual data center can bring. 40/100GbE is just the first step of the journey; barriers such as security and resource allocation remain to be broken down.

Source: http://www.fs.com/blog/do-you-know-virtual-data-center.html