
What should an open data center be built from? Some details about the structure of Facebook's DC in Altoona

by admin

Over the past year, Facebook has contributed a lot of interesting work to open network hardware standards. While most developers keep such projects proprietary, Facebook opens its innovations to others. For a company whose goal is to help people exchange information, this model makes sense, and the open approach also saves Facebook money. James Taylor, the company's vice president of infrastructure, estimates that Facebook has saved more than $2 billion over the past three years by building hardware to its own specifications through the Open Compute Project (OCP).
Wedge, an open top-of-rack switch developed with the OCP community, also drew attention, and it was followed by 6-pack, FBOSS, and OpenBMC. Facebook built its new data center in Altoona, Iowa, USA, based on Open Compute Project developments, and published detailed information about the project. Here are some ideas from it that can be used in other companies' data centers, whatever their size.

Facebook Cluster Design

The first image shows Facebook's aggregated cluster design. The developers call this architecture "4-post." Up to 255 racks are aggregated via top-of-rack rack switches (RSWs) into high-density cluster switches (CSWs). Each RSW can have up to 44 10G downlinks and 4 or 8 10G uplinks. Four CSWs and the RSWs connected to them make up a cluster.
Four "FatCat" (FC) aggregation switches connect the clusters into a single system. Each CSW has a 40G connection to each of the four FCs. An 80G protection ring connects the CSWs within each cluster, and the FCs are connected by a 160G protection ring.
This structure is good for several reasons, including reliability and usability. For Facebook, however, it was not enough: many of the problems of this kind of architecture stem from the need for very large switches in the CSW and FC roles.
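To put the numbers above into perspective, here is a small Python sketch (written for this article, not Facebook code) that computes the rack-switch oversubscription implied by 44 10G downlinks and 4 or 8 10G uplinks:

```python
# Rough arithmetic for the "4-post" cluster figures quoted above.
# The port counts come from the article; the oversubscription ratio is
# just an illustration, not an official Facebook number.
RSW_DOWNLINKS_10G = 44        # server-facing 10G ports on a rack switch
RSW_UPLINK_OPTIONS = (4, 8)   # 4 or 8 x 10G uplinks towards the CSWs

for uplinks in RSW_UPLINK_OPTIONS:
    ratio = RSW_DOWNLINKS_10G / uplinks
    print(f"{uplinks} uplinks -> {ratio:.1f}:1 oversubscription at the rack switch")

# Output:
# 4 uplinks -> 11.0:1 oversubscription at the rack switch
# 8 uplinks -> 5.5:1 oversubscription at the rack switch
```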

What about Altoona?

The next-generation architecture of the data center in Altoona solves most of the problems of the cluster architecture while keeping its best features.
For example, instead of a few large switches it uses many small ones. Each switch carries only a small share of the load, so the failure of a single switch is not a significant problem.
Capital and operating costs are also lower in such a DC.
And the size and capacity of this type of data center can be increased in a very short time, far more cheaply than in conventional DCs.
The network topology of such a DC is shown in the following image, where you can quickly recognize a Clos fabric. Instead of dealing with hundreds of racks in one cluster, each topological unit here is responsible for 48 racks.
Below is a high-level diagram of this kind of data center topology.
Facebook representatives say that the modular design of the data center makes it possible to change the structure of the DC quickly by adding or removing elements, in minimal time and at minimal cost. This point of view is explained in more detail here:
The advantage of Facebook’s new type of data center is the ability to use small switches, in an architecture that allows the facility to scale to any size, without having to change the base units.
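As a rough illustration of what "scaling without changing the base units" can look like, here is a minimal Python sketch of a pod-based fabric. Only the 48-racks-per-pod unit comes from the article; the number of fabric switches per pod is an assumed placeholder for the example, not a published Facebook figure.

```python
from dataclasses import dataclass, field
from typing import List

RACKS_PER_POD = 48            # from the article: each unit serves 48 racks
FABRIC_SWITCHES_PER_POD = 4   # assumed value for this illustration

@dataclass
class Pod:
    name: str
    racks: int = RACKS_PER_POD
    fabric_switches: int = FABRIC_SWITCHES_PER_POD

@dataclass
class Fabric:
    pods: List[Pod] = field(default_factory=list)

    def add_pod(self) -> Pod:
        # Growing the DC means adding another identical pod;
        # existing pods are left untouched.
        pod = Pod(name=f"pod-{len(self.pods) + 1}")
        self.pods.append(pod)
        return pod

    def total_racks(self) -> int:
        return sum(p.racks for p in self.pods)

fabric = Fabric()
for _ in range(4):
    fabric.add_pod()
print(f"{len(fabric.pods)} pods, {fabric.total_racks()} racks")  # 4 pods, 192 racks
```

Doubling the facility is just more calls to add_pod; none of the existing pods or the switch models used in them have to change.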
Switches from Accton, Quanta, Celestica, Dell, and several other companies can be used. A Quanta switch with 32 40G ports costs $7,495, while the Juniper QFX5100 with 24 40G ports costs a little under $30,000.
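A quick back-of-the-envelope comparison of those two list prices (my own arithmetic, not from the article) shows why small merchant-silicon switches are attractive:

```python
# Cost per 40G port for the two switches mentioned above (approximate list prices).
switches = {
    "Quanta (32 x 40G)": (7_495, 32),
    "Juniper QFX5100 (24 x 40G)": (30_000, 24),
}

for name, (price_usd, ports) in switches.items():
    print(f"{name}: ~${price_usd / ports:,.0f} per 40G port")

# Quanta (32 x 40G): ~$234 per 40G port
# Juniper QFX5100 (24 x 40G): ~$1,250 per 40G port
```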

Hyperscaling – what is it?

Most telecom professionals apply the term only to giants like Amazon, Google, and Facebook. In fact, the term refers to the ability to change scale in a very short period of time. A hyperscale data center can be relatively small, but it can be scaled up at any time without fundamental changes to its infrastructure, using the same switches and connections that were used originally.
There may only be a few racks in a DC, but it may already be a hyperscale DC.
Another misconception about hyperscaling is the belief that DCs of this type are tuned to work optimally with one or a few major applications. This is not entirely true. Ideally, hyperscale design means being able to support hundreds of business applications as easily as a DC handles big data, search, or social media workloads.
As for Facebook's DC, you can add blocks and layers here without much trouble, as many as you need at any given time.
