I am back, fellow blog readers! I have been knee-deep in studying for DCUFI, and I came across the topic I was most excited to read about: Unified Fabric! I am going to share some of what I learned in this post.
Unified Fabric aims to reduce cabling in the data center, meaning that when you provision a new server or chassis you now have the option to run 1 or 2 cables instead of 2 or 4. As some may have guessed, Unified Fabric unifies the Ethernet and storage networks by utilizing FCoE, 10 Gigabit Ethernet, and a new adapter called a Converged Network Adapter, AKA the CNA.
The CNA can be either a single or a dual port card, and it processes both Fibre Channel and Ethernet. But wait one second, this is a Unified Fabric post, so why does the CNA understand Fibre Channel, shouldn't it speak FCoE? Well, therein lies the magic of the CNA. Let's take a look at one from a basic standpoint.
As depicted above, the CNA has both a FC driver and an Ethernet driver. From the point of view of the Nexus 5548, it is simply sending FCoE traffic down to the CNA, and it is the responsibility of the CNA to inspect each frame and decide what to do with it: either hand it to the FC driver or hand it to the Ethernet driver.
As depicted in the diagram, there is only one cable running from the 5548 to the CNA. In most cases you would want two for SAN and LAN redundancy, but either way, we have a unified fabric.
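The "magic" of the CNA described above boils down to looking at the EtherType of each incoming frame. Here is a minimal sketch of that demux decision in Python; the function name is hypothetical, and the EtherType values are the real ones registered for FCoE and FIP:

```python
# Hypothetical sketch of the CNA's demux logic: the EtherType field tells the
# card whether a frame is FCoE (handed to the FC driver) or ordinary Ethernet.
ETHERTYPE_FCOE = 0x8906  # FCoE encapsulation
ETHERTYPE_FIP = 0x8914   # FCoE Initialization Protocol

def classify_frame(frame: bytes) -> str:
    """Return which driver a received frame should be handed to."""
    # Bytes 12-13 of an untagged Ethernet frame hold the EtherType.
    # (Real FCoE frames carry an 802.1Q tag, which shifts this offset by 4.)
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype in (ETHERTYPE_FCOE, ETHERTYPE_FIP):
        return "fc_driver"       # decapsulate and pass the FC frame to the storage stack
    return "ethernet_driver"     # normal LAN traffic

# An FCoE frame (EtherType 0x8906) goes to the FC side:
fcoe_frame = bytes(12) + (0x8906).to_bytes(2, "big") + bytes(46)
print(classify_frame(fcoe_frame))  # fc_driver
```

The server's operating system never knows the difference: it just sees a normal HBA and a normal NIC presented by the two drivers.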
With the advent of Unified Fabric, a few other standards pertaining to Data Center Bridging (DCB) have been introduced, and some of them add really neat new features to some old tried-and-true network concepts:
Priority Flow Control (802.1Qbb) – Before we examine PFC, we first need to understand what flow control is. Flow control is a mechanism by which a receiving switch can tell the sender to stop transmitting when data is arriving faster than the receiver can keep up with. The only time you really have to worry about this is when a faster interface is sending to a slower interface, for example 10G to 1G. Let's look at the diagram below.
What we see is that Switch 1 is sending data at a rate Switch 2 cannot handle, and Switch 2's ingress buffers are filling up too fast. Switch 2 sends a PAUSE frame to Switch 1, which looks like the following:
The PAUSE frame stops all traffic on the link, regardless of type. Any frames that are dropped anyway have to be recovered somehow: in the case of TCP this is done via TCP retransmission, but in the case of UDP you simply lose the packets.
What if there was a way to stop a specific CoS value from sending while allowing the other CoS values to keep transmitting? Enter Priority Flow Control. PFC can pause based on CoS value, so you can designate CoS values that are never paused and CoS values that get paused once their buffers reach a certain level. Once the pause time expires, traffic for that CoS value starts to flow again.
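Structurally, a PFC frame is the PAUSE frame from above extended with a per-priority enable vector and eight independent pause timers, one per CoS value. Here is a hedged sketch (the function name is illustrative; opcode 0x0101 and the field layout are from 802.1Qbb):

```python
# A sketch of a PFC (802.1Qbb) frame: same reserved DA and MAC Control
# EtherType as 802.3x PAUSE, but opcode 0x0101, a 2-byte priority-enable
# vector, and eight 2-byte timers so each CoS can be paused independently.
def build_pfc_frame(src_mac: bytes, pause_times: dict[int, int]) -> bytes:
    dst = bytes.fromhex("0180C2000001")
    header = dst + src_mac + (0x8808).to_bytes(2, "big") + (0x0101).to_bytes(2, "big")
    enable_vector = 0
    timers = b""
    for cos in range(8):
        quanta = pause_times.get(cos, 0)
        if quanta:
            enable_vector |= 1 << cos      # mark this priority as paused
        timers += quanta.to_bytes(2, "big")
    frame = header + enable_vector.to_bytes(2, "big") + timers
    return frame + bytes(60 - len(frame))  # pad to minimum size (FCS excluded)

# Pause only CoS 3 (typically the FCoE class) for the maximum time;
# CoS 0-2 and 4-7 keep transmitting untouched.
pfc = build_pfc_frame(bytes(6), {3: 0xFFFF})
```

This is why FCoE can ride over Ethernet at all: the storage class can be made lossless with PFC while ordinary LAN traffic is left alone.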
The next piece of DCB is ETS, or Enhanced Transmission Selection. With ETS you assign each CoS class a guaranteed percentage of the link's bandwidth. Let's say you allocate bandwidth based on the following.
Let's say CoS 3, 4, and 5 are not transmitting and CoS 2 needs to go above its 10%. As long as the higher CoS values do not need the bandwidth, CoS 2 can borrow from CoS 3, 4, and/or 5. Once one of the higher CoS values does want to transmit, CoS 2 must slow back down toward its minimum so that the class that wants to send data can start.
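The borrowing behavior above can be modeled with a few lines of Python. This is a toy model, not the actual hardware scheduler, and the percentage weights here are assumed for illustration: idle classes lend their guaranteed share to the busy ones, proportional to the busy classes' configured weights.

```python
# A toy model of ETS bandwidth sharing. In real hardware this is a weighted
# scheduler on the egress port; here we just redistribute idle classes'
# guarantees pro rata among the classes that are actively transmitting.
def ets_share(weights: dict[int, int], active: set[int]) -> dict[int, float]:
    """Return the percentage of link bandwidth each active class receives."""
    total = sum(weights[c] for c in active)
    return {c: 100.0 * weights[c] / total for c in active}

weights = {2: 10, 3: 20, 4: 30, 5: 40}   # assumed guarantees, summing to 100

# With CoS 3, 4, and 5 idle, CoS 2 can borrow the whole link:
print(ets_share(weights, {2}))           # {2: 100.0}

# Once CoS 5 starts sending, CoS 2 falls back toward its guarantee:
print(ets_share(weights, {2, 5}))        # {2: 20.0, 5: 80.0}
```

The key property is that a class's configured percentage is a floor, not a cap: unused bandwidth never sits idle.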
The final piece of DCB is DCBX, or Data Center Bridging Exchange. DCBX utilizes LLDP to let devices discover each other and each other's DCB capabilities, and it allows configuration replication between neighbors for the DCB protocols so that all devices stay in sync with each other.
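The replication part of DCBX hinges on a "willing" bit carried in the LLDP TLVs: a device whose willing bit is set adopts its peer's DCB configuration, which is how a switch can push its PFC and ETS settings down to a CNA with no server-side configuration. Here is a rough sketch of that resolution; the dict fields are illustrative, not the actual TLV encoding:

```python
# A rough model of the DCBX willing-bit handshake. If the local device is
# willing and the peer is not, the local device adopts the peer's DCB
# settings; otherwise it keeps its own configuration.
def resolve_dcbx(local: dict, peer: dict) -> dict:
    """Return the DCB config this device should run after the exchange."""
    if local["willing"] and not peer["willing"]:
        return {**peer, "willing": True}   # sync to the peer's PFC/ETS settings
    return local                            # keep the locally configured values

switch = {"willing": False, "pfc_cos": [3], "ets_weights": {2: 10, 3: 90}}
cna = {"willing": True, "pfc_cos": [], "ets_weights": {}}

# The CNA is willing and the switch is not, so the CNA inherits the switch's
# lossless class and bandwidth weights:
print(resolve_dcbx(cna, switch)["pfc_cos"])   # [3]
```

In practice this means you configure PFC and ETS once on the switch, and every willing neighbor on the fabric falls in line automatically.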