Hi folks,
I am specifying a new VMware cluster. The core network fabric[1] will be 10gig (or better) - 2 links per host, one each to a separate switch for redundancy. The switches will have an interlink. The main Internet connectivity will go on separate 1gig links.
[1] iSCSI, vMotion, management etc.
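For context, on the ESXi side the plan is a single standard vSwitch bonded to both 10gig uplinks, roughly along the lines of the pyVmomi sketch below - all names, vmnic IDs and values are placeholders rather than the real config:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders throughout: vCenter address, credentials, host name, vmnic IDs.
ctx = ssl._create_unverified_context()   # lab shortcut; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

# One standard vSwitch for the 10GbE fabric, bonded to both uplinks so either
# top-of-rack switch can fail without taking down iSCSI/vMotion/management.
spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    mtu=9000,   # jumbo frames for the storage/vMotion fabric; adjust to taste
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]),
)
net_sys.AddVirtualSwitch(vswitchName="vSwitch-Fabric", spec=spec)

Disconnect(si)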
Someone put it to me that there may be some cost savings in using DAC links rather than the 10GbE copper I was planning. I understand the latter, but I have never used DAC cables. The only time I've gone near SFP(+), I was actually plugging (mini)GBICs in to use fibre.
Apart from the restricted length (not a problem, all in one cabinet), are there any subtle pros and cons to using DAC cables and matching switches and network cards in the ESXi hosts? For this discussion, assume I will be using vSAN, so there will be no hardware SAN to worry about.
With SFP+, is it true that I might be able to get the links running at 4x10gig per link with ESXi 6? Sorry - silly question, but this particular technology is new to me.
Ta
Tim
We have used DAC cables in our data center for two years. Our IBM BladeCenter has 4 connections to our Juniper core switch. Our Pure Storage array has 4 connections to our core switch. Works great, never had a problem.
Hi,
Might I ask for more information about the hardware used in this solution? Type of switches, servers, etc.?
Tim,
In terms of networking capabilities there is absolutely no difference between 10GBase-T (copper, RJ45), 10GBase-SR (fiber, SFP+) or DAC (twinax, SFP+). In terms of performance, fiber and DAC are comparable, but 10GBase-T has slightly higher latency because additional error correction is required on the physical link. In terms of power usage, 10GBase-T has been much more power hungry than fiber or DAC; I do not know if this has changed in newer network chips.
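If you want to verify that yourself once the hardware is racked: the hypervisor reports exactly the same negotiated 10G link whatever the media. A small pyVmomi sketch along these lines will dump what every host sees (the vCenter address and credentials below are placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for nic in host.config.network.pnic:
        speed = nic.linkSpeed.speedMb if nic.linkSpeed else 0   # 0 = link down
        print(f"{host.name} {nic.device}: {speed} Mb/s, driver {nic.driver}")
Disconnect(si)

Each pNIC should show 10000 Mb/s whether it terminates in a DAC, an SR optic or an RJ45.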
I personally always prefer fiber and DAC over 10GBase-T, and DAC over fiber if the distance is short enough (10 meters or less).
Tomi