VMware Cloud Community
vmproteau
Enthusiast

HP DL380 G7 Hardware Questions

We are planning a new ESXi 4.1 environment with DL380 G7 hosts. I'm trying to determine the ideal hardware configuration (if applicable). I have tried some HP avenues, but that's becoming a dead end, so I thought I'd try here:

  1. My understanding is the DL380 G7 has 2 HP NC382i dual-port adapters (4 ports total). I'm minimizing single points of failure as best I can and will be creating a couple of 2-NIC teams out of these. Does anyone know about the shared circuitry of the onboard NICs? Ideally, I'd like to split the physical teams across the ports with the least shared components (i.e., is pairing ports 1 and 3 better than 1 and 2, etc.)? See the sketch after the card list below for how I plan to verify the port-to-ASIC mapping.
  2. I'm also trying to determine whether there is an ideal PCIe card placement. I'm not entirely clear on the bus layout of these boards and am wondering if there is a preferred way to populate the slots to maximize performance and/or redundancy. The DL380 G7 will have the following PCIe cards:

  • 2 HP NC522SFP 10GbE dual-port NICs
  • 2 HP 82Q 8Gb dual-port PCIe FC HBAs
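
For reference, this is roughly how I plan to verify the port-to-ASIC mapping once the hosts arrive. It's only a sketch; I'm assuming the four onboard ports enumerate as vmnic0-vmnic3, which may differ on your hardware.

    # List the physical NICs with their drivers and PCI addresses. Ports on
    # the same dual-port ASIC normally share the same PCI bus:device and
    # differ only in the function number (e.g. 0000:03:00.0 and 0000:03:00.1).
    esxcfg-nics -l

If vmnic0/vmnic1 turn out to be one ASIC and vmnic2/vmnic3 the other, then teaming vmnic0 with vmnic2 should keep each team spread across both chips.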
sonofthor
Contributor

With regard to question 1, I cannot say. I guess you would have to assume it is part of the motherboard, which is itself technically a single point of failure. Which is why we have things like clusters and virtualization. If this host is part of an ESXi cluster, you would probably bring the node down and have it replaced within the next 4-24 hours anyhow, regardless of WHICH component on the motherboard failed, so what is the difference, in a sense? Just treat it as a motherboard failure at that point if the services provided by those NIC teams are really that important to your design.

With regards to question 2:

Pages 12-13 should answer your question about the PCIe slot configuration: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02159833/c02159833.pdf

Looks like it's going to be tight but doable, depending on the riser card configuration used.
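
Once the cards are in, you can also confirm from the console which PCI bus each one landed on and match that against the slot table in the QuickSpecs. A rough sketch; these are the standard ESXi 4.x commands, but output formats can vary by build:

    # NICs: lists each vmnic with its driver and PCI address
    esxcfg-nics -l

    # FC HBAs: lists each vmhba adapter with its driver and PCI location
    esxcfg-scsidevs -a

    # Full PCI device inventory, handy for matching bus numbers to slots
    lspci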

vmproteau
Enthusiast

Thanks for the reply. For the NICs, after looking at the board, it's pretty obvious that OB-1 and OB-2 make up one dual-port NIC and OB-3 and OB-4 make up the other. I admit this detail is slightly anal-retentive, and the odds of an onboard dual-port NIC failing independently are slim. Still, the paired ports do share circuitry, and I have seen this type of isolated failure before. By pairing OB-1 and OB-3 as the uplinks for my VM Network vSwitch (for instance), I insulate myself from an outage even in that rare occurrence.
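
In case it's useful to anyone else, this is roughly what that teaming looks like from the ESXi 4.1 console. A sketch only; I'm assuming OB-1 and OB-3 show up as vmnic0 and vmnic2 on my hosts, so adjust to whatever esxcfg-nics -l reports on yours:

    # Create the VM Network vSwitch and link one uplink from each onboard
    # dual-port ASIC so the team survives a failure of either chip
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic0 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # Add the VM port group to the new vSwitch
    esxcfg-vswitch -A "VM Network" vSwitch1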

Thanks for the link on question 2. We did squeeze them in just fine. My question was more about logical population based on shared motherboard buses. It used to be that some slots shared buses, so you would ideally place your cards to maximize bus utilization. I'm not seeing references to that with these new boards, so it's probably a non-factor.

sonofthor
Contributor
Accepted as solution

Some (many?) G6 models had these limitations, and there were older ones that had, for example, two x4 slots sharing an x8 bus. I know what you mean, but as the link shows, each slot appears to have fully independent lanes in most G7 configurations.

vmproteau
Enthusiast

Appreciate the replies. Thanks again.
