VMware Cloud Community
Alaskan_Malamut
Contributor

TCP Chimney Windows Server 2008 R2

I was wondering whether it is wise to enable or disable the TCP Chimney feature in Windows Server 2008 R2 when it is virtualized on VMware.

I saw an article regarding Windows Server 2008 R2 which recommends enabling the TCP Chimney feature when running on Hyper-V 2008 R2.

On VMware I only found the following article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100951...

which recommends disabling the TCP Chimney feature in Windows Server 2008 (in certain circumstances). Overall, what is recommended with regard to VMware? From my limited knowledge of the subject I would enable the feature, as is recommended on Hyper-V 2008 R2, since you pull the required processing load away from the virtual machine and place it directly on the hypervisor. Does the same recommendation hold for VMware?

Another blog article on the subject does not really clarify it, or tip the scale toward either enabling or disabling the feature:

http://virtualizations.wordpress.com/2011/11/05/disable-tcp-chimney-offload/
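
In case it helps others reading along, this is how I understand the setting is checked and toggled on 2008 R2 (netsh from an elevated prompt; shown here only for reference):

    # Show the global TCP parameters, including the current Chimney Offload State
    netsh int tcp show global

    # Disable (or re-enable) TCP Chimney offload globally
    netsh int tcp set global chimney=disabled
    netsh int tcp set global chimney=enabled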

5 Replies
mshember1
Enthusiast

TCP Chimney offload is used to place network workloads on the adapter, thus freeing CPU cycles.

If you have a really fast machine with plenty of memory and are network bound, turning it off might improve speed.

I saw reductions in run times by disabling it.  The VMs ran jobs which used network mounts and touched many files on those mounts.

It's easy enough to test.  I would try it out on the VMs first.  Run your job as you normally do and time it.  Disable the feature and then time it again.
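
To make that concrete, here is a rough sketch of the kind of test I mean, in PowerShell (the share path and copy job are just placeholders for whatever your real workload is):

    # Time the job with TCP Chimney offload in its current state
    Measure-Command { robocopy \\fileserver\share C:\temp /E }

    # Disable the offload, then run the identical job and compare the times
    netsh int tcp set global chimney=disabled
    Measure-Command { robocopy \\fileserver\share C:\temp /E }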

What didn't you like about the blog page?

Alaskan_Malamut
Contributor

My intention was not to decrease the value of the blog; the article is in essence the same as your answer. From which I conclude that there isn't really a best practice to follow here: disabling the TCP Chimney feature could improve performance, but that is something testing has to prove.

mshember1
Enthusiast
Accepted solution

Well?

A small confession: the blog is mine. :)  I only ask since feedback is always a good thing.  You may think you sound brilliant and communicate well, but people could end up looking confused.

There is no simple answer to your question, as it heads into the realm of performance analysis, where many factors come into play.

Examples, in no particular order:

Equipment: my company is large and can afford high-end servers.

Network: my company can afford 10G and high-speed WANs.

Type of work: what works in my build farms will not work for email servers or database servers.

Tuning always requires testing, and it requires re-testing in the future as networks and workloads change.

Depending on what your server is doing, I would run a performance measurement for a few days.  A week is usually good if it's a busy server.  Disable the feature and then measure again.
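
Something along these lines would capture a baseline (the counters, interval, and file names are just an example of what I would log):

    # Sample total CPU and NIC throughput every 30 seconds into a CSV for the baseline period
    typeperf "\Processor(_Total)\% Processor Time" "\Network Interface(*)\Bytes Total/sec" -si 30 -o baseline.csv -f CSV

    # After disabling the feature, log again to a second file and compare the two
    typeperf "\Processor(_Total)\% Processor Time" "\Network Interface(*)\Bytes Total/sec" -si 30 -o after-disable.csv -f CSV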

The benefit of Chimney offload, to me, was lost because it's designed for an OS to work directly with the NIC.  A guest is working with the VM host via the vSwitch.  If you have a simple setup and your server has decent power, you might see an increase.

Alaskan_Malamut
Contributor

Well, to be honest, our virtual server performance is good and we had no reason to look into it until a few weeks ago, when we started having intermittent problems with our two DFS file servers. As the logging did not give any lead on what was causing the issue, we opened an MS support call. After installing some private patches (which did not solve the issue), the Microsoft engineer asked us to disable the TCP Chimney feature. That triggered me to check whether there were any recommendations from VMware concerning this feature.

mshember1
Enthusiast

I would go with disabling it, even though it sounds like Microsoft is guessing.

It's not a case where the speed will either increase to unheard-of levels or drop to a snail's pace.

You might see DFS benefit from this.

Keep in mind TCP Chimney offload was designed for Windows Server to offload work to the NIC.  You have a virtualized server; it's not talking directly to the NIC.
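
If you want to see whether the guest's connections are ever actually offloaded, I believe netstat can show the offload state per connection (on 2008 / 2008 R2 and later):

    # List active TCP connections with their offload state;
    # "InHost" means the Windows stack handles the connection,
    # "Offloaded" means the NIC took it over via chimney
    netstat -t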
