Hi,
Can anyone offer some guidance on how much RAM and how many vCPUs to use in the View Connection Server when designing a View 3.1 deployment for 250 users?
I'm planning on having two connection servers for failover.
Thanks
Peter Grant
I have created my virtual connection brokers as single-vCPU machines with 1024 MB RAM; this is considered a best practice. As the Connection Brokers are tied together via an ADAM database, there is no requirement to have a single large server.
For 250 users you could easily have two brokers coupled with NLB, each serving 125 users. I have personally designed this to manage up to 500 heavy users per host with no difficulty; a Connection Broker can handle 1,000 VDI sessions with no issue.
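As a quick sanity check on those numbers, here is a back-of-the-envelope sketch of the two-broker design. The 1,000-sessions-per-broker ceiling is the figure quoted in this thread, not an official VMware limit:

```python
# Hypothetical capacity check for a two-broker View deployment.
# The per-broker ceiling below is the figure quoted in this thread,
# not an official VMware limit.
TOTAL_USERS = 250
BROKERS = 2
SESSIONS_PER_BROKER_CEILING = 1000

normal_load = TOTAL_USERS // BROKERS          # both brokers up
failover_load = TOTAL_USERS // (BROKERS - 1)  # one broker down

print(f"per-broker load (normal):   {normal_load}")
print(f"per-broker load (failover): {failover_load}")

# Even with one broker down, all 250 sessions fit on the survivor.
assert failover_load <= SESSIONS_PER_BROKER_CEILING
```

The key point the failover math makes: size each broker so it can carry the whole user population alone, not just its half of it.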
If you found this or any other answer useful, please consider using the Helpful or Correct buttons to award points.
Tom Howarth VCP / vExpert
VMware Communities User Moderator
Blog: www.planetvm.net
Contributing author for the upcoming book "VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment" (http://my.safaribooksonline.com/9780136083214). Currently available on Rough Cuts.
Hey Peter,
Take a look at the View Manager Administration Guide on page 14 for the system requirements.
I would make sure the connection servers have at least 4 GB of RAM, since it states 3 GB or more for 50+ connections.
Hi,
Thanks, yes I've seen that.
I was planning on 4 GB RAM. I'm really wondering what other people have done in their deployments, and whether they would configure it with 2 or 4 vCPUs.
I'm inclined to go with 2 vCPUs based on this document.
Peter
Peter,
I would start with one vCPU and expand if you need to. We have seen cases where 1 vCPU is plenty, depending on the processor you have in your ESX host.
Thank you,
Joel C. Butler
NYS Department of Labor
Planning and Technology
Information Technology Specialist II
T: 518-457-9099
Joel.Butler@labor.state.ny.us
Thanks Joel,
Can I ask approximately how many concurrent VDI sessions you have?
Peter
Peter,
We are just in the demo phase at this time, waiting to see if management wants to use this software for our home agents. If they do, then we are looking at possibly 30-50 users to start.
If that happens, we will be using our current setup:
ESX Hosts:
3 IBM HS20 blades, 3.2 GHz
6 GB RAM
IBM DS800 SAN LUN
Connection Server:
1 vCPU
4 GB RAM
30 GB HDD
Thank you,
Joel C. Butler
NYS Department of Labor
Planning and Technology
Information Technology Specialist II
T: 518-457-9099
Joel.Butler@labor.state.ny.us
Hi,
I would recommend a 2-vCPU configuration. You can get away with 1 vCPU if you are going to use direct-mode connections only, but the View broker performs best with 2 vCPUs, especially if you are planning on using tunnel mode.
Most of the CPU is consumed by the tunnel service, so 2 vCPUs keep the server more stable during traffic spikes through the tunnel. If you are planning on direct connection mode only, then the View server's main tasks are to authenticate users and communicate with vCenter and Active Directory. In that case the second vCPU helps if you expect many concurrent or closely spaced connection requests from a large number of clients.
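To see why tunnel mode is the CPU-hungry path, here is a minimal, generic TCP relay sketch. Nothing in it is View-specific (the ports are made up); it just shows the work a tunnel adds that direct mode never puts on the broker: every byte of every session is received and re-sent by the middle box, in both directions.

```python
# Minimal generic TCP relay sketch (standard library only).
# Illustrative only -- not View's actual tunnel implementation.
import socket
import threading

def pump(src, dst):
    # Copy bytes until the source closes. This loop, running twice per
    # session (once per direction), is where a tunnel spends its CPU.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already gone

def tunnel(listen_port, target_host, target_port):
    # Accept one client, then relay both directions to the target.
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", listen_port))
    lsock.listen(1)
    client, _ = lsock.accept()
    target = socket.create_connection((target_host, target_port))
    threading.Thread(target=pump, args=(client, target), daemon=True).start()
    pump(target, client)  # relay desktop -> client in this thread
```

In direct mode the broker drops out after authentication and the client talks RDP straight to the desktop VM; in tunnel mode both copy loops above run on the broker for the life of the session, which is why the extra vCPU matters there.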
Hope it helps
dCerri
QA - VMware View Sys Test
If you are in a LAN situation, the clients connect directly to the VM via RDP, so the RDP traffic does not go through the View Connection Broker. The Connection Broker therefore does not need a lot of horsepower and can easily be a VM. No problem.
When users connect through a security server, that server sets up the tunnels, which is a lot more work.
Also, since VMs don't have the same network performance as a physical machine (a VM's latency can vary when the ESX server is busier), I would highly recommend a physical server for the security server(s). The back-end Connection Broker server(s) can still be VMs, as the load there is relatively light.
In real life, we get noticeably better and more consistent performance from physical security servers than from virtual ones. With VMs the connections can lag a bit more, depending on what other tasks the ESX host is running at the same time.
In all cases, the View server itself is a VM.