VMware Horizon Community
scrappy
Contributor

Pilot View 4.5 Capacity planning

I am looking at testing 100 View sessions accessed from repurposed older desktops.

I would like advice on whether the following hardware config makes sense for this scenario:

Scenario:

College lab environment - Win 7, MS Office, and a few other non-intensive apps. We may try Photoshop and some other Adobe products as well.

Network switches - 100 Mbit to the desktops, 1000 Mbit to the servers

2 Dell 710 servers - 48 GB memory and 4 TB disk space each

Usage: strictly lab use, so non-persistent

Would this work well in theory? For those of you who have done VDI, do you see any possible issues?

This would be a pilot, so any suggestions on how to stress test would be helpful as well.

Oh, in addition, we have a strong requirement for students to view YouTube videos and webcast sessions from trainers, so sound and video must stay in sync. Comments?

Cheers

regnak
Hot Shot

Hi,

I'd normally give Windows 7 a pair of vCPUs and 2 GB of RAM. With 48 GB you'll be hard pressed to deploy 50 per server.

The storage size sounds OK, but how is it delivered? If you're not using SSD anywhere, I'd be concerned that you're going to hit performance issues from not having enough IOPS. Give us a breakdown of the number of disks and how they are configured so we can work out the IOPS and see if you'll survive. The latest VMware reference architecture used a pair of Intel SSD drives as local storage plus cheaper SAN storage for their testing and got very good performance out of it.

Also, how many cores per CPU in each box? I'd be looking at PCoIP to give you the best results, but it places a heavier load on the servers, so you could end up being CPU bound, i.e. max out your cores before hitting 50 users.
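
To make that concrete, here is a rough back-of-the-envelope sketch of the RAM and IOPS math (the per-VM figures, spindle count, and per-disk IOPS below are just assumptions - plug in your real numbers):

# Rough VDI capacity sketch -- all figures below are assumptions, not measurements.

HOST_RAM_GB = 48            # per Dell 710 in this thread
VM_RAM_GB = 2               # Windows 7 allocation suggested above
HYPERVISOR_OVERHEAD_GB = 4  # assumed reservation for ESX and per-VM overhead

vms_by_ram = (HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB) // VM_RAM_GB
print(f"VMs per host by RAM alone (no overcommit): {vms_by_ram}")  # ~22

# Steady-state IOPS estimate for the datastore.
IOPS_PER_VM = 10        # assumed light task-worker load; boot/login storms are far higher
DISK_COUNT = 6          # example spindle count -- replace with your real layout
IOPS_PER_DISK = 130     # assumed 15K SAS; 10K SAS ~80, 7.2K SATA ~75
READ_RATIO = 0.4        # VDI workloads are typically write-heavy
RAID_WRITE_PENALTY = 2  # RAID 10; RAID 5 would be 4

raw_iops = DISK_COUNT * IOPS_PER_DISK
usable_iops = raw_iops * READ_RATIO + raw_iops * (1 - READ_RATIO) / RAID_WRITE_PENALTY
print(f"Usable IOPS: {usable_iops:.0f} -> roughly {usable_iops / IOPS_PER_VM:.0f} VMs")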

Anyway, just some thoughts!

Mike

Tibmeister
Expert

For a lab scenario this would work fine. I wouldn't count on it for production, but I also haven't tested the page sharing and memory overcommit that vSphere 4.1 offers. I think 4 TB is more than you would need for linked clones; I am currently running 50 WinXP linked clones on 500 GB of space for the clones plus 50 GB for the replica and master images, with room to spare. I don't think there's any performance difference between linked clones and persistent View desktops.
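
To put rough numbers on that for a 100-session pilot (the replica and delta sizes below are assumptions, not measured values):

# Linked-clone storage sizing sketch -- sizes are assumptions, not from a real deployment.
CLONE_COUNT = 100        # pilot target from the original post
REPLICA_GB = 25          # assumed size of the Win 7 replica
MASTER_GB = 25           # assumed size of the parent/master image
DELTA_PER_CLONE_GB = 10  # assumed worst-case delta growth before a refresh/recompose

clone_space = CLONE_COUNT * DELTA_PER_CLONE_GB
total = clone_space + REPLICA_GB + MASTER_GB
print(f"Clone deltas: {clone_space} GB, total with replica/master: {total} GB")
# ~1 TB for 100 non-persistent clones, so 4 TB per server is plenty of capacity;
# the open question is whether the spindles behind it can deliver the IOPS.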

As far as the network goes, I would definitely use the PCoIP protocol for a 100 Mbit connection (figuring that as the lowest bandwidth); I have seen really good performance out of this protocol.

Dual vCPUs are definitely the way to go for Flash and other high-frame-rate content; it gives enough horsepower for the PCoIP server to do its work while leaving enough for the guest to run smoothly. I would stick with a minimum of 48 GB of RAM.

The best suggestion I can give is to try it out with View 4.5 and see how things work for you; it may be exactly what you need, or you may need to add more RAM to the host servers, which is easy enough to do.

scrappy
Contributor

Thanks for the config suggestions.

Our current config is striped across 3 disks per volume for testing.

On that note, when it comes to IOPS, if we were to use non-persistent images, would the better config be to not use the SAN at all?

If the testing looks good, I am thinking further testing should involve SSDs as local storage (1 TB) and 64 GB of memory.

Also, if we were to test persistent images for a few users, would the best config be SSD local to the server (golden image) with changes kept on the SAN? Our SAN is an EqualLogic with SATA drives.

Any perceived issues with this config for persistent sessions?

Thanks everyone.

mittim12
Immortal

Also, if you want to do some stress testing, you can try VMware's RAWC (http://www.vmware.com/files/pdf/VMware-WP-WorkloadConsiderations-WP-EN.pdf) or try to get on the beta of VMware View Planner. I haven't heard much about Planner since VMworld, so I'm not sure how much help that would be.

If you found this or any other post helpful, please consider using the Helpful/Correct buttons to award points.

Twitter: http://twitter.com/mittim12

regnak
Hot Shot

Hi,

SATA is going to be hard pressed to deliver the IOPS. The latest View reference architecture used 2 x Intel SSD drives with non-persistent linked clones - they placed the replicas and C: drives on SSD, and the user data drives and other workloads on the SAN (15K SAS disks). This arrangement might work for you with the SATA, but it will need to be tested for performance. They used Dell R610s, I think, for their servers, with a NetApp SAN. The increase in RAM you're proposing would definitely help.

RAWC is a great tool - it's easy to spin up a few hundred VMs and test. The current version offers Windows 7 and Office 2007 tests; a later one (not released yet) works with 64-bit Windows 7 and Office 2010. You can get it from a partner - they confirmed at VMworld that you are allowed to use it directly yourself; the partner doesn't have to be directly involved, they just have to supply it.
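
For illustration, here is roughly how that SSD/SAN split could look for the 100-desktop pilot - every size below is an assumption, not a figure from the reference architecture:

# Tier-placement sketch for non-persistent linked clones -- all sizes are assumptions.
USERS = 100

ssd_tier_gb = {
    "replicas": 2 * 25,      # assumed one ~25 GB replica per host, two hosts
    "os_deltas": USERS * 2,  # assumed small C: deltas, kept thin by refreshing at logoff
}
san_tier_gb = {
    "user_data_disks": USERS * 5,  # assumed persona/user data disk per user
    "infrastructure_vms": 300,     # vCenter, View broker, file server, etc. (assumed)
}

print("SSD tier:", sum(ssd_tier_gb.values()), "GB")  # ~250 GB across the local SSDs
print("SAN tier:", sum(san_tier_gb.values()), "GB")  # ~800 GB on the EqualLogic SATA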

For persistent VMs you might look at a profile management product, especially if you can get discounted licensing for being in the education sector or something like that. Liquidware Labs ProfileUnity is one of the cheapest on the market - worth evaluating! It might remove the need for user data disks and save you a SAN?

You could get away without a SAN, but there are some server workloads that you won't have much redundancy for (no HA / vMotion, etc.) - the View broker, domain controller, file servers, possibly vCenter, and so on. You should at least be able to go with a cheaper SAN, as you aren't serving hundreds of VMs from it. Also, with the View licenses you can use VDR as your backup tool.

Just some more thoughts for you!

Mike

coffeelover
Contributor

We are a community college in Florida and are implementing systems similar to yours.

However, while our servers are also Dell 710s with 48 GB of RAM, they only have dual 15K 144 SAS drives. We are using an EMC AX4-5i for iSCSI storage with a single shelf of 12 15K 300 GB SAS drives. Each server has 6 NICs, with two dedicated to iSCSI storage. Performance is quite good, and the SAN itself was only about $11,000 ... that does not include the iSCSI switches. We did get quotes from Dell, but their pricing for the same SAN was much higher, so we are actually using an EMC-branded EMC SAN! :)

So far we have a 70-user setup at a shared-use facility, and the major limitation appears to be memory. With the three servers I have observed satisfactory performance with 120 VMs enabled.
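
For comparison, the rough memory math on a setup like that looks something like this (the per-VM allocation and host overhead are assumptions, not measured figures):

# Quick memory check for 3 hosts x 48 GB -- per-VM allocation and overhead are assumptions.
HOSTS = 3
HOST_RAM_GB = 48
HOST_OVERHEAD_GB = 4  # assumed hypervisor/management overhead per host
VM_RAM_GB = 1         # assumed allocation for basic Office/Internet desktops

usable_gb = HOSTS * (HOST_RAM_GB - HOST_OVERHEAD_GB)
print(f"{usable_gb} GB usable -> ~{usable_gb // VM_RAM_GB} VMs without overcommit")
# ~132 VMs at 1 GB each, which lines up with memory being the first limit around 120.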

-Patrick-

scrappy
Contributor

Very interesting. :)

Just to clarify: you mention 120 VMs - is this spread across 3 servers?

Are these 120 VMs running concurrently in various capacities?

Lastly, you mention satisfactory performance. Could you please be so kind as to elaborate a little?

Thanks very much.

coffeelover
Contributor

These are rather basic VMs with Office and Internet access and not much else. Still, I have spot-checked performance a few times using the View client on my desktop machine. The View system is at a joint-use building about 15 miles away. Anyhow, I have been impressed by the performance of the desktops.

We are implementing a new 100-user system on the main campus, primarily for a 60-station testing center and then for two labs. We will also be upgrading the Technology department's 30-user system to View 4.5 and evaluating the feasibility of offering e-learning graphics classes with Photoshop, Flash, and even AutoCAD on the targeted course lists.

Overall I am greatly impressed by the Dell 710 servers as well as the EMC AX4-5i.

NOTE: I have heard some good reports about StarWind's iSCSI server, but a decent box with SAS drive storage is going to run at least 4-5 thousand anyhow.

-Patrick-
