VMware Cloud Community
Khue
Enthusiast

Performance Change - SQL Related

I p2v'd an IBM 3850 that had two drives connected to a SAN. The 3850 had 8 GB of RAM and two quad-core processors, and it was running Windows Server 2003 Enterprise along with SQL Server 2005.

Nightly, a job would run that reindexed a database located on one of the drives. The reindex operated on a 60 GB database. I have metrics showing that on the physical hardware, the reindex job took roughly 30 to 35 minutes. When I p2v'd the server, I imported the two drives responsible for SQL I/O as RDMs in physical compatibility mode. The only other thing I changed was the number of CPUs available to the server: instead of the 8 CPUs available on the physical box, I allocated 4 CPUs to the virtual machine. The result of all this work is that the reindex job now takes roughly double the time. The most recent run took an hour and 20 minutes. During the run, processor use peaked at 40%, and it only spiked to that level during a small window of the reindex job. I am not sure where to begin to try to get some of the performance back. Can anyone make some suggestions? Thanks in advance!
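For anyone who wants to reproduce the measurement, here is a minimal timing sketch in T-SQL; the database and table names are placeholders, and ALTER INDEX (the SQL Server 2005 way to rebuild indexes) is assumed rather than the exact command the nightly job uses:

```sql
-- Minimal timing sketch (SQL Server 2005 syntax).
-- MyDatabase and dbo.MyBigTable are placeholders, not the real objects.
USE MyDatabase;

DECLARE @start DATETIME;
SET @start = GETDATE();

-- Rebuild all indexes on one table; a real nightly job would loop over tables.
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;

SELECT DATEDIFF(second, @start, GETDATE()) AS rebuild_seconds;
```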

9 Replies
vmroyale
Immortal

The only other thing I changed was the number of CPUs available to the server: instead of the 8 CPUs available on the physical box, I allocated 4 CPUs to the virtual machine. The result of all this work is that the reindex job now takes roughly double the time.

How about your storage controllers? How was the physical server set up in comparison to the virtual machine?

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
Khue
Enthusiast

The storage controllers did not change. Essentially there was a pointer change from the physical server to the virtual server, but other than that, the storage controller setup has been constant. The storage system is an IBM SVC, but for the sake of argument, just assume the performance is slightly better than a 12-disk RAID 5 array composed of 33 GB 10K SAS drives.

vmroyale
Immortal

How many SCSI controllers are in your virtual machine now? How many disks are attached to them?

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
Khue
Enthusiast

Oh, I think I see where you are going. There is one SCSI controller and three disks: one is a standard VMDK, and the other two are RDMs. Would there be a benefit to adding another? How would that even work with 2 RDM disks?

msemon1
Expert

I think vmroyale has the right answer: add a second SCSI controller for your RDM disks.

Mike

vmroyale
Immortal
(Accepted Solution)

Would there be a benefit to adding another? How would that even work with 2 RDM disks?

Yes. Check out Scott Drummonds' response in this discussion for an explanation of why this helps.
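For reference, a second virtual SCSI controller ultimately shows up as a few extra entries in the VM's .vmx file. The excerpt below is a hypothetical sketch: the controller type, device nodes, and mapping-file names are all illustrative, the comments are annotations rather than part of the file, and in practice you would add the controller and reassign each disk's virtual device node through the client UI instead of hand-editing:

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"            # second virtual SCSI controller

scsi1:0.present = "TRUE"
scsi1:0.fileName = "SQL_data_rdm.vmdk"   # RDM mapping file for the data LUN
scsi1:0.deviceType = "scsi-hardDisk"

scsi1:1.present = "TRUE"
scsi1:1.fileName = "SQL_log_rdm.vmdk"    # RDM mapping file for the log LUN
scsi1:1.deviceType = "scsi-hardDisk"
```

With the OS disk left on scsi0 and the two RDMs moved to scsi1, the database traffic no longer shares a single virtual controller queue with everything else in the VM.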

There is also some really great information on disk performance monitoring in the vSphere Basic System Administration Guide - check out p.278 to get started with investigating this issue.
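If you would rather watch the latencies live while the reindex runs, esxtop on the host is the quickest starting point. A brief sketch (interactive keys as of ESX 4; treat the details as a from-memory guide rather than authoritative):

```
# On the ESX host console (or resxtop from the vCLI/vMA):
esxtop
#   d - disk adapter view
#   u - disk device view
#   v - virtual machine storage view
# Watch DAVG/cmd (device/array latency), KAVG/cmd (vmkernel latency),
# GAVG/cmd (total latency seen by the guest), and QUED (queued commands).
```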

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
Khue
Enthusiast

Theoretically, this looks like it could be the solution I need. The RAID controller that was present in the hardware was a pretty hefty device by itself. I have to wait until off-production hours to do this. Thanks for the help.

Odurasler2
Enthusiast

Hello,

I was wondering if adding another controller helped your issue. I'm in the same boat, and this is one of the tasks I will perform to fix my performance issue.

Khue
Enthusiast

I have been doing more work on this as of late. I have a controlled environment set up in which I have been testing various configurations. Using CHECKDB on a sizable 60 GB database, I was able to improve run times. Here is what I did:

The server was configured with a total of 4 disks and was using MSSQL 2005:

Bus0

- OS partition, one VMDK

- Data partition (SQL engine and install), one VMDK

Bus1

- Database datafile, one RDM

- Database transaction log, one RDM

I ran CHECKDB several times and got an average run time of about 5 minutes 15 seconds. I then created another drive (read: a VMDK), attached it to a new bus, and moved TempDB onto it:

Bus0

- OS partition, one VMDK

- Data partition (SQL engine and install), one VMDK

Bus1

- Database datafile, one RDM

- Database transaction log, one RDM

Bus2

- TempDB datafile

- TempDB transaction log file

Under this slightly modified setup, Bus2 housed only a single drive (an RDM) holding both the transaction file and the datafile for TempDB. The average CHECKDB run went down to 4 minutes 43 seconds. The interesting thing is that this did not directly affect reindexes on this database; I am still trying to figure out why. If people are interested, I will post more on this later.
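For anyone following along, here is a rough sketch of the two SQL-side steps described above - relocating TempDB to the drive on the new bus and timing CHECKDB. The logical file names and the T: drive letter are placeholders (verify yours with sp_helpfile), and the TempDB move only takes effect after the SQL Server service restarts:

```sql
-- Hypothetical sketch: point TempDB's files at the drive on Bus2.
USE tempdb;
EXEC sp_helpfile;  -- confirm the logical names (commonly tempdev/templog)

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');
-- Restart the SQL Server service for the new locations to take effect.

-- Then time a consistency check against the test database:
DECLARE @start DATETIME;
SET @start = GETDATE();
DBCC CHECKDB ('MyDatabase');
SELECT DATEDIFF(second, @start, GETDATE()) AS checkdb_seconds;
```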
