Hello all!
This is my setup:
HP ProLiant ML350 G5
The server is on the HCL for vSphere 5.
My problem:
In vSphere Client, the 250GB hard disk connected to the E200i controller is recognized correctly, and I can create a VMFS5 datastore on it without any problems. However, the 4TB array only shows up as a 512-byte device, and I can't create a datastore on it.
I also deleted the 4TB RAID0 array and created two 2TB JBOD arrays (one per disk). Now the first one shows up as a 1.82TB disk, but the second one as 0.00 bytes.
Just to add that the setup worked fine with MS Hyper-V Server 2008 R2, which had no problems recognizing the 4TB RAID0 array out of the box.
So why is that? I thought vSphere 5 was finally supposed to be able to handle LUNs >2TB, so shouldn't I be able to see the full capacity of both disks in vSphere Client and to create a datastore on them? And what can I do to rectify the situation?
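If I had to guess, the weird reported sizes smell like a 32-bit sector-count wraparound somewhere in the driver stack. This is purely speculative and the numbers are illustrative, but a quick sketch shows how a counter like that would produce a 0.00-byte or 512-byte device:

```python
SECTOR = 512          # bytes per logical sector
MAX_SECTORS = 2**32   # a 32-bit sector counter wraps at this many sectors

def reported_capacity(real_bytes):
    """Capacity as a driver with a 32-bit sector count might report it."""
    sectors = real_bytes // SECTOR
    return (sectors % MAX_SECTORS) * SECTOR

TiB = 2**40
print(reported_capacity(2 * TiB))        # exactly 2TiB wraps to 0 bytes
print(reported_capacity(2 * TiB + 512))  # one sector over 2TiB reports 512 bytes
```

Again, just a guess about the mechanism, not a confirmed diagnosis of this particular driver.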
BTW: does anyone know when the HP and Dell editions of vSphere 5 will be available for download?
Thanks,
Ben
Hi, did you ever find a solution? Maybe it was version 5.0?
I have an HP DL385p with a similar array controller (a P420i, vs. the P400 above) and 12 drives of 3TB each in RAID6, running VMware ESXi 5.1,
and I can see all 27TB in VMware and on the controller.
But after I created datastore1 with 1.9TB, I don't know what I did, but now I cannot add more datastores.
I can increase my current datastore (up to the full 27TB), but I cannot add a new one, since nothing shows up under "Disk/LUN".
Any ideas?
thanks
I think I know exactly what happened to you.
Here is a diagram:
P400 (P420i) with 12 drives of 3TB --- single RAID6 virtual drive of 30TB --- shown to --> VMware ESXi
Meaning, you have all 12 drives grouped ((12 - 2) x 3TB = 30TB) in the HW controller as a single virtual drive of 30TB.
Of course, VMware can then only see a single LUN (LUN = virtual drive from the controller). This is why you can
resize it, but you cannot add another LUN/drive, since there are none.
There are many ways to configure a HW RAID controller. Say you want to have 2 LUNs/drives in VMware:
you would create 2 virtual drives in the RAID setup.
For performance reasons, you want to stay with a SINGLE virtual drive.
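To put numbers on the usable space (assuming RAID6's usual two-drives-of-parity overhead and the marketing 3TB per disk), a quick sketch:

```python
def raid6_usable_tb(n_drives, tb_per_drive):
    """RAID6 spends two drives' worth of capacity on parity,
    so usable capacity = (n - 2) * drive size."""
    if n_drives < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (n_drives - 2) * tb_per_drive

print(raid6_usable_tb(12, 3))  # 30 (TB) usable from 12 x 3TB in one group
```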
Thank you!
So... I have two options:
Option (A)
1) I stay as is, with one single 30TB RAID6 logical drive, for performance reasons. (Why does this have better performance?)
2) Can I "increase" my "datastore1" up to the maximum space available, which is about 27TB, with no problem from the 2TB limitation?
3) Create/add new hard drives for my virtual machines, but less than 2TB each, because of the VMware limitation.
Option (B) (this is just to know)
If I want to create a second virtual drive, so that 2 LUNs are available in VMware, I should:
1) Reboot my server, go into the HP RAID config, and DELETE my current RAID6 (which will make me lose all my data and virtual machines).
2) Select the drives, say in two groups of 6 drives, and create two RAID6 arrays and then two logical drives: (6 - 2) x 3TB = 12TB per logical drive, x 2 = 24TB;
which is less than the 30TB of option (A) (because I am losing 2 drives in each RAID group).
Right?
Just to add:
This server will record video from many cameras.
I know many people recommend RAID10 or RAID5 (for high write speed), but I have read many recommendations that RAID6 is better for video recording, since it is very common to lose a 2nd drive during recovery of the 1st drive.
I really appreciate your time and the help on this!!
regards,
Andres
aagredo wrote:
This server will record video from many cameras.
I know many people recommend RAID10 or RAID5 (for high write speed), but I have read many recommendations that RAID6 is better for video recording, since it is very common to lose a 2nd drive during recovery of the 1st drive.
I really appreciate your time and the help on this!!
regards,
Andres
Since you are using 3TB drives, you are using the disks most prone to failure, and I would tend to agree.
Assuming you have the battery- or flash-backed write cache, your hardware will overcome the majority of the performance difference.
Stock RAID10 performs significantly slower than RAID6 with an FBWC on this hardware. Yet you can guarantee someone will chime in and tell you what trouble you're in for.
1) I stay as is, with one single 30TB RAID6 logical drive, for performance reasons. (Why does this have better performance?)
Because the HW RAID controller's scheduling is simplest when the grouping is flat (a single group).
2) Can I "increase" my "datastore1" up to the maximum space available, which is about 27TB, with no problem from the 2TB limitation?
Yes, ESXi 5.0 can already go beyond the 2TB limit now.
3) Create/add new hard drives for my virtual machines, but less than 2TB each, because of the VMware limitation.
Each virtual drive will show up as a LUN in ESXi 5.0/5.1.
Option (B) (this is just to know)
If I want to create a second virtual drive, so that 2 LUNs are available in VMware, I should:
1) Reboot my server, go into the HP RAID config, and DELETE my current RAID6 (which will make me lose all my data and virtual machines).
Correct.
2) Select the drives, say in two groups of 6 drives, and create two RAID6 arrays and then two logical drives: (6 - 2) x 3TB = 12TB per logical drive, x 2 = 24TB;
which is less than the 30TB of option (A) (because I am losing 2 drives in each RAID group).
Correct.
3) Make sure you upgrade the firmware for both the machine and the controller.
RAID6 is fine. Having 2 smaller RAID6 groups defeats the original purpose of maximizing usable space, without increasing safety.
Please don't be offended. Just my two cents.
Jimmy
The P420i is not the same as the P400. The P400's ESXi driver has this known bug, with no solution to date.
I just ran into this issue over the weekend. I extended a 2TB array to 3TB and ESXi 5.1 no longer sees the array. The array is connected to a P400 controller on a DL380 G5.
I purchased a P410 on eBay and two of these cables from monoprice.com:For only $9.30 each when QTY 50+ purchased - 1m 28AWG Internal Mini SAS 36pin (SFF-8087) Male to SAS...
As soon as I get the parts in I'll let you guys know if a swap from a P400 to a P410 saves the day.
ESXi 5.0 has a 2TB limit (2TB - 512 bytes) per LUN.
ESXi 5.x with VMFS5 does not have this limitation: you can create up to a 64TB datastore. At the same time, ESXi 5.x with VMFS3 does still have this limitation.
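Assuming the commonly cited figures (VMFS3: 2TB minus 512 bytes per LUN; VMFS5: 64TB datastore), the raw byte counts look like this:

```python
TiB = 2**40  # "TB" in these limits is binary (tebibytes)

vmfs3_max_lun = 2 * TiB - 512   # the "2TB minus 512 bytes" VMFS3 ceiling
vmfs5_max_datastore = 64 * TiB  # VMFS5 datastore ceiling

print(vmfs3_max_lun)        # 2199023255040
print(vmfs5_max_datastore)  # 70368744177664
```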
Now I know why the original poster of this thread is so pissed. READ PAGES 1-4 before commenting.
This has NOTHING TO DO with a 2TB limit and has EVERYTHING to do with a DRIVER limitation. I extended my 2TB array to 3TB, and ESXi 5.1 does not see the array because I am using a DRIVER that does not support 2TB+ arrays on a P400 controller. The hpsa driver supports 2TB+ arrays but is not compatible with the P400 controller. The P410 controller is compatible with the hpsa driver, hence the need to upgrade/replace the controller.
NOTHING to do with a 2TB limit in ESXi, everything to do with a non-compatible driver.
For everybody else who has read the entire thread like I originally did: I'm waiting for my P410 controller to arrive, and I'll post back on whether the swap "magically" restores my currently inaccessible virtual machines.
Hi all,
Here is a KB article that talks about this issue. Yes, it is a driver issue:
http://kb.vmware.com/kb/2006942
Regards
Mohammed Emaad
*** Update ***
As promised, here is a working update:
1. Installed the HP P410 adapter in my DL380 G5 (officially the P410 is not supported in the G5, BUT it works without issue)
2. For sh*ts and grins, I updated the firmware of the controller before connecting or doing anything else
3. In order to adapt the P410 connector to the DL380 G5 backplane, I ordered two of these from Monoprice: http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025408&p_id=8191&seq=1&for...
4. Booted into the P410 controller setup (F8 after the iLO F8 prompt) to ensure my array was recognized
Caveats
ESXi 5.1 recognized that the array was 3TB in the host's storage configuration properties; however, it would not let me extend the VMFS past the original 1.8TB size.
Since I was at least able to get into the array, I migrated the virtual machines to a second ESXi box (using vCenter). This was a test machine in a lab, so I hadn't migrated the VMs before initially modifying/extending the array; if I had lost everything, it wouldn't have been a big deal. Now that I was able to get into the array using the P410 controller, I went ahead and moved the virtual machines to the second box.
Once everything was migrated, I re-created the array: 2.7TB of usable space.
Hi John,
Thank you for your post. Since you've tested this solution, I will now go ahead and upgrade my HP P400 RAID controller to a P410 so that I can have one single 4TB partition.
A question for you: I currently have a P400 with RAID10 across 4 x 2TB 7200rpm WD Enterprise RE4 HDs, on a ProLiant G5 with 2 x quad-core Xeon CPUs and 27GB RAM. However, when I do a speed test with HD Tune, I can't get speeds above 60MB/s...
What's your setup? What kind of RAID, what kind of HDs? Could you do a speed test, maybe with HD Tune, and let me know what you're getting?
I'm wondering if by upgrading to the P410 controller I will see higher transfer speeds than with the P400.
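For a rough sanity check (per-drive throughput is my assumption here, not a measured RE4 figure), sequential throughput on a 4-drive RAID10 should land well above 60MB/s:

```python
def raid10_estimate(n_drives, per_drive_mbs):
    """Very rough sequential estimate for RAID10: reads can stripe across
    all drives; writes effectively use half, since each block goes to
    two mirrors. Ignores controller cache and real-world overheads."""
    return {"read": n_drives * per_drive_mbs,
            "write": (n_drives // 2) * per_drive_mbs}

# Assumed ~100MB/s sequential per 7200rpm SATA drive (illustrative only)
print(raid10_estimate(4, 100))  # {'read': 400, 'write': 200}
```

So even a pessimistic estimate suggests the 60MB/s figure points at the controller or driver, not the disks.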
Thanks in advance!
You might want to start a new thread so we don't hijack this guy's thread, but in the meantime...
8 Western Digital Scorpio Blue 1TB 5400 RPM SATA drives
RAID 1+0 - 6 drives in the RAID, 2 hot spares
P410 (and the previous P400) both had a cache battery (lack of a battery can make a difference)
For this particular server I needed storage space, not speed (a SAN or NAS is not possible at this site). My other box at this site has 15,000 RPM SAS. I can run HD Tune, but I'm sure I'm not going to get great results. What is the virtual hard drive configuration on your host running HD Tune? Thin or thick?
I'm far from a VMware expert but I'll help where I can.
In terms of thick/thin configuration, is that a setting in VMware? If so, then I used the thick one, the one that allocates the total amount of space to the hard drive up front (not the growable one that takes up less space, which may be slower). When I was diagnosing this issue about 1 year ago on my server, I believe I tried all possible configurations, but I wasn't able to get speeds above 60-70MB/s. I think the issue is either the ESXi 5.0 P400 drivers or the P400 itself. I believe these 2TB WD RE4 hard drives with 64MB cache should be performing better than that, especially in a RAID10 setup!
I may get the P410 controller so I'll have a single 4TB volume instead of the 2 x 2TB volumes that I had to create in ESXi in order to use the space... I wonder if ESXi seeing it as 2 datastores of 2TB each is part of why it's slower... who knows.
Thanks
Yeah, the RE4s should be doing better than that. Probably a controller issue, but I'm guessing.
Since they are RE4s, I don't think they're prone to the TLER foolery found in non-enterprise drives. In that case, have you tried RAID6?
Thick pre-allocates the space. Thin expands dynamically.
When you get your P410, make sure you get a cache battery and memory chip. I paid around $120 on eBay last week. I got mine from this guy: HP P410 SAS RAID Controller 256MB 462919 001 013233 001 w Cables Battery | eBay
Yes, I will definitely get a battery backup with it.
The controller you've got seems to be used? I'm not sure I want to get a used one... maybe something could be wrong with it, or it may only have half of its life left? What's your opinion on that?
With regard to the memory chip, would I get better performance with a 1GB cache chip on the P410? Like this one: http://www.ebay.com/itm/HP-P410-1GB-Flash-Back-Cache-Controller-572532-B21-/161072841831?pt=US_Serve...
I think I already have the 2 cables that are currently connected to my P400, so I probably don't need the cables.
Thanks
The cache size does make a difference. I'm going to get a 1GB module soon.
Somebody else has a similar problem to yours: hardware - Incredible low disk performance on HP DL385 G7 - Server Fault
As far as controller lifespan goes, I've been recycling old hardware for as long as I can remember (in non-production environments). I don't think there would be a problem with a used controller. Batteries might go bad, but you can refurbish them on your own. If this is production, I would get all-new components.
Regarding your cables, the P400 cables won't work on the P410 controller, and neither will the P400 battery. It's a shame the P400 is a dime a dozen; it would be nice to recoup some $$$.
I managed to run the P400 on ESXi 5.1 using the hpsa driver by slightly modifying the cciss and hpsa drivers and building .vibs from them. A rough translation of my article (the original is in Russian), with a link to a .zip containing the two .vibs that will make your P400 recognize >2TB on ESXi 5.1, is here:
It will support the P400i too; both PCI IDs are remapped. Use at your own risk; no detailed testing was conducted.