That looks pretty good. Personally I would separate out the search DB to alleviate any IO contention (I'm not sure what you're using to connect, be it 1 Gb iSCSI or 10 Gb iSCSI). Right now in our VM/Nimble environment we're doing an entire remapping of the iSCSI layout, so it's a big chore to get it right. Since you won't run into any target limits by using a VMDK for another volume for the search, why not just make another volume? It's all direct-attached to the VM anyway, so it seems like it wouldn't hurt.
Just my thoughts on it.
Looks like this is more a SQL question than a SharePoint one.
I've got some SQL on Nimble Storage related articles on my blog, which I'll be adding to over time.
Without knowing your snapshot backup and replication requirements, your layout looks good. The only difference might be the block size on the transaction log volume. IO size for tlogs varies quite a lot, but writes can be very small (<2 KB), so a 4 KB block size is what I use.
The advice of splitting databases across drives is more of an old-school thing from when DBAs and storage admins used to create various RAID sets for specific database workloads (note they mention physical hard disks in your reference, which is not what we are dealing with on a Nimble Storage array). All incoming writes are channelled from network to memory, compressed, then written as a stripe across all spindles; reads are essentially the reverse. Locating IO on different volumes makes no difference to this process, so adding extra volumes is only going to consume valuable volumes on the array (there is a 255-volume-per-array limit) for no extra benefit (if anything, more overhead to maintain). So keep all your databases on the same pair of volumes. You can potentially push more IOPS by adding more data files to the database - SQL can then use some threads on one data file while other threads are busy on the other.
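For what it's worth, adding a data file is a one-liner in T-SQL. A minimal sketch, run here through the SqlServer PowerShell module - the instance name, database name, path and sizes are all made up for illustration:

```powershell
# Hypothetical example: add a second data file so SQL Server can spread
# allocations (and threads) across files. Requires the SqlServer module.
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
ALTER DATABASE [SearchDB]
ADD FILE (
    NAME = SearchDB_Data2,
    FILENAME = 'E:\SQLData\SearchDB_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP [PRIMARY];
"@
```

Keep the files the same size so SQL Server's proportional-fill algorithm balances IO between them.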
That's a good blog post. I'm really interested in your migration path from NetApp/NFS datastores. We did NOT migrate from NetApp ourselves. However, I can't seem to figure out what the big hurdle is to migrating off of NFS.
That's a good point on the "different RAID sets" note. We are only separating volumes if the "cache" option is different.
I've heard the point on "4KB for logs" before. Either way (4 KB or 8 KB) I think a user should be OK, as long as they don't cache it and they format the NTFS volume with 64 KB blocks.
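If it helps anyone, the 64 KB NTFS allocation unit can be set at format time with the built-in Storage cmdlets. A sketch only - the drive letter and label are assumptions, and Format-Volume is destructive, so double-check the target first:

```powershell
# Hypothetical drive letter T: - format the SQL volume with a 64 KB
# allocation unit size (65536 bytes), the usual recommendation for SQL Server.
Format-Volume -DriveLetter T -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQL_TLogs"
```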
If you could briefly share your thoughts on moving from NetApp, that would be awesome.
We migrated from NetApp, but with iSCSI-attached LUNs moving to Nimble iSCSI-attached volumes. There are probably a few different ways to do this, but I chose to wear an outage and perform a straight data copy, due to simplicity and being able to semi-automate the process (in total, about 2600 NetApp LUNs were migrated to 1300 Nimble volumes over 7 arrays in about 8 weeks).
Yes, totally agree - the important property of the TLog performance profile will be cache being disabled. 4 KB vs 8 KB block size probably matters very little.
We didn't migrate VMDKs, and these still exist on NetApp NFS datastores. I would love to see these move to Nimble as well, but I have no control in this area. There was great hesitation for the team to move from NetApp/NFS to Nimble/VMFS, and in the end they didn't even try it. I would have thought the benefit of increased performance and reliability (not to mention cost, space and power) would be incentive enough to suffer some short-term pain in migration and process change. Would love to see Nimble support NFS natively, as I think they would increase their market share.
Downtime? What downtime? I migrated all my VMs off Isilon, which was NFS-mounted datastores, over to Nimble. It took a while, but it wasn't that big of a deal, and the PowerShell script to do it was really straightforward.
$ds = Get-Datastore Nimble*  # mine are all called nimble01, 02, etc.
$vms = Get-VM  # can filter based on hard disk or whatever, depends on what needs to move
for ($i = 0; $i -lt $vms.Count; $i++) {
    # round-robin each VM across the Nimble datastores
    Move-VM -VM $vms[$i] -Datastore $ds[$i % $ds.Count] -DiskStorageFormat Thin
}
Let it rip and come back in a few days.
You could get a little more complex to monitor datastore usage and also move multiple VMs at a time, but for me and the VMs I had to move, this was fine. If you wanted to be a little safer about things, it wouldn't really be all that hard.
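One safer variant along those lines: pick the emptiest Nimble datastore for each VM and skip any target that would drop below a free-space floor. A sketch only - the 500 GB margin is an assumption, and it uses the standard PowerCLI `FreeSpaceGB`/`UsedSpaceGB` properties:

```powershell
# Sketch: place each VM on the Nimble datastore with the most free space,
# skipping any datastore that would fall below a safety margin.
$minFreeGB = 500   # assumed safety margin, tune for your environment
foreach ($vm in Get-VM) {
    $target = Get-Datastore Nimble* |
        Where-Object { $_.FreeSpaceGB - $vm.UsedSpaceGB -gt $minFreeGB } |
        Sort-Object FreeSpaceGB -Descending |
        Select-Object -First 1
    if ($target) {
        Move-VM -VM $vm -Datastore $target -DiskStorageFormat Thin
    }
}
```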
I'm so glad we are moving away from Isilon.
Let me find out the reasons they were not keen to move, but I think it was first the migration process, then the process change around provisioning and DR. I really think they were simply happy with how things currently work and did not want to move. The VM environment is HUGE: approx 65 clusters, 500 hosts and 5800 VMs, so I guess any task that size is daunting. But in the IT world nothing stands still, and it's either move with the times or get left behind. The pros of moving this amount of storage from NetApp to Nimble should be a no-brainer, especially when a datacenter migration is on the horizon and space and power are so constrained. It should be killing two birds with one stone, but I'm sure I don't have to convince anyone here.
Yeah, I only moved 100 or so VMs so it wasn't too big of a deal, but it also wasn't really hard - it just took a lot of time.
I REALLY hate Isilon, and NAS as a datastore location in general. Especially with, err... can't recall if it was 5.1 or not, but the Storage DRS and the fact that you get more insight into performance. Oh well.
My systems have been fine. There really shouldn't be any issues with doing that. Actually, I think what it proves is you need to get off the NAS storage.
It really depends on how the environment is set up, but if it's all set up correctly there should be little impact from doing that.
You could also easily modify the script so that it only works at night.
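A minimal way to sketch that, building on the same round-robin script - the 22:00-06:00 window is an assumption, adjust to your own quiet hours:

```powershell
# Sketch: only kick off a Move-VM between 22:00 and 06:00; otherwise sleep
# until the window opens, then continue where we left off.
$ds  = Get-Datastore Nimble*
$vms = Get-VM
for ($i = 0; $i -lt $vms.Count; $i++) {
    while ((Get-Date).Hour -ge 6 -and (Get-Date).Hour -lt 22) {
        Start-Sleep -Seconds 600   # outside the window - check again in 10 min
    }
    Move-VM -VM $vms[$i] -Datastore $ds[$i % $ds.Count] -DiskStorageFormat Thin
}
```

A move already in flight when 06:00 rolls around will run to completion; the check only gates starting the next one.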