Has anyone set up Exchange 2013 on Nimble yet?
Is it supported?
Our Exchange 2013 environment serves about 160 mailboxes. We are running a single Windows 2012/Exchange 2013 server (all roles) as a guest on an ESXi 5.1 cluster. The OS volume sits on the datastore with the Exchange 2013 data and log volumes mapped from the VM directly to volumes on our Nimble. There are 3 x 1GB paths with MPIO for the connections within the VM to each of the volumes on Nimble. When we created the volumes on the Nimble we used the Exchange 2010 data store performance profile for the data volume and the Exchange Log profile for the log volume. I haven't had any issues with performance and it's been up for about 2 months. Our 220CS is on the latest 126.96.36.199-45626-opt and the WIT is version 188.8.131.52. We have the data and log volumes in a protection group and snapshots run no problem.
We're just looking at setting up 2013, but have not looked into it yet.
If anyone has any input that would be greatly appreciated.
So when you say 'Exchange 2013 data and log volumes mapped from the VM directly to volumes on our Nimble', are you running Microsoft iSCSI connections or doing RDM mappings through vSphere? I know Nimble suggests direct MS iSCSI connections with MPIO.
If you're doing MS iSCSI, do you have separate NICs from the vSphere NICs for the iSCSI traffic? Or do you just set up a VM switch port group and a VMkernel port group on the virtual switches?
Sorry I'm totally new to Exchange setups.
In each ESXi host I have a pair of dual-port 1GB Intel NICs for iSCSI traffic - a total of 4 connections per server. I have them configured per the Nimble Best Practice guide for VMware, where each one is a separate vSwitch with a VMkernel port bound to iSCSI. VMware uses them for datastore connections to the Nimble. I can then add 3-4 NICs to the VM (assigned to the corresponding iSCSI vSwitches) and inside the guest use Microsoft MPIO to create multiple connections to the volumes. One limitation we ran into with 4x1GB on the server and 4x1GB on the Nimble is that VMware automatically creates 16 connections per volume - limiting us to 64 datastores per ESXi host because of ESXi's 1024 iSCSI connection limit. We are at a point where I either have to start manually managing the connections from each server, disable one GB interface, or upgrade to 10G. I'm shooting for the 10G upgrade next year.
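The connection math above can be sketched as a quick back-of-the-envelope calculation (assuming, as described, that ESXi opens one iSCSI session per host-NIC/array-interface pair):

```python
# Rough iSCSI session math for the setup described above.
# Assumption: one session per (host NIC, array interface) pair.
host_nics = 4              # 4 x 1GB iSCSI VMkernel ports per ESXi host
array_interfaces = 4       # 4 x 1GB data interfaces on the Nimble
esxi_session_limit = 1024  # ESXi's per-host iSCSI session limit

sessions_per_volume = host_nics * array_interfaces        # 4 * 4 = 16
max_datastores = esxi_session_limit // sessions_per_volume  # 1024 // 16 = 64

print(sessions_per_volume, max_datastores)  # 16 64

# Disabling one host interface drops it to 12 sessions per volume,
# raising the ceiling to 85 datastores per host.
print(esxi_session_limit // ((host_nics - 1) * array_interfaces))  # 85
```

This is why dropping an interface or consolidating paths onto 10G raises the datastore ceiling: fewer paths per side means fewer sessions per volume against the same 1024-session budget.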
Randy, would you be able to take a screenshot of one of your iSCSI switch setups in vSphere? I'm pretty sure I know how you have it set up, but I just want to make sure before I make the changes on our unit.
So you would be adding 4 separate NICs to the virtual machine just for iSCSI traffic, right? Plus your NICs for normal LAN traffic.
I don't know how well this is going to show, but I grabbed my network adapters, then two vSwitches (1 and 2), and finally the details of vSwitch1. Our storage network is a physically separate pair of Dell PowerConnect 5400 series switches, so we aren't running any VLANs on the storage side.
Thanks... This is the exact same setup that I was going to use. Just wanted to make sure I wasn't doing something wrong. We even have the same Dell switches.
Similar setup, but we run Exchange (although 2010) with VMDK files instead of iSCSI volumes in the Guest. I'm more of a fan of the VMware iSCSI stack than the Windows iSCSI stack, but that's just me :-)
Make sure to put the OS on a datastore which has the vSphere 5 performance policy set. Exchange data and logs go into separate datastore(s) with the Exchange 2010 policy set.