One option is to attach the current NTFS partitions as RDMs to the newly created VM, then cold storage-migrate (offline) the RDMs to VMDKs on the Nimble array.
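For the cold-migration step, an RDM can also be cloned to a flat VMDK with `vmkfstools` while the VM is powered off, instead of a Storage vMotion. A sketch only; the datastore, VM, and file names here are placeholders, not anything from the original thread:

```shell
# Clone the RDM (via its mapping file) to a thin-provisioned VMDK on the
# Nimble datastore. All paths/names below are example values.
vmkfstools -i /vmfs/volumes/old-ds/sqlvm/data1-rdm.vmdk \
           -d thin \
           /vmfs/volumes/nimble-ds01/sqlvm/data1.vmdk
```

Afterward, detach the RDM from the VM and attach the newly created VMDK in its place before powering on.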
If you don't like RDMs, the alternative is to use a guest iSCSI initiator: install additional vNICs in the VM that will take over the volumes, add the ports to the iSCSI network in VMware, configure the new ports in the guest on your iSCSI subnet, and install NCM in the guest. The VM then traverses the VMware iSCSI network and establishes its own connection. The VM will need its own initiator group on the Nimble array, so mask the data LUN for that guest VM, not for ESX.
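On the Windows side, the guest connection can be brought up with the built-in iSCSI cmdlets (NCM then handles multipathing on top). A minimal sketch; the discovery portal address is an assumption for illustration:

```shell
# Enable and start the Microsoft iSCSI initiator service in the guest
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Point the guest at the Nimble discovery IP (example address only)
New-IscsiTargetPortal -TargetPortalAddress 10.10.50.10

# Log in to the discovered data volume(s), persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

Remember the array-side piece as well: the guest's IQN goes into its own initiator group, as described above.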
This is usually preferred over RDMs as it has fewer restrictions and makes the data volume more independent of the VM, which adds some flexibility in data recovery. The one caveat is that SRM will not work with guest-initiator-mounted volumes.
I did this same thing about 2 months ago; send me a message if you'd like further details. My scenario was 2x 2 TB and 1x 1 TB LUNs connected to a physical Windows server running a SQL database. I actually did a P2V of the OS partition a week ahead of time to clean up all the hardware drivers, devices, and services, disable iSCSI, set up networking, etc. At migration time I connected the LUNs via RDM, then did a Storage vMotion as outlined in the link provided by Moshe Blumberg. Verify your RDM SCSI IDs before booting. If drives don't show up in Windows, check Disk Management; if they're not there, check your event log for failed service dependencies.
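If the disks do appear in Disk Management but come up offline (common after a P2V or RDM move because of the Windows SAN policy), they can be brought online from PowerShell. A sketch; the disk number is an example:

```shell
# List any disks that came over offline after the migration
Get-Disk | Where-Object IsOffline

# Bring a disk online and clear its read-only flag (repeat per disk number)
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false
```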
Some things to check, depending on your array config (not sure how AF is affected):
- Block size. A block size other than the default 4k could be optimal. You can ask support for a "block size analysis" on your affected volumes before you move them, and create a performance policy accordingly. I didn't think to check that ahead of time and ended up with a 4x increase in the duration of our SQL backup jobs, though normal use was unaffected. With help from support, we increased the cache in our expansion shelf and performance has been good since.
- Caching. To minimize the duration of cache re-population, I contacted support and we created an "aggressive cache" performance policy. I temporarily assigned that policy until cache hits were at the expected rate, then switched back to the long-term performance policy (with the same block size, of course). The aggressive-cache policy could also just be removed afterward; your preference.
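If you'd rather script the temporary policy swap than click through the GUI, the Nimble OS CLI can do it. The flags and policy/volume names below are from memory and purely illustrative, so verify against `vol --help` and `perfpolicy --help` on your array first:

```shell
# List available performance policies (names below are examples)
perfpolicy --list

# Temporarily assign the aggressive-cache policy to the data volume
vol --edit sql-data1 --perfpolicy "SQL aggressive cache"

# ...later, once cache hit rates recover, switch back to the long-term
# policy with the same block size
vol --edit sql-data1 --perfpolicy "SQL 8k"
```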