Prior to removing the volume from the volume collection, you can set the number of snapshots to retain to 1; this will clear all the snapshots for that schedule (bar one). However, bear in mind that if you have any other volumes in the volume collection, it will also affect those.
Currently there isn't another method: the snapshots will be orphaned when you change the volume collection (protection group), and you have to delete them manually.
Is there no other solution to this? I have a volume that now needs to be in its own volume collection with specific protection settings. It currently is in a VC with a few other volumes. Are you saying that I need to remove all but one snapshot of all these volumes to make this happen? Is there any plan to make this better in the future?
I think this is by design...
When you snapshot a volume collection, the volcol snapshot is just a reference to the snapshots of each of the volumes contained in the volcol. The real snapshot is tied to the child volumes.
When you remove the volume from the volcol, the volume retains all its snaps, which is a good thing (they are your backups after all). The volume snaps, however, become orphaned from the volcol snaps, so the volcol snaps need to be cleaned up separately. If you return the volume to the volcol, they are no longer orphaned.
Similarly, if you delete snaps from the volume, the references to those snaps still exist in the volcol, so again they need separate cleanup.
When you delete the snapshot from the volcol, however, all the related volume snapshots will get deleted too.
If you are moving a volume out of a volcol and don't want its snaps, you need to delete all the snaps from the volcol first. This will, however, delete the snaps from all the other child volumes contained in the volcol.
If you need to retain the other volume snaps, the only option I can think of is to remove the volume from the volcol and then delete all the volume snaps, which might be what you are doing....
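The reference semantics described above can be sketched as a toy model. To be clear, this is purely an illustration of the behavior, not Nimble's actual implementation; every class and field name here is invented:

```python
# Toy model of how a volcol snapshot references per-volume snapshots.
# Illustrative only: all names are invented, not Nimble's internals.

class VolumeSnapshot:
    def __init__(self, volume, name):
        self.volume = volume  # name of the child volume
        self.name = name      # snapshot name shared with the volcol snap

class VolcolSnapshot:
    """A volcol snapshot is just a set of references to per-volume snaps."""
    def __init__(self, name, volume_snaps):
        self.name = name
        self.refs = list(volume_snaps)

class VolumeCollection:
    def __init__(self, volumes):
        self.volumes = set(volumes)
        self.snapshots = []  # volcol-level snapshot collections

    def snapshot(self, name):
        # The "real" snapshots belong to the child volumes.
        snaps = [VolumeSnapshot(v, name) for v in self.volumes]
        self.snapshots.append(VolcolSnapshot(name, snaps))
        return snaps

    def remove_volume(self, volume):
        # The volume keeps its snaps; the volcol-side refs become orphaned.
        self.volumes.discard(volume)

    def orphaned_refs(self):
        return [r for s in self.snapshots for r in s.refs
                if r.volume not in self.volumes]

vc = VolumeCollection(["vol-a", "vol-b"])
vc.snapshot("daily-001")
vc.remove_volume("vol-a")
print([r.volume for r in vc.orphaned_refs()])  # -> ['vol-a']
```

Returning the volume to the collection (re-adding "vol-a" to `vc.volumes`) makes `orphaned_refs()` empty again, which mirrors the behavior described above.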
Forgive me for posting in an older thread, but I came across this discussion when looking for a solution to the same problem. Since one didn't exist, I wrote a way to find and (optionally) remove orphaned snapshots using PowerShell and Nimble Storage's RESTful APIs. The script can be found here: Use PowerShell and Nimble's RESTful APIs to remove orphaned snapshots.
@Paul Great work! Any human "busy work" saved is a win in my book. I do want to mention a couple of gotchas regarding unmanaged snapshots on Nimble Storage arrays.
Currently, an unmanaged snapshot is defined as any snapshot not managed by an active volume collection schedule.
This means that snapshots created manually at the volume level, snapshot collections created manually, and third-party application snapshots/collections are all considered "unmanaged" by the system.
Only schedule-generated snapshots whose schedule no longer exists are the ones that become unmanaged due to a configuration error or change.
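The definition above boils down to a simple predicate: a snapshot is unmanaged when no active volume collection schedule claims it. A minimal sketch in Python (the `sched_name` field and the sample values are illustrative assumptions, not the array's actual attribute names):

```python
# Classify snapshots as managed/unmanaged per the definition above.
# The "sched_name" field name is an illustrative assumption.

def is_unmanaged(snapshot, active_schedules):
    """Unmanaged = not claimed by any active volcol schedule."""
    return snapshot.get("sched_name") not in active_schedules

active = {"hourly", "daily"}  # schedules currently configured on the volcol
snaps = [
    {"name": "vol1-hourly-001", "sched_name": "hourly"},  # schedule-generated
    {"name": "vol1-manual-001", "sched_name": None},      # manual snapshot
    {"name": "vol1-old-001",    "sched_name": "weekly"},  # schedule was deleted
]
print([s["name"] for s in snaps if is_unmanaged(s, active)])
# -> ['vol1-manual-001', 'vol1-old-001']
```

Note that, per the definition, the manual snapshot and the one from the deleted schedule are both flagged, even though only the latter reflects a configuration change.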
An interesting case is where a volume collection or a schedule is deleted, but a new one with the same name is created. In this case the snapshots are in fact unmanaged, but many scripts that compare by name cannot detect them.
The more frequent and higher-impact scenario, however, is where a schedule name or a volume collection name was changed without deletion/recreation. In that scenario, the snapshots bearing the old name are actually still managed by the schedule and will follow its retention policy. Name comparison, however, will report those snapshots as unmanaged, which could lead to unexpected deletion of the snapshots and loss of restore-point data.
Another very important point to note is that it is not possible to delete a snapshot which was left in the "online" state. A script should therefore account for this scenario and not attempt to delete an "online" snapshot.
I hope this information clarifies the potential impacts of using such scripts. However, as you mentioned, the script is marked "use at your own risk".