
In my past couple of blog posts, I covered leveraging the Nimble Windows Toolkit version 4 (NWT4) to quickly create clones of SQL Server databases, using new functionality like the host-level cloning automation provided by the included PowerShell cmdlets and the ability to query application metadata stored as part of VSS-enabled snapshots. But these enhancements don't just benefit SQL Server; I can use the same workflows to clone Exchange databases as well. For example, I can use Exchange database clones to quickly create a Recovery Database, allowing me to accelerate individual restore requests that fall outside the configured deleted items retention period and improving my SLAs for data recovery. By using NWT4, I can initiate these clones from the appropriate Exchange host, without involving storage administrators in the restore process. Let's take a closer look at this type of workflow.

 

Exchange Mailbox Recovery with Nimble Clones:

 

This workflow is broken down into four parts:

 

(1) Identify the snapshot collection that will be used to create the database clone;

(2) Clone and mount database volumes to the Exchange server;

(3) Create the Recovery Database and execute the appropriate item or mailbox level restore;

(4) Clean up.

 

Identifying the Snapshot Collection used for the Exchange Database clone:

 

With NWT4, I can easily identify the appropriate snapshot collection to use for my Exchange database clone by using the VSS application metadata stored with the snapshot as the basis of my query. To display a list of available snapshot collections, I can use the "-MaxObjects" parameter. A value of one returns the latest snapshot collection; a value greater than one returns that many collections, newest first. I will be leveraging the "-AppObject" parameter to identify the database for which I want snapshot collection information. Once I know the snapshot collection I would like to clone from, I can use that information with the "Invoke-CloneNimVolumeCollection" cmdlet, which will automatically create volume clones and mount them to the Exchange server. The workflow will be executed from the Exchange server that holds the active mailbox database copy that I need to clone.

 

Identifying the Appropriate Snapshot Collection

# Use the Get-NimSnapshotCollection cmdlet to get the latest snapshots for my Exchange database. I will choose the most recent snapshot collection before the data loss. #

$> Get-NimSnapshotCollection -AppObject "test_db_nwt4" -MaxObjects 5

 

GroupMgmtIP : 192.168.35.26

Name : master-exch-db-hourly-2017-05-16::10:00:22.054

VolumeCollectionName : master-exch-db

CreationTime : 5/16/2017 10:00:22 AM

Snapshots : {master-exch-db-hourly-2017-05-16::10:00:22.054, master-exch-db-hourly-2017-05-16::10:00:22.054}

 

GroupMgmtIP : 192.168.35.26

Name : master-exch-db-hourly-2017-05-16::09:00:21.678

VolumeCollectionName : master-exch-db

CreationTime : 5/16/2017 9:00:21 AM

Snapshots : {master-exch-db-hourly-2017-05-16::09:00:21.678, master-exch-db-hourly-2017-05-16::09:00:21.678}

 

GroupMgmtIP : 192.168.35.26

Name : master-exch-db-hourly-2017-05-16::08:00:21.567

VolumeCollectionName : master-exch-db

CreationTime : 5/16/2017 8:00:21 AM

Snapshots : {master-exch-db-hourly-2017-05-16::08:00:21.567, master-exch-db-hourly-2017-05-16::08:00:21.567}

 

GroupMgmtIP : 192.168.35.26

Name : master-exch-db-hourly-2017-05-15::20:00:21.562

VolumeCollectionName : master-exch-db

CreationTime : 5/15/2017 8:00:21 PM

Snapshots : {master-exch-db-hourly-2017-05-15::20:00:21.562, master-exch-db-hourly-2017-05-15::20:00:21.562}

 

GroupMgmtIP : 192.168.35.26

Name : initial

VolumeCollectionName : master-exch-db

CreationTime : 5/15/2017 7:59:17 PM

Snapshots : {initial, initial}

 

Cloning and Mounting the Exchange Database Volumes:

 

After identifying the most recent snapshot collection before the point of data loss, I can move forward with creating clones of the database volumes. I will use the name of the snapshot collection ("-SnapshotCollectionName"), as well as the volume collection ("-VolumeCollectionName"), with the "Invoke-CloneNimVolumeCollection" cmdlet. The values for both parameters are listed in the output of the "Get-NimSnapshotCollection" command shown above. I will also connect the volumes, provide a custom suffix for my volume clones ("-Suffix"), and assign access paths ("-AccessPath") as part of the same command.

 

Cloning and Mounting Exchange Database Volumes

# Creating volume clones and mounting clones to Exchange server #

$> Invoke-CloneNimVolumeCollection -SnapshotCollectionName "master-exch-db-hourly-2017-05-16::08:00:21.567" -VolumeCollectionName "master-exch-db" -Suffix "-dbrecovery" -AccessPath "c:\Recoverydb\db","C:\Recoverydb\tl"

 

DeviceName : \\.\physicaldrive3

SerialNumber : 44867277635d0d396c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName : master-exch-db1-dbrecovery

DiskSize : 52428127.5

BusType : iScsi

WindowsVolumes : {C:\Recoverydb\db\}

FCTargetMappings : {}

Clone : True

Snapshot : False

BaseSnapshotName : master-exch-db-hourly-2017-05-16::08:00:21.567

ParentVolumeName : master-exch-db1

 

DeviceName : \\.\physicaldrive4

SerialNumber : fb94d7821412a0066c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName : master-exch-tl1-dbrecovery

DiskSize : 20964825

BusType : iScsi

WindowsVolumes : {C:\Recoverydb\tl\}

FCTargetMappings : {}

Clone : True

Snapshot : False

BaseSnapshotName : master-exch-db-hourly-2017-05-16::08:00:21.567

ParentVolumeName : master-exch-tl1

 

Creating the Recovery Database and Executing the Restore:

 

Now that my database volumes have been cloned and mounted to the Exchange server, I can go ahead and create my Recovery Database. To create the Recovery Database, I will need to first create the configuration for the database using the "New-MailboxDatabase" cmdlet, using the files in my cloned volumes. I will then ensure the database is in a "Clean Shutdown" state before mounting it to the Exchange server. After the database is successfully mounted, I will execute my restore request. The following commands were run from the Exchange Management Shell.

 

Creating the Recovery Database

# Adding the Recovery Database to the configuration. Using custom locations for database and log files. The following set of commands are run from the Exchange Management Shell #

$> New-MailboxDatabase -Server master -Name RecoveryDB -Recovery -EdbFilePath "C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb" -LogFolderPath "C:\Recoverydb\tl\test_db_nwt4\"
WARNING: Recovery database 'RecoveryDB' was created using existing file C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb.
The database must be brought into a clean shutdown state before it can be mounted.

 

Name                           Server          Recovery        ReplicationType
----                           ------          --------        ---------------
RecoveryDB                     MASTER            True               None

WARNING: Please restart the Microsoft Exchange Information Store service on server MASTER after adding new mailbox databases.

 

# Checking the status of the database #

$> eseutil.exe /mh "C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 15.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb

.....

DB Signature: Create time:05/15/2017 23:56:11.394 Rand:3807340063 Computer:

         cbDbPage: 32768

           dbtime: 3043 (0xbe3)

            State: Dirty Shutdown

     Log Required: 4-55 (0x4-0x37)

    Log Committed: 0-56 (0x0-0x38)

   Log Recovering: 0 (0x0)

   Log Consistent: 4 (0x4)

......

Operation completed successfully in 0.265 seconds.

 

# Performing a soft recovery #

$> eseutil /r E00 /l "C:\Recoverydb\tl\test_db_nwt4\" /s "C:\Recoverydb\tl\test_db_nwt4\" /d "C:\Recoverydb\db\test_db_nwt4\"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 15.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating RECOVERY mode...
    Logfile base name: E00
            Log files: C:\Recoverydb\tl\test_db_nwt4\
         System files: C:\Recoverydb\tl\test_db_nwt4\
   Database Directory: C:\Recoverydb\db\test_db_nwt4\

Performing soft recovery...
                      Restore Status (% complete)

          0    10   20   30   40   50   60   70   80   90  100
          |----|----|----|----|----|----|----|----|----|----|
          ...................................................

 

Operation completed successfully in 1.484 seconds.

 

# Checking status of database after recovery #

$> eseutil.exe /mh "C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb"

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 15.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb

.....

DB Signature: Create time:05/15/2017 23:56:11.394 Rand:3807340063 Computer:

         cbDbPage: 32768

           dbtime: 45109 (0xb035)

            State: Clean Shutdown

     Log Required: 0-0 (0x0-0x0)

    Log Committed: 0-0 (0x0-0x0)

   Log Recovering: 0 (0x0)

   Log Consistent: 0 (0x0)

.....

Operation completed successfully in 0.265 seconds.

 

# Mounting recovery database #

$> Mount-Database -Identity "RecoveryDB"

 

# Execute restore operation. In this case, I will restore the mailbox for user "User, Test." #

$> New-MailboxRestoreRequest -SourceDatabase "RecoveryDB" -SourceStoreMailbox "User, Test" -Name TestUserRestore -TargetMailbox "User, Test" -AllowLegacyDNMismatch

Name            TargetMailbox                                     Status
----            -------------                                     ------
TestUserRestore rtpdemo.rtplab.nimblestorage.com/Users/User, Test Queued

 

Post-Restore Clean Up:

 

Once the mailbox restore request completes, the only steps left are to remove the Recovery Database, disconnect cloned volumes from the Exchange Server, and delete the cloned volumes from the array. The new NWT4 cmdlet "Remove-NimVolume" allows me to automate the last two steps, and like previous cmdlets in this workflow, the amount of array knowledge required is minimal. I will be leveraging the "-NimbleVolumeAccessPath" parameter to identify the volumes that I would like to offline and disconnect from the server, as well as delete from the array. Three simple commands, and my clean up is done!

 

Cleaning Up

# Detach and remove Recovery Database. The following commands are executed from the Exchange Management Shell #

$> Dismount-Database -Identity RecoveryDB

Confirm
Are you sure you want to perform this action?
Dismounting database "RecoveryDB". This may result in reduced availability for mailboxes in the database.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): A

 

$> Remove-MailboxDatabase -Identity RecoveryDB

Confirm
Are you sure you want to perform this action?
Removing mailbox database "RecoveryDB".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): A
WARNING: The specified database has been removed. You must remove the database file located in
C:\Recoverydb\db\test_db_nwt4\test_db_nwt4.edb from your computer manually if it exists. Specified database: RecoveryDB

 

# Using NWT4 cmdlets to disconnect cloned volumes, then offline and delete cloned volumes from the array. These cmdlets are executed outside of the Exchange Management Shell #

$> Remove-NimVolume -NimbleVolumeAccessPath "c:\Recoverydb\db","c:\Recoverydb\tl"

 

Final Thoughts:

 

With NWT4, I was able to accomplish the workflow in this blog in a matter of minutes thanks to the new automation available through our cmdlets. But, there is more to see! For more information on NWT4 or other use cases, please be sure to check out my other blog posts:

 

NOS4 Use Case: Rapid Deployment of SQL Developer Containers with Nimble Storage

NOS4 Use Case: Leveraging NWT4 cmdlets for SQL Server Reporting or Dev/Test Workflows

 

Also, be sure to check out Anagha's blog post on the NWT4 cmdlets:

 

The “Power” of PowerShell cmdlets in Nimble Windows Toolkit 4.0

Nimble OS4 brings an exciting set of enhancements for Windows application data management, including snapshot application metadata for Exchange and SQL Server, host level cloning automation provided by the Nimble Windows Toolkit version 4 (NWT4), and the Hyper-V VSS Requestor. In this blog, I am going to cover using NWT4 cmdlets, as well as our REST API, to perform two of the most common Proof of Concept requests for SQL Server cloning that I receive:

 

(1) Cloning a SQL Server Database to a Reporting Host;

(2) Cloning a SQL Server Database for Development and Test

 

The new clone workflow automation included with the NWT4 cmdlets greatly simplifies and accelerates SQL Server database cloning, especially when clones are required at frequent intervals (i.e. Development and Test or Reporting). Leveraging these cmdlets also allows for more self-service capabilities, where workflows can be completed without logging into the array GUI or requiring detailed knowledge of the storage-level infrastructure. For example, application metadata allows for the quick identification of snapshot and volume information, and all cmdlets may be issued from the source or destination SQL Server host.

 

So let's begin....

 

Cloning a SQL Server Database to a Reporting Host:

 

This workflow is broken down into four parts:

 

(1) Get the most recent snapshot information for the volumes backing the SQL Server database (not a necessary step if using the latest snapshot collection, though I included the output for show and tell);

(2) Clone the volumes;

(3) Attach the volumes to the appropriate reporting host;

(4) Mount the database to SQL Server.

 

The reporting host does not have to be the source SQL Server, and more often than not, the desire is to offload reporting to another host altogether. I will be running these cmdlets from the reporting host itself.

 

Note that you have the option to display a list of available snapshot collections by changing the "-MaxObjects" parameter. A value of one uses the latest snapshot collection; a value greater than one displays that many of the most recent collections, newest first. Also, since cloned volumes inherit the ACL from the parent, I will be using the "-InitiatorGroup" parameter to specify the initiator group of the reporting host. Access paths are assigned in alphabetical order.

 

Cloning Production Server Data to Reporting Host with NWT4 cmdlets

# Find the appropriate SnapshotCollection for the SQL Server database, change the MaxObjects variable to show more snapshot collections. #

$> Get-NimSnapshotCollection -AppObject "\SQL03\virtdb1" -MaxObjects 5

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::16:01:26.480

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 4:01:26 PM

Snapshots            : {sql03udb-hourly-2017-05-11::16:01:26.480, sql03udb-hourly-2017-05-11::16:01:26.480}

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::15:01:27.660

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 3:01:27 PM

Snapshots            : {sql03udb-hourly-2017-05-11::15:01:27.660, sql03udb-hourly-2017-05-11::15:01:27.660}

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::14:01:25.649

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 2:01:25 PM

Snapshots            : {sql03udb-hourly-2017-05-11::14:01:25.649, sql03udb-hourly-2017-05-11::14:01:25.649}

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::13:01:26.935

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 1:01:26 PM

Snapshots            : {sql03udb-hourly-2017-05-11::13:01:26.935, sql03udb-hourly-2017-05-11::13:01:26.935}

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::12:01:25.996

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 12:01:26 PM

Snapshots            : {sql03udb-hourly-2017-05-11::12:01:25.996, sql03udb-hourly-2017-05-11::12:01:25.996}

 

GroupMgmtIP          : 192.168.35.26

Name                 : sql03udb-hourly-2017-05-11::16:01:26.480

VolumeCollectionName : sql03udb

CreationTime         : 5/11/2017 4:01:26 PM

Snapshots            : {sql03udb-hourly-2017-05-11::16:01:26.480, sql03udb-hourly-2017-05-11::16:01:26.480}

 

# Use the latest SnapshotCollection to create clones of the database volumes, and then mount cloned volumes to reporting host. Assigning mount points as part of the mount process.#

$> Get-NimSnapshotCollection -AppObject "SQL03\virtdb1" -MaxObjects 1 | Invoke-CloneNimVolumeCollection -Suffix "-clone" -InitiatorGroup "sql04-fci-n1" -AccessPath "c:\sqlclone-virtdb1\data","c:\sqlclone-virtdb1\log"

 

DeviceName        : \\.\physicaldrive9

SerialNumber      : 8d164eae19e8766d6c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName  : sql03-tl-clone

DiskSize          : 20964825

BusType           : iScsi

WindowsVolumes    : {C:\sqlclone-virtdb1\data\}

FCTargetMappings  : {}

Clone             : True

Snapshot          : False

BaseSnapshotName  : sql03udb-hourly-2017-05-11::16:01:26.480

ParentVolumeName  : sql03-tl

 

DeviceName        : \\.\physicaldrive10

SerialNumber      : bc8185b178dc434f6c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName  : sql03-udb-clone

DiskSize          : 52428127.5

BusType           : iScsi

WindowsVolumes    : {C:\sqlclone-virtdb1\log\}

FCTargetMappings  : {}

Clone             : True

Snapshot          : False

BaseSnapshotName  : sql03udb-hourly-2017-05-11::16:01:26.480

ParentVolumeName  : sql03-udb

 

# Attach the database to the SQL Server #

$> $attachSQLCMD = @"

>>USE [master]

>>GO

>>CREATE DATABASE [virtdb1-clone] ON (FILENAME = 'C:\sqlclone-virtdb1\data\virtdb1.mdf'),(FILENAME = 'C:\sqlclone-virtdb1\log\virtdb1.log') for ATTACH

>>GO

>>"@

$> Invoke-Sqlcmd $attachSQLCMD -QueryTimeout 3600 -ServerInstance "sql04-fci-n1"

 

Cloning a SQL Server Database for Development and Test:

 

Much like the reporting workflow, requested POCs for development and test usually have clones mounted on separate hosts. The difference I often see between these two workflows is that for development and test, virtual machines are commonly used as the destination systems. This workflow has a couple of extra pieces, and it is a little more complicated, as we need to interact with vCenter in order to attach volumes to the guest, but it is still worth showing, as it is high on my list of requests. I will also be making use of our REST API to modify the ACLs on the cloned volumes. Check out Julian Cates' most recent post about API enhancements in NOS4:

 

Enhanced REST API in Nimble OS 4.

 

Don't be intimidated by the REST API functions defined at the start of the script block. The real work starts when the clones are created. The REST functions are completely portable, and can be used with any PowerShell scripting against Nimble Arrays running NOS3 and NOS4.

 

For instances where the guest is running iSCSI, the previous workflow would be used to attach the cloned volumes. The following workflow focuses on attaching cloned volumes as RDMs to the guest machine. It is broken down into five parts:

 

(1) Get the most recent snapshot information for the volumes backing the SQL Server database;

(2) Clone the volumes and assign ACL;

(3) Connect to vCenter via PowerCli, attach volumes to ESXi hosts and add the volumes as RDMs to the guest;

(4) Modify attributes of the cloned volumes on the Dev/Test guest, assign mount points;

(5) Mount the SQL database.

 

Like the previous workflow, I will be running all commands from the Dev/Test guest. PowerCLI is also required for this workflow. An alternative to using the REST API is to leverage our PowerShell toolkit to assign ACLs.
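
For reference, here is a minimal sketch of that alternative using the separate Nimble PowerShell Toolkit. The cmdlet and parameter names below (Connect-NSGroup, Get-NSVolume, Get-NSInitiatorGroup, New-NSAccessControlRecord) are assumptions based on that toolkit's naming conventions rather than something demonstrated in this post, so verify them with Get-Command against your installed toolkit version; the clone and initiator group names are the ones used in the RDM workflow further below.

# Hypothetical sketch - assumed cmdlet and parameter names, verify against your toolkit version before use #

$> Connect-NSGroup -group 192.168.35.26 -credential (Get-Credential)

$> $vol = Get-NSVolume -name "sql03-udb-cloneRDM"

$> $igroup = Get-NSInitiatorGroup -name "ESX-HOSTS"

$> New-NSAccessControlRecord -vol_id $vol.id -initiator_group_id $igroup.id -apply_to "both"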

 

Cloning Production Data to Dev/Test Host with NWT4 cmdlets and REST API

# Function definitions for the REST API. Do not edit these; port them as they are written. #

# Function to get token #

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

function get-token

{

  param

  (

  [string]$array,

  [string]$uid,

  [string]$password

  )

  $data = @{

  username = $uid

  password = $password

  }

  $body = convertto-json (@{ data = $data })

  $uri = "https://" + $array + ":5392/v1/tokens"

  $token = Invoke-RestMethod -Uri $uri -Method Post -Body $body

  $token = $token.data.session_token

  return $token

}

# Function to create new ACL on volume #

function Create-ACL

{

  param

  (

  [string]$array,

  [string]$token,

  [string]$apply_to,

  [string]$vol_id,

  [string]$igroup_id

  )

  $data = @{

  apply_to = $apply_to

  initiator_group_id = $igroup_id

  vol_id = $vol_id

  }

  $body = convertto-json (@{ data = $data })

  $header = @{ "X-Auth-Token" = $token }

  $uri = "https://" + $array + ":5392/v1/access_control_records"

  $result = Invoke-RestMethod -Uri $uri -Method Post -Body $body -Header $header

  return $result.data

}

# Function to get the volume information for specific volume, so we can use the ID for other purposes, ie. adding an ACL #

function Get-volID

{

  param

  (

  [string]$token,

  [string]$array,

  [string]$volume

  )

  $header = @{ "X-Auth-Token" = $token }

  $uri = "https://" + $array + ":5392/v1/volumes/detail"

  $volume_list = Invoke-RestMethod -Uri $uri -Method Get -Header $header

  $vollist = $volume_list.data

  foreach ($vol in $vollist)

  {

  if ($vol.name -eq $volume)

  {

  $volid = $vol.id

  $volserial = $vol.serial_number       

  $volinf = $vol | select @{ Name = "Name"; Expression = { $volume } }, @{ Name = "VolID"; Expression = { $volid } }, @{ Name = "Serial_Number"; Expression = { $volserial } }

  $volinfo += $volinf

  break

  }

  }

  Write-Output $volinfo

}

# Function to get the igroup id for a specific igroup, so we can use the id for other purposes, ie. adding an ACL to a volume. #

function Get-igroupID

{

  param

  (

  [string]$token,

  [string]$array,

  [string]$name

  )

  $header = @{ "X-Auth-Token" = $token }

  $uri = "https://" + $array + ":5392/v1/initiator_groups?name=" + $name

  $igroup_list = Invoke-RestMethod -Uri $uri -Method Get -Header $header

  Write-Output $igroup_list.data.id

}

 

# Use the latest SnapshotCollection to create clones of the database volumes, and then assign ACLs with the functions listed above. #

$> Get-NimSnapshotCollection -AppObject "SQL03\virtdb1" -MaxObjects 1 | Invoke-CloneNimVolumeCollection -Suffix "-cloneRDM" -DoNotConnect

 

DeviceName        : Unknown

SerialNumber      : 73e2b0c22991a2236c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName  : sql03-tl-cloneRDM

DiskSize          : 20480

BusType           : Unknown

WindowsVolumes    : {}

FCTargetMappings  : {}

Clone             : True

Snapshot          : False

BaseSnapshotName  : sql03udb-hourly-2017-05-12::09:01:25.745

ParentVolumeName  : sql03-tl

 

DeviceName        : Unknown

SerialNumber      : a28d3b90d274efc36c9ce9008b7f40ca

GroupManagementIP : 192.168.35.26

NimbleVolumeName  : sql03-udb-cloneRDM

DiskSize          : 51200

BusType           : Unknown

WindowsVolumes    : {}

FCTargetMappings  : {}

Clone             : True

Snapshot          : False

BaseSnapshotName  : sql03udb-hourly-2017-05-12::09:01:25.745

ParentVolumeName  : sql03-udb

 

$> $token = Get-token -array 192.168.35.26 -uid admin -password XXXXXX

$> $db_clone_id = Get-volID -array 192.168.35.26 -token $token -volume sql03-udb-cloneRDM

$> $tl_clone_id = Get-volID -array 192.168.35.26 -token $token -volume sql03-tl-cloneRDM

$> $igroup_id = Get-igroupID -array 192.168.35.26 -token $token -name "ESX-HOSTS"

$> Create-ACL -apply_to "both" -array 192.168.35.26 -token $token -igroup_id $igroup_id -vol_id $db_clone_id.VolID

$> Create-ACL -apply_to "both" -array 192.168.35.26 -token $token -igroup_id $igroup_id -vol_id $tl_clone_id.VolID

 

# Attach cloned volumes to guest.#

$> Get-Cluster | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs

$> $snapdb = Get-VMhost | Get-ScsiLun | where { $_.CanonicalName -match "eui." + $db_clone_id.Serial_Number }

$> $snaptl = Get-VMhost | Get-ScsiLun | where { $_.CanonicalName -match "eui." + $tl_clone_id.Serial_Number }

$> New-HardDisk -VM "sql04-fci-n1" -DiskType RawPhysical -DeviceName $snapdb.ConsoleDeviceName

$> New-HardDisk -VM "sql04-fci-n1" -DiskType RawPhysical -DeviceName $snaptl.ConsoleDeviceName

 

# Change Nimble Volume attributes and assign mount point #

$> Set-NimVolume -SerialNumber $db_clone_id.Serial_Number -ReadOnly $false -ShadowCopy $false -Hidden $false -Online $true -Verbose

$> Set-NimVolume -SerialNumber $tl_clone_id.Serial_Number -ReadOnly $false -ShadowCopy $false -Hidden $false -Online $true -Verbose

$> Get-Disk | Where-Object -FilterScript { $_.SerialNumber -eq $db_clone_id.Serial_Number } | Get-Partition | Add-PartitionAccessPath -AccessPath "C:\sqlclone-virtdb1\db-rdm"

$> Get-Disk | Where-Object -FilterScript { $_.SerialNumber -eq $tl_clone_id.Serial_Number } | Get-Partition | Add-PartitionAccessPath -AccessPath "C:\sqlclone-virtdb1\tl-rdm"

 

# Attach the database to SQL Server #

$> $attachSQLCMD = @"

>>USE [master]

>>GO

>>CREATE DATABASE [virtdb1-clone] ON (FILENAME = 'C:\sqlclone-virtdb1\db-rdm\virtdb1.mdf'),(FILENAME = 'C:\sqlclone-virtdb1\tl-rdm\virtdb1.log') for ATTACH

>>GO

>>"@

$> Invoke-Sqlcmd $attachSQLCMD -QueryTimeout 3600 -ServerInstance "sql04-fci-n1"

 

Final Note:

 

If you are interested in other Dev/Test workflows, be sure to check out the other SQL Server focused blog in this series:

 

NOS4 Use Case: Rapid Deployment of SQL Developer Containers with Nimble Storage.

 

In that blog post, we cover cloning production SQL Server databases to SQL Server Developer instances running in Windows Containers.

 

Also, check out these existing blogs in our NOS4 series that provide more information about NWT4, or cover use cases that leverage new functionality:

 

NWT 4: The “Power” of PowerShell cmdlets in Nimble Windows Toolkit 4.0, by Anagha Barve

Hyper-V VSS Requestor: Nimble OS 4 – Hyper-V VSS Requestor, by Jason Monger

Earlier in the blog series, both Julian Cates and I have blogged on the Enhanced REST API in Nimble OS 4 and on Quality of Service (QoS Limits).

 

I therefore thought it would be a useful example to join the two subjects together and share a simple script that allows you to set a QOS limit using the API. Of course this can be done via the GUI and the command line (as demonstrated in my earlier blog), however I see many use cases where setting or lifting QOS needs to be automated as part of a batch operation or a predefined workflow. I also thought this would serve as a good introduction to the API, as interactions with other objects in NimbleOS via the API are very similar to what is demonstrated in this blog.

 

Throughout my career I’ve meddled with Perl scripting, so that is the programming language (interpreter, to be precise) I have chosen to write my script in. The script is attached below and, with best practice in mind, I have annotated it with lots of comments; however, I thought I would also step through it in this blog post.

 

The script itself is made up of some initial definitions, a main body and then three sub-procedures:

  - The first sub-procedure logs into the array using the REST API.

  - The second sub-procedure looks for the volume that we are interested in setting QOS on (I need to look up the volume in order to obtain the volume's unique identifier).

  - The final sub-procedure sets the defined QOS on the volume in question and then prints the result.

 

So let’s break this down into a little more detail:

 

Initial Definitions


The library use definitions are third-party library extensions that are loaded to extend the standard Perl functionality. These include functions for logging into the array using certificates, decoding/encoding JSON responses, and communicating using REST and SSL. These libraries need to be downloaded and installed into your Perl interpreter in order for the script to work successfully.


QOSBLOG1.png


Next, we have the variable definitions. These define the array IP address/hostname that hosts the volume we wish to set QOS on, the username/password used to log in to the array, and the certificate file and location needed to create the SSL connection. We can then see the individual QOS parameters that we are going to pass, which dictate the volume we are working with, the QOS limit type we intend to set, and the limit that we will restrict the volume to.

 

Note: LimitType can be limit_iops or limit_mbps in order to set the appropriate QOS.  The limit value expresses the QOS limit, in IOPS or Megabytes per Second.  If -1 is supplied then the limit is removed.

 

The Main Code

The main code is the main functions that call the three sub-procedures:

&LoginArray - logs into the defined array

$VolID = &GetVolID($VolName) - grabs the unique volume ID for the given Volume

&BuildQOSRequest - uses the Volume ID from the previous step to set the QOS Limit and prints out the result.

 

QOSBLOG2.png


Let’s look at each sub-procedure:

 

LoginArray Sub-procedure

 

This is a generic procedure that will be common to logging into any array when you want to work with the API. It takes the parameters defined in the initial definitions section (IP address/hostname, login name, password and certificate file location), builds the REST/API call to supply those parameters, and POSTs them to the array in order to receive a client connection ($client) in return.


QOSBLOG3.png

 

The client connection is then used in our subsequent communication with the array.

 

GetVolID Sub-procedure

 

This procedure is provided with a Volume Name parameter (the volume we wish to set QOS on in this instance); it queries the API for all the volumes and their associated volume IDs, and returns the volume ID for the volume in question.

 

This is an important step because most of the APIs that edit or read volume details utilise the VolumeID rather than the Volume Name.

 

From below you can see the API call to GET, all the information about all the volumes.

 

QOSBLOG4.png

 

Hint: if you uncomment the print line under the first GET call, you will see the output of this request, which is a list of all of the volumes and their volume IDs in JSON format.

 

Next we iterate over the list of all the volumes that was returned and look for the volume we are interested in. Once we find it, we read its VolumeID and return it to the next procedure.

 

 

BuildQOSRequest sub-procedure

 

The final procedure is to take the Volume ID from the previous step and then set the QOS on the volume based on the parameters at the top of the script.

 

This is achieved by building the JSON body and then PUTting the API call to volumes/$volID.

 

Hint: Once again, if you uncomment the line at the bottom of this section you will see all the parameters that are associated with the volume and can be manipulated using the API. There is a lot of good information in there!


QOSBLOG5.png
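
For readers who would rather stay in PowerShell, here is a minimal equivalent sketch of the same three steps, reusing the get-token and Get-volID functions shared in the earlier NWT4 REST example in this series. The array IP, volume name and limit value are placeholders; the endpoints and attributes (tokens, volumes/detail, limit_iops/limit_mbps) are the ones described in this post, but treat the sketch as illustrative rather than a supported script.

# Sketch only: log in, look up the volume ID, then PUT the QOS limit onto volumes/<id>. Requires the get-token and Get-volID functions defined earlier in this series. #

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$> $array = "192.168.35.26"

$> $token = get-token -array $array -uid admin -password XXXXXX

$> $vol = Get-volID -array $array -token $token -volume "vol1"

# limit_iops or limit_mbps; a value of -1 removes the limit #

$> $body = ConvertTo-Json @{ data = @{ limit_iops = 5000 } }

$> $header = @{ "X-Auth-Token" = $token }

$> Invoke-RestMethod -Uri ("https://" + $array + ":5392/v1/volumes/" + $vol.VolID) -Method Put -Body $body -Header $header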

 

The script is attached here.

Note: there is no formal support for any script that you download from a community website, so always take care not to be like this poor fellow when testing your scripts; "with great power comes great responsibility" - Spidey.

Web hosting company accidentally deletes part of the internet while trying to clean up its servers | The Independent


There is also a short video demoing the script below (note: the video has no sound!):



 

Setting QOS on a Volume using the Nimble API - YouTube


Hopefully you have found this blog useful and informative; please post comments or queries below. However, I am not a developer by profession, so please don’t be too critical of my limited scripting ability :-)


Thanks

Towards the end of February, Microsoft announced the general availability of SQL Server 2016 Developer Edition in Windows Containers. For most people, this announcement went unnoticed. For me, it was a moment of great excitement, and I feverishly set about the task of developing a demo environment. Gone were the database size limits placed upon me by the SQL Server Express images. I could now test workflows with larger datasets (greater than 10GB). Even more exciting, Nimble Storage had the tools, a combination of Nimble Windows Toolkit version 4 (NWT4) for Nimble OS4, and the Docker Volume Plugin for Windows Containers, available for me to get SQL Developer containers up and running in no time.

 

Building the Environment:

 

Getting up and running on Windows Containers is pretty well documented. I began by enabling the Containers feature on Server 2016, installing the Docker Volume Plugin for Windows Containers, and installing NWT4, as I knew I would leverage the new cmdlets and cloning enhancements for the workflows ahead. The next step was installing Docker and Docker-Compose, and I was off and running.

 

Installing Docker and Docker-Compose

# Install Docker #

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Install-Package -Name docker -ProviderName DockerMsftProvider

Restart-Computer -Force   

 

# Install Docker-Compose #

Invoke-WebRequest -UseBasicParsing -Outfile $Env:ProgramFiles\docker\docker-compose.exe https://github.com/docker/compose/releases/download/1.11.2/docker-compose-Windows-x86_64.exe

 

 

Next, I installed the Docker PowerShell Module, and then I downloaded the SQL Server 2016 Developer container image. With that, I was ready to deploy my first Docker SQL Server Container.

 

Installing Docker PowerShell Module and Pulling SQL Server Developer Image

# Install Docker PowerShell #

Register-PSRepository -Name DockerPS-Dev -SourceLocation https://ci.appveyor.com/nuget/docker-powershell-dev

Install-Module Docker -Repository DockerPS-Dev

 

# Pull container image. Can use native commands "docker image pull <image name>" to download image, or "docker run <image name>" to download and install image. #

Pull-ContainerImage microsoft/mssql-server-windows-developer -Verbose

 

Humble Beginnings:

 

Before doing anything fancy, I decided to deploy my first container using a simple, manual process. First, I used the Docker Volume Plugin to clone and import the volumes for an existing database I already had running, and then I created my SQL Server Developer container using the "docker run" command syntax. In order to pass the appropriate information to the Docker Volume Plugin, I leveraged NWT4 cmdlets to query available snapshot collections for my existing database volumes. By using NWT4 as part of my toolset, I was able to build my query based on application metadata that is stored as part of the application-consistent snapshot of my SQL database.

 

The whole process from start to finish took less than two minutes!!! That's right, in less than two minutes, I had an entire SQL instance up and running, with a complete clone of the production database attached and ready for operations. Creating another one took another hot minute, and before you knew it, I had 10 SQL instances running on my Windows Server, each with their own version of the cloned production database. Excessive? Maybe, but I just couldn't help myself.

 

Creating the SQL Server Developer Container

# Using NWT4 to get SnapColl information #

Get-NimSnapshotCollection -AppObject "\SQL03\virtdb1" -MaxObjects 1 | select -ExpandProperty Snapshots | select SnapshotCollectionName

 

SnapshotCollectionName

InitialSnap

InitialSnap

 

# Leverage Docker Volume Plugin to clone volumes from last snapshot collection and import into Docker #

docker volume create svr3s-udb-iscsi-array1 -d nimble -o importVolAsClone=exch2016-svr2s-udb-iscsi -o snapshot=InitialSnap -o forceImport 

docker volume create svr3s-tl-iscsi-array1 -d nimble -o importVolAsClone=exch2016-svr3s-tl-iscsi -o snapshot=InitialSnap -o forceImport

 

# Option 1: Create Docker Container, bind Nimble Volumes #

docker run --name sql40 -it --rm -v svr3s-tl-iscsi-array1:c:\sqldata\tl\svr3s-tl-iscsi-array1 -v svr3s-udb-iscsi-array1:c:\sqldata\udb\svr3s-udb-iscsi-array1 --network=BridgeNetwork --ip=10.10.10.10 -e sa_password=XXXXXXXX -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

 

# Option 2: Create Docker Container, bind Nimble Volumes, and automatically attach SQL Database using image attach_dbs option #

docker run --name sql40 -d -v svr3s-udb-iscsi-array1:c:\sqldata\udb\svr3s-udb-iscsi-array1 -v svr3s-tl-iscsi-array1:c:\sqldata\tl\svr3s-tl-iscsi-array1 -e attach_dbs="[{'dbName':'virtdb1_clone','dbFiles':['c:\\sqldata\\udb\\svr3s-udb-iscsi-array1\\MSSQL\\DATA\\virtdb1.mdf','c:\\sqldata\\tl\\svr3s-tl-iscsi-array1\\MSSQL\\DATA\\virtdb1.ldf']}]" --network=BridgeNetwork --ip=10.10.10.10 -e sa_password=XXXXXXX -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

 

I Feel Some Automation Coming:

 

So now that I had my first SQL Server container, I started thinking, there has to be a way to automate this, right? What if I needed to create multiple copies of a production SQL database for a team of developers? What if developers needed to refresh the dataset periodically? Is there any way to make the cloning/refreshing of data into more of a self service workflow? All of these things are possible by leveraging NWT4 with the Docker Volume Plugin.

 

I knew I could get the latest snapshot based on application metadata. But, for this workflow to be complete, I needed to be able to get path information for SQL Server database files, so that I could construct the appropriate "attach_dbs" string and automatically mount my cloned SQL Server database to the container as part of the build. So I broke the workflow down into four parts:

 

(1) Leverage SQL Management Objects (SMO) to query SQL Server for database files and paths;

(2) Match the paths to the appropriate Nimble Storage Volume leveraging NWT4 cmdlets;

(3) Normalize the paths into the appropriate format for the "attach_dbs" option;

(4) Use the information gathered in the previous 3 steps to automatically build my SQL container.

 

I also wrapped the script in a function, with user supplied variables.
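
For illustration, here is a minimal sketch of steps (1) and (3) from the list above. It assumes the SMO assemblies are available on the host and uses the server and database names from earlier in this post; the escaping logic is illustrative only and is not the full script shown in the video.

# Sketch of steps (1) and (3): query SMO for the database file paths, then escape them into the attach_dbs format used in Option 2 above #

$> [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

$> $smo = New-Object Microsoft.SqlServer.Management.Smo.Server "SQL03"

$> $db = $smo.Databases["virtdb1"]

# (1) Collect data and log file paths from SQL Server #

$> $files = @($db.FileGroups | ForEach-Object { $_.Files } | ForEach-Object { $_.FileName })

$> $files += @($db.LogFiles | ForEach-Object { $_.FileName })

# (3) Double the backslashes and build the attach_dbs value; step (2), remapping each path to its cloned Nimble volume mount point, is omitted here #

$> $escaped = $files | ForEach-Object { "'" + ($_ -replace '\\', '\\') + "'" }

$> $attach_dbs = "[{'dbName':'virtdb1_clone','dbFiles':[" + ($escaped -join ",") + "]}]"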

 

To make the entire workflow more appealing to the Docker crowd, with some advice from Michael Mattsson (check out Michael Mattsson's Blog), I also had the script create a Docker-Compose file, so the whole project could then be built upon to make multi-container applications. I created a video to show three workflows with the new Docker-Compose automation:

 

(1) Cloning a SQL database to a container using a self-service workflow;

(2) Refreshing SQL container data;

(3) Lift and shift of a SQL Server database to a container.

 

We demoed the video a few weeks ago at DockerCon 2017 in Austin, TX, and it was quite a hit!  Here is a link to the video on Vidlet: Vidlet

 

Useful Links:
Tech Preview: Windows Containers Docker Volume Plugin: Tech Preview: Windows Containers Docker Volume plugin

The "Power" of PowerShell Cmdlets in Nimble Windows Toolkit 4: The “Power” of PowerShell cmdlets in Nimble Windows Toolkit 4.0

Nimble Connect: Docker : Docker

The Nimble REST API has come a long way since its introduction in Nimble OS version 2.3.  That version was the initial rollout of the REST API, intended to eventually completely replace the previous SOAP API.  (In case you missed it, I wrote an introductory blog post covering the NOS 2.3 REST API here.)  While it was very useful and covered many object sets, there was still much work to be done.  Nimble OS 3 built upon this by introducing several new object sets, providing even more functionality.  Which brings us to today's Nimble OS 4, which contains the most complete REST API set yet, with more object sets supported than ever before.  Not only that, but you'll also find that for many objects, new attributes have been added - while still maintaining backwards compatibility with the objects in previous Nimble OS versions. Finally, many object sets support actions which go beyond the typical CRUD operations - create, read, update, and delete.  Let's take a look at each of these in turn.

 

Object Sets

 

As always, the best way to explore the Nimble REST API is through the online REST API Reference. There you'll find not only the object sets referenced here, but all the relevant information you can get from them and the actions you can perform on them. To give you an idea of what's available as of Nimble OS 4, take a look at the following table:

 

 

access_control_records

active_directory_memberships

alarms

application_categories

application_servers

arrays

audit_log

chap_users

disks

events

 

fibre_channel_configs

fibre_channel_initiator_aliases

fibre_channel_interfaces

fibre_channel_ports

fibre_channel_sessions

folders

groups

initiator_groups

initiators

jobs

 

snapshot_collections

snapshots

software_versions

space_domains

subnets

tokens

user_groups

users

versions

volume_collections

volumes

 

New Attributes

 

While I won't spoil all the fun of exploring the API and finding out for yourself what is available, I do want to point out that many object sets contain new and useful attributes which weren't present when the original REST API rolled out. Have I mentioned the REST API Reference?  You'll find everything nicely documented there.  But how about an example in the meantime?  Certainly.  Let's take a look at the venerable "volumes" object set.  We'll grab a sample from the API reference and look at a "volumes" object in JSON format:

 


{
  "data" : {
  "serial_number" : "5596fd1da1c87b8d6c9ce900d3040000",  "block_size" : 4096,
  "warn_level" : 80,
  "dest_pool_id" : "",
  "pool_id" : "0a00000000000004d3000000000000000000000001",
  "snap_usage_compressed_bytes" : 0,
  "name" : "vol0.762157726640911",
  "last_modified" : 1426776077,
  "snap_usage_populated_bytes" : 0,
  "perfpolicy_id" : "0300000000000004d3000000000000000000000001",
  "num_connections" : 0,
  "description" : "",
  "move_bytes_migrated" : 0,
  "iscsi_sessions" : null,
  "app_uuid" : "",
  "num_fc_connections" : 0,
  "agent_type" : "none",
  "pool_name" : "default",
  "multi_initiator" : false,
  "base_snap_name" : "",
  "size" : 100,
  "perfpolicy_name" : "default",
  "owned_by_group" : "g1a1",
  "snap_limit" : 9223372036854775807,
  "snap_limit_percent" : -1,
  "move_start_time" : 0,
  "encryption_cipher" : "none",
  "total_usage_bytes" : 0,
  "vol_usage_compressed_bytes" : 0,
  "snap_usage_uncompressed_bytes" : 0,
  "snap_reserve" : 0,
  "volcoll_name" : "",
  "projected_num_snaps" : 0,
  "full_name" : "",
  "online" : true,
  "cache_policy" : "normal",
  "vol_state" : "online",
  "num_snaps" : 0,
  "cache_pinned" : false,
  "clone" : false,
  "read_only" : false,
  "reserve" : 0,
  "creation_time" : 1426776077,
  "id" : "0600000000000004d3000000000000000000000005",
  "metadata":[
  {"key":"key1","value":"val1"},
  {"key":"key2","value":"val2"}
  ],
  "caching_enabled" : true,
  "snap_warn_level" : 0,
  "thinly_provisioned" : true,
  "move_aborting" : false,
  "move_bytes_remaining" : 0,
  "fc_sessions" : null,
  "num_iscsi_connections" : 0,
  "offline_reason" : null,
  "pinned_cache_size" : 0,
  "online_snaps" : null,
  "access_control_records" : null,
  "parent_vol_name" : "",
  "cache_needed_for_pin" : 104857600,
  "limit" : 100,
  "target_name" : "iqn.2007-11.com.storage:vol0.762157726640911-v00000000000004d3.00000005.000004d3",
  "volcoll_id" : "",
  "usage_valid" : true,
  "search_name" : "vol0.762157726640911",
  "vol_usage_uncompressed_bytes" : 0,
  "parent_vol_id" : "",
  "base_snap_id" : "",
  "dest_pool_name" : "",
  "upstream_cache_pinned" : false,
  "folder_id": "",
  "folder_name": "",
  "avg_stats_last_5mins": {
  "read_iops": 10,
  "read_throughput": 11,
  "read_latency": 20,
  "write_iops": 25,
  "write_throughput": 90,
  "write_latency": 10,
  "combined_iops": null,
  "combined_throughput": 100,
  "combined_latency": 90
  }
  }
}

 

OK, now admittedly that's a ton of information to wade through.  I was hoping that something towards the end might have caught your eye, though.  See it there?  Just at the end.  Yep, finally some volume performance data that's available via the API!  The avg_stats_last_5mins attribute returns an array of performance metrics which gives you an insight into how the volume has been faring in terms of IOPS, throughput, and latency. That's certainly new!  And a welcome addition, I might add.
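
As a quick illustrative sketch of consuming this attribute (reusing the token-header pattern from the PowerShell REST functions earlier in this series, and the yourarray placeholder host), you could pull the 5-minute averages for every volume like so:

# Sketch: list each volume's 5-minute average read/write IOPS via the new attribute. Assumes $token was obtained from /v1/tokens as shown earlier in this series. #

$> $header = @{ "X-Auth-Token" = $token }

$> $vols = Invoke-RestMethod -Uri "https://yourarray:5392/v1/volumes/detail" -Method Get -Header $header

$> $vols.data | Select-Object name, @{ Name = "read_iops"; Expression = { $_.avg_stats_last_5mins.read_iops } }, @{ Name = "write_iops"; Expression = { $_.avg_stats_last_5mins.write_iops } }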

 

Now, to the best of my knowledge none of the already existing attributes have been removed. So, existing automation should work just fine.  Now though, you'll have even more detail to work with once you decide to take advantage of it.

 

Actions

 

Not everything can easily be framed in terms of CRUD - create, read, update, delete. Now many of the object sets support custom actions. Let's take a look at another example object set - this time we'll use "replication_partners".  The operations for "replication_partners" are as follows:

 

Create 

Delete 

Pause  

Read   

Resume 

Test   

Update

 

As you can see, in addition to the CRUD operations we now have Test, Pause, and Resume.  Each of these would be POST operations using the following URI format:

 

https://yourarray:5392/v1/replication_partners/id/actions/[test|pause|resume]
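
As a hedged sketch of what one of these action calls looks like in practice (reusing the token-header pattern from earlier in this series; the partner id would come from a GET on the replication_partners object set, and some actions may additionally expect a JSON body of parameters):

# Sketch: pause replication to a partner via its custom action endpoint #

$> $header = @{ "X-Auth-Token" = $token }

$> Invoke-RestMethod -Uri ("https://yourarray:5392/v1/replication_partners/" + $partner_id + "/actions/pause") -Method Post -Header $header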

 

You probably saw it coming a mile away that I would point you towards the REST API Reference for further detail.  You're getting good at this.  Go forth now, and automate all the things.

Today's post is the latest in the NimbleOS 4 - A Detailed Exploration series and is written by Jan Ulrich Maue.  In this post we will explore the deeper integration with Oracle with regard to the use of snapshots and clones for rapid and efficient backup/restore and cloning.  This can allow businesses to dramatically improve time to value by providing every developer with a full and current clone of a production environment rapidly and efficiently.  This feature snaps and clones databases, not just the storage volumes; it manages the clone, rename, and recover process to make the cloned database fully usable.

 

In the later versions of NimbleOS 3, and with the advent of NimbleOS 4, Nimble launched the Nimble Linux Toolkit (NLT), which not only provides host utilities that greatly simplify administration of the Linux storage stack, but also provides much deeper integration with Linux-based applications. The first release, NLT 2.0, was made Generally Available in November!  In addition to the well-known Nimble Connection Manager (NCM), the toolkit now also includes a Nimble Docker Volume Plugin and the Nimble Oracle Application Data Manager. This blog post will focus on the latter, specifically the Oracle integration.  This feature allows database administrators to easily create storage-based snapshots of their Oracle database. The snapshots generated in this way can then be cloned and mounted on the same server or on a second "remote" server. The cloned database is then automatically recovered and opened. All this is possible with only one command and no deep knowledge of the Nimble array, and of course it utilises capacity-efficient snapshots and zero-copy cloning for rapid cloning of databases, without compromising the performance of the source database.

 

In order to use this feature there are a number of prerequisites:

  • NimbleOS 3.5 or higher
  • Oracle 11.2.x.x (11gR2) - Single Instance using ASM, currently no RAC Support
  • RHEL 6.5 and above
  • RHEL 7.1 and above
  • Oracle Linux 6.5 and above*
  • CentOS 6.5 and above*

 

* Note: Oracle Linux and CentOS are supported but not QA verified in this release

 

Step 1: Installation and configuration

When installing the Nimble Linux Toolkit, you can choose which components you wish to install. In the example below we have decided to install only NCM and the Oracle Application Manager - somewhat bulkily named NORADATAMGR. We have chosen not to install the Docker plugin in this example.  I have also installed Oracle 12c, since 11g no longer runs on CentOS 7. On the first Nimble array, I create six volumes: one OCR (Oracle Cluster Registry) volume for the two CentOS VMs and four volumes for a "DATA" disk group in ASM, where the database will be hosted. For the four ASM volumes I create a volume collection "oracle" and specify the second virtual array as a replication partner. This is important for later, because NORADATAMGR can also manage the replication of the snapshots.

 

OracleCloning_Volumes.jpeg

 

For the Oracle users to use the Nimble Oracle App Manager on the Linux servers, the rights and permissions must be adapted after installation. The process to do this is well described in the Nimble Linux Integration Guide:

 

NORA-rights.png

 

To use the Oracle App Manager, the NLT's Oracle Service must first be "enabled" and then started:

 

[root@ora-prod oracle]# nltadm --enable oracle

Successfully enabled oracle plugin

[root@ora-prod oracle]# nltadm --start oracle

Done

 

Then check the status:

 

[root@ora-prod oracle]# nltadm --status

Service Name               Service Status

------------------------------+--------------

Connection-Manager         RUNNING 
Oracle-App-Data-Manager    RUNNING

 

 

Afterwards, the Oracle App Manager must be registered with the Nimble group, and --verify checks that everything is running. The management IP of the Nimble group leader is used as the IP address.

 

[root@ora-prod oracle]# nltadm --group --add --ip-address 192.168.43.100 --username admin --password xxxx

Successfully added Nimble Group information for 192.168.43.100.

[root@ora-prod oracle]# nltadm --group --verify --ip-address 192.168.43.100

Successfully verified management connection to Nimble Group 192.168.43.100.

 

As a final step in the preparation process, the Oracle App Manager must be informed of the servers from which the snapshot and cloning processes for a particular Oracle instance may be initiated. Snapshots can only be generated by the local server on which the instance is running. Cloning can be initiated by the local server and also by a second server (also called the "remote server"). For this reason, I enter both CentOS VMs - the configuration is done as the oracle user with the command noradatamgr and the options --edit and --allow-hosts:

 

 

[oracle@ora-prod oracle]# noradatamgr --edit --instance ORCL --allow-hosts ora-prod,ora-clone

Allowed hosts set to: ora-prod,ora-clone

Success: Storage properties for database instance ORCL updated.

 

If, at this time, there was no volume collection for the ASM disks on the Nimble array, it would now be created automatically. The volume mapping and also the allowed hosts can be displayed with the --describe option:

 

[oracle@ora-prod oracle]# noradatamgr --instance ORCL --describe

Diskgroup: DATA

    Disk: /dev/oracleasm/disks/NIMBLEASM1

        Device: dm-3

        Volume: ora-asm1

        Serial number: a48de9fac373d3286c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM4

        Device: dm-4

        Volume: ora-asm4

        Serial number: e3c26f0c9aa46f266c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM3

        Device: dm-1

        Volume: ora-asm3

        Serial number: 12e67015f9980a0f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/NIMBLEASM2

        Device: dm-0

        Volume: ora-asm2

        Serial number: 75e25e2d1b9086886c9ce900a5a40084

        Size: 20GB

 

Allowed hosts: ora-prod,ora-clone

 

That's it! The Nimble Oracle App Manager does not need a repository or anything similar. All the data required for the Oracle cloning process (such as the Oracle pfile and redo logs) is stored as metadata directly in the snapshot on the Nimble array. In this way, a snapshot can even be mounted by a second Linux system (also called the "remote" system), and the database is automatically recovered and started. I would like to describe this in the following sections.

 

 

Step 2: Create snapshots

The Nimble Oracle App Manager can create two types of snapshots for Oracle instances on the server on which it is installed: 1. crash-consistent and 2. application-aware snapshots. In the first case, an IO-consistent snapshot is created for all Nimble volumes on which the Oracle database is located. The database is therefore in a state as if the server had experienced an unplanned outage, and when you open it later, Oracle must perform a crash recovery. In the second case, the database is first put into "Hot Backup" mode, the snapshot is generated, and then the database is taken back out of "Hot Backup" mode. When you open it later, only a normal media recovery is needed.

 

As already described, only one command is required for this. The --hot-backup option allows you to put the database into hot backup mode. Using the --replicate option, I can also specify whether the snapshots should be copied to a second array using Nimble replication. A prerequisite for this is that a replication partner is configured for the volume collection, as described above.

 

[oracle@ora-prod oracle]# noradatamgr --snapshot --snapname snap1-hotbackup --instance ORCL --hot-backup --replicate

Putting instance ORCL in hot backup mode...

Success: Snapshot backup snap1-hotbackup completed.

Taking instance ORCL out of hot backup mode...


[oracle@ora-prod oracle]# noradatamgr --snapshot --snapname snap1-crash --instance ORCL

Success: Snapshot backup snap1-crash completed.


With --list-snapshots, and additionally with --verbose, you can get a detailed description of the snapshots for the corresponding instance:

 

[oracle@ora-prod oracle]# noradatamgr --list-snapshots --instance ORCL --verbose

Snapshot Name: snap1-crash taken at 16-11-28 12:07:39

    Instance: ORCL

    Snapshot can be used for cloning ORCL: Yes

    Hot backup mode enabled: No

    Replication status: N/A

    Database version: 12.1.0.2.0

    Host OS Distribution: CentOS

    Host OS Version: 7.2

Snapshot Name: snap1-hotbackup taken at 16-11-28 12:05:17

    Instance: ORCL

    Snapshot can be used for cloning ORCL: Yes

    Hot backup mode enabled: Yes

    Replication status: complete

    Database version: 12.1.0.2.0

    Host OS Distribution: CentOS

    Host OS Version: 7.2


This command can also be executed on the second "remote" server on which the productive instance is not running:

 

[oracle@ora-clone oracle]# noradatamgr --list-snapshots --instance ORCL

-----------------------------------+--------------+--------+-------------------

Snap Name                      Taken at        Instance     Usable for cloning

-----------------------------------+--------------+--------+-------------------

snap1-crash                    16-11-28 12:07  ORCL         Yes               

snap1-hotbackup                16-11-28 12:05  ORCL         Yes           



Step 3: Create and mount clones and start the database

After creating two snapshots, we might want to use them for a test and development system. This is quite simple, as the entire process - creating the clones from the snapshots, mapping them to the server, rescanning the devices, and the database activities (mounting, recovering, opening and starting the database) - is completely automated with a single command!  As output I get a "description" of the cloned database. Optionally, I could even change individual Oracle parameters in the PFILE during database startup or assign another Oracle SID. That is beyond the scope of this blog, but I wanted to highlight that there is an option to modify the cloned database.

 

 

[oracle@ora-prod oracle]# noradatamgr --clone --instance ORCL --clone-name clonedDB --snapname snap1-crash

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance clonedDB ... completed.

Diskgroup: CLONEDDBDATADG

    Disk: /dev/oracleasm/disks/CLONEDDB0001

        Device: dm-9

        Volume: clonedDB-DATA1

        Serial number: 2030631af2be85fc6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB0002

        Device: dm-3

        Volume: clonedDB-DATA4

        Serial number: 7e03a8ce03e7c9296c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB

        Device: dm-5

        Volume: clonedDB-DATA3

        Serial number: 4fee6d9baef2f29a6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/CLONEDDB0003

        Device: dm-8

        Volume: clonedDB-DATA2

        Serial number: f1c42b1793cab81a6c9ce900a5a40084

        Size: 20GB

 

Allowed hosts: ora-prod


You can also omit the --snapname option. In that case, the Oracle App Manager first creates a snapshot and uses it as the basis for the clones. However, this only works on the production database server; it is not possible on the remote server, since no snapshot can be generated from there.

 

[oracle@ora-prod oracle]# noradatamgr --clone --instance ORCL --clone-name clonedDB

Initiating snapshot backup BaseFor-clonedDB-45512c96-0337-497b-962b-d2867fbe6827 for instance ORCL...

Success: Snapshot backup BaseFor-clonedDB-45512c96-0337-497b-962b-d2867fbe6827 completed.

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance clonedDB ... completed.

[....]



To prove that both instances are running, I will connect with SQL*Plus:


[oracle@ora-prod oracle]# echo $ORACLE_SID

ORCL

[oracle@ora-prod oracle]# export ORACLE_SID=clonedDB

[oracle@ora-prod oracle]# sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Wed Nov 28 14:23:13 2016

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options

 

SQL> select dbid, name, created

  2  from v$database;

 

      DBID NAME      CREATED

---------- --------- ---------

1456485204 CLONEDDB  28-NOV-16

 

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options


[oracle@ora-prod oracle]# export ORACLE_SID=ORCL

[oracle@ora-prod oracle]# sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Wed Nov 28 14:25:14 2016

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics

and Real Application Testing options

 

SQL> select dbid, name, created

  2  from v$database;

 

      DBID NAME      CREATED

---------- --------- ---------

1456485204 ORCL      23-NOV-16

 

 

Cloning on the remote server

As already described, the cloning process can also be started on the remote server. However, I have to specify an existing snapshot, since no new snapshot can be created from there. In the near future, it will even be possible to perform the cloning on the "remote" Nimble array, that is, on the replication partner of the primary array. In this way, a test environment can be started that is physically separated from the primary side, or a simple DR solution can be established.

 

[oracle@ora-clone oracle]# noradatamgr --clone --instance ORCL --clone-name remoteDB --snapname snap1-hotbackup

Cloning diskgroups ... completed.

Mounting diskgroups ... completed.

Building instance remoteDB ... completed.

Diskgroup: REMOTEDBDATADG

    Disk: /dev/oracleasm/disks/REMOTEDB0001

        Device: dm-9

        Volume: remoteDB-DATA1

        Serial number: 3009abe001bc8c0f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB0002

        Device: dm-3

        Volume: remoteDB-DATA4

        Serial number: 5fcbec2bc8ef23fa6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB

        Device: dm-5

        Volume: remoteDB-DATA3

        Serial number: 8e32ed641de4c68f6c9ce900a5a40084

        Size: 20GB

    Disk: /dev/oracleasm/disks/REMOTEDB0003

        Device: dm-8

        Volume: remoteDB-DATA2

        Serial number: 01a2aaa7acc7e9796c9ce900a5a40084

        Size: 20GB

 

Allowed hosts: ora-clone


Step 4: Delete the clones and snapshots

Deleting the clones and the ASM disk groups is also very easy. After the databases have been shut down with SQL*Plus, I can clean up using the --destroy option on each server:

 

[oracle@ora-prod oracle]# noradatamgr --destroy --diskgroup CLONEDDBDATADG

Success: Diskgroup CLONEDDBDATADG deleted.


[oracle@ora-clone oracle]# noradatamgr --destroy --diskgroup REMOTEDBDATADG

Success: Diskgroup REMOTEDBDATADG deleted.

 

Individual snapshots can be deleted from the primary server using the --delete-snapshot option. The --delete-replica option is also available for deleting the replicas on the remote array.

 

[oracle@ora-prod oracle]# noradatamgr --delete-snapshot --instance ORCL --snapname snap1-crash

Success: Snapshot snap1-crash deleted.

 

 

Conclusion

The Nimble Oracle Application Data Manager is very easy to implement and configure. It gives database administrators the ability to create storage-based snapshots without additional knowledge of the connected Nimble array, and to present them as a separate environment for a test or development system, virtually at the touch of a single command. All of the background work, creating the clones, mapping them to the target server, mounting the clones, and even integrating with ASM disk groups, is completely automatic. Even the necessary Oracle steps, such as mounting the database, copying the necessary logs, recovery, and opening the database, are fully automated.  And of course snapshots are hugely efficient: they take almost no time to create, store only the compressed incremental block changes, and cause no degradation to the running production instance, allowing DBAs to provide rapid data recovery options.

 

One of the best bits is that there are no additional license costs! The new Nimble Linux Toolkit (NLT) is available to Nimble Storage customers free of charge, and of course the snapshot, restore, cloning and replication capabilities within Nimble controllers have always been integral!

 

It is suitable for customers who run Oracle on physical Linux servers directly connected to a Nimble array. In virtualized environments, the VMs must have direct access to the Nimble volumes, e.g. through iSCSI connections initiated by the guest.

In the latest instalment of the NimbleOS 4 - A Detailed Exploration blog series, I will be introducing you to the Quality of Service feature.

 

Quite often, you will see a NimbleOS feature that has developed through several releases. Quality of Service (QoS) is precisely one such feature!

 

Every storage array has shared resources, and governing access to those shared resources is critical to ensure no single workload is able to consume or 'hog' them to the detriment of all other applications.  If we consider a Nimble controller, the following are shared resources:

 

  • CPU/NVDIMM/Memory
  • Backend throughput (sequential access to the media - Flash or Disk)
  • Cache (applicable only in an Adaptive array)

 

For the last several NimbleOS releases, our Engineering team has been implementing fencing algorithms that stop any single workload from monopolising a shared resource. For example, in 2.2 we introduced CPU fair scheduling, in 2.3 we introduced Volume Pinning (Adaptive-only) and Disk (Bandwidth) fair scheduling, and more recently in NimbleOS 3 we introduced QoS-Auto, also known as Noisy Neighbour Avoidance, which is detailed in Dimitris Krekoukias' excellent blog: The Well-Behaved Storage System: Automatic Noisy Neighbor Avoidance.  Of course, none of these features can be directly 'managed'; they are all functionality that resides within NimbleOS to ensure the array (and its associated services) is self-healing and no single workload is starved. One of the simplest forms of management is something that requires zero management!  You could also argue that with some of the Nimble arrays the performance is so over-provisioned that managing quality of service or governance is seldom required.

 

The Use Cases for Quality of Service

Most implementations of QoS prioritise or govern access to a given resource that is maxed out, in order to decide who gets priority when things are busy; strictly speaking, this is Class of Service.  QoS-Limits is quite different, as it sets and maintains a utilisation ceiling regardless of the resources available.  Generically, I see three use cases where implementing Quality of Service is highly desirable:

 

Providing Only The Performance That is Needed (or Purchased)

This requirement is incredibly common in the Service Provider landscape, where there is a customer/tenant/application and a desire to restrict that application to a prescribed performance level based on the requirements or the service level that has been purchased.  Note: Noisy Neighbour Avoidance wouldn't cover this use case; if the performance were available it would be honoured, allowing the application or tenant to exceed their prescribed amount.

Limiting performance in this case allows the array to restrict performance to the desired level (regardless of what the array has available) and also offers the hosting provider the ability to sell a 'bursting' service, where more performance can optionally be made available for a period of time.  Fundamentally, it provides the control to limit a user to a prescribed level of performance.

 

Consistent Service Levels

Consider an environment where a brand new controller has been deployed.  The first application is deployed and has free rein to use all the resources available.  On day 1 the workload experience is fantastic, as the workload has unrivalled access; later, as more and more applications are deployed, the first application is competing with all the other applications for fair usage of the resources.  The perception of the first application owner is that performance has gradually worsened (as it no longer has unrivalled access), but in reality it is simply that it no longer has sole, dedicated use of the array.  Again, QoS would assist here by limiting the workload from day one to ensure the same level of performance was maintained regardless of what other workloads were hosted on the controller.

 

Service Introduction or 'Fear of the Unknown'

Quite often a user may not know how busy a specific workload may be, or what the impact of a change will be.  Of course QoS-Auto helps here, as it ensures no single workload dominates once it is introduced; however, QoS-Limits in this instance allows the admin to control the resource directly and cap what the application can consume, providing a fence for introducing new services in a staged and safe manner.  As QoS limits are applied dynamically, infrastructure admins can increase the performance as and when required once the service has been introduced.

 

What does QoS limit?

QoS-Limits allow a user to limit either the IOPS or the MB/s performance of a specific workload.  Having the ability to limit both IOPS and MB/s is important, as quite often a single workload will have different peaks and troughs during the operational day.  For instance, an OLTP workload may be very latency sensitive to small block updates during the working day, when rows and tables are frequently being accessed or updated (this will tend to be very IOPS/latency sensitive); yet in the evening, when the same database may be receiving feeds from other systems (or performing bulk updates/analysis or index rebuilds), the same application will cease to be IOPS sensitive and will now be bandwidth (MB/s) sensitive.  In NimbleOS 4 a user can limit a workload by either IOPS or MB/s, and can also specify limits for both IOPS and MB/s.  If either limit is reached then the volume will be restricted accordingly.

 

QoS-Limits are completely dynamic, so as soon as a limit is set or unset it is enforced or lifted immediately.

 

 

At what level of granularity can QoS-Limits be applied?

QoS-Limits can be set either on an individual volume or on a Folder (a collection of volumes).  The concept of Folders was formally introduced in NimbleOS 3 - you can read more about them here in Nick Dyer's blog: NimbleOS 3 - Introduction To Folders. Essentially, a Folder could represent several volumes that make up an app (or an environment), or define a tenant or internal customer.

 

What is the impact of setting QoS-Limits?

Setting QoS-Limits clearly has the potential to limit IOPS/MB/s performance on the volume/folder; that is, after all, the nature of the feature.  It essentially limits performance by applying a delay to the IO, so that only the set amount of IOs are serviced in accordance with a virtual clock that is maintained for each object in NimbleOS.  If a QoS-Limit is in place and the Folder/Volume is exceeding its limited level of resources, then a delay is introduced to slow the volume down and cap its performance.  A simple analogy is a car on a motorway with several lanes: the motorway is free and the car would like to travel quickly to its destination, but the driver has engaged cruise control, which limits the accelerator to a pre-determined speed. The conditions exist to allow the car to go faster, but it is being artificially limited by the cruise control.  The side effect of this is that latency will clearly increase (in the same way the car in the analogy takes longer to reach its destination).  So don't be surprised if you set an IOPS QoS-Limit and see your latency increase!
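
To make the virtual clock idea concrete, here is a purely illustrative PowerShell sketch (not NimbleOS code) showing how pacing IOs to a fixed rate inevitably shows up as added latency; the 1,000 IOPS figure is just an assumption for the example.

# Illustrative only: pace "IOs" to a fixed rate using a virtual clock
$limitIops    = 1000                                        # assumed limit for the example
$interval     = [TimeSpan]::FromMilliseconds(1000.0 / $limitIops)
$virtualClock = [DateTime]::UtcNow

foreach ($io in 1..100) {
    $virtualClock = $virtualClock.Add($interval)            # earliest time this IO may be serviced
    $delay = $virtualClock - [DateTime]::UtcNow
    if ($delay -gt [TimeSpan]::Zero) {
        Start-Sleep -Milliseconds ([int]$delay.TotalMilliseconds)   # the extra latency the host observes
    }
    # ...service the IO here...
}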

 

How do I set QoS-Limit?

QoS-Limits can be set in several ways:

  • NimbleOS GUI
  • Command Line Interface
  • Scripted via API
  • VMWare vSphere Plugin and via VVOLS Policy

 

The NimbleOS GUI in 4.x has been redesigned; there is a great blog by Craig Sullivan here which details the new GUI (NimbleOS 4 - Next Generation HTML 5 GUI). Essentially, if you go to the Create Volume workflow (or the Edit Volume workflow) you will see the ability to set a QoS-Limit on the volume in the Performance tab. Here is an example:

 

NOS-GUI.jpg

 

The same can be accomplished via the Command Line Interface by setting the volume limit and then returning the volume QoS-Limit to unrestricted using the following command:

 

NOS-CLI.jpg

 

and finally within vCenter, if you're creating/editing a Datastore using the vCenter plugin:

 

Datastore.jpg

 

or if you're using Storage Policy Based Management with VVols.  In order to access this, from vCenter click on Home > VM Storage Policies > Select or Create your policy > Edit Settings > Rule-Set:

 

VVOL.jpg

 

As mentioned above, you can also set QoS-Limits using the API; full documentation is found on InfoSight. You will find API endpoints for the Volume and Folder objects. I will be posting a sample API script to set QoS on a volume later in the series.
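
Until that sample script is published, here is a hedged sketch of what a PowerShell call against the REST API might look like. The port, endpoint paths and especially the limit_iops attribute name are assumptions on my part; confirm them against the REST API reference on InfoSight (and note you may need to trust the array's self-signed certificate first).

$array = "myarray.nimble.com"
$cred  = Get-Credential

# Request a session token (endpoint and field names are assumptions - check the API reference)
$auth  = @{ data = @{ username = $cred.UserName
                      password = $cred.GetNetworkCredential().Password } } | ConvertTo-Json
$token = (Invoke-RestMethod -Method Post -Uri "https://${array}:5392/v1/tokens" `
            -Body $auth -ContentType 'application/json').data.session_token
$headers = @{ 'X-Auth-Token' = $token }

# Look up the volume id, then apply an assumed 1000 IOPS limit
$volId = (Invoke-RestMethod -Uri "https://${array}:5392/v1/volumes?name=newvol" -Headers $headers).data.id
$body  = @{ data = @{ limit_iops = 1000 } } | ConvertTo-Json
Invoke-RestMethod -Method Put -Uri "https://${array}:5392/v1/volumes/$volId" `
    -Headers $headers -Body $body -ContentType 'application/json'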

 

 

How do I know QoS-Limit is set?

The array performance graphs will show you when a QoS-Limit is set.  We wanted a visual representation so that anyone looking at performance can immediately see that a QoS-Limit might be at play.  When a QoS-Limit is set on a volume you will see an orange dashed line showing the level at which QoS is set. Here is an example, where I have just set both an IOPS and an MB/s QoS-Limit policy on the volume:

QOS.jpg

 

The orange lines represent the QoS-Limit setting, and one can see how the performance has dropped to meet/enforce that setting.

 

 

 

What's the right level of QoS to set?

Finally, a common question is: what should I set my volumes to?  What is a good value to use?  At Nimble, we always want to give you good guidance on when and how to use a feature.  Fortunately, the array and InfoSight allow us to be much more predictive than recommending you set a value and tweak it over time!

Firstly, the array itself will look at the past 24 hours' performance and give you guidance on what the max and peak IOPS and MB/s have been for that object.  You can see this when you go to set a QoS-Limit on the volume:

 

Recommend.jpg

 

We expect our data scientists to publish data soon on the typical IO levels we see per GiB for common workloads hosted on Nimble. You have got to love the power of InfoSight, the telemetry it produces, and the insight into the install base it provides.

 

 

What's the License Cost or Overhead for this Feature?

Come on, you know better than to ask this question.  As with all NimbleOS features, this feature is free to use once you have upgraded to NimbleOS 4.x.  There is no performance overhead on the controller from using this feature (although clearly, limiting performance on a volume or a folder will have a potential impact on the performance of that volume!)

 

Where can I see this in action?

I have posted two videos demonstrating Nimble QoS-Limit here:

 

Nimble QOS using the GUI

 

Youtube link: NimbleOS4: Setting QOS Limits using the NimbleOS GUI - YouTube

 

Nimble QOS using the CLI

 

 

Youtube link: NimbleOS4: Setting QOS Limits using the NimbleOS CLI - YouTube

 

 

Please post any comments, questions or queries below!

I’m pleased to kick off the first blog in the series NimbleOS4: A Detailed Exploration.

 

With the release of Nimble OS version 4, customers will notice a significant change to the look and feel when they first log in to their arrays after upgrading to NimbleOS4.

The Next Generation User Interface (NextGen UI) for Nimble Storage represents a big milestone. Having used it quite a lot myself over the last few months, I’ve come to really appreciate the work that went into it.

When developing the NextGen UI, Nimble had three primary goals: first, migrate to a modern UI framework; second, create a system to facilitate efficient design, implementation, and multi-platform support; and finally, remove the dependency on Adobe Flash. These changes will allow us to continue to develop and enhance the functionality moving forward and extend the interface to devices that previously weren’t supported.

When users log in, the first thing they will notice is the new login banner with a usage warning. This can be customized to meet each customer’s needs or disabled. It can also be configured to require a user agreement before connecting.

Once connected, users will be presented with a more compact and intuitive dashboard view. We have changed the main workspaces to better reflect how users interact with our arrays. This main view gives users a more complete picture of the health of their array in a single pane.

Under the Manage workspace, users can navigate and make changes to storage, protection, access, and performance configuration. One significant change is the summary view, which shows more information about each volume on the main screen. Users can move between tabs for space, performance, and protection to get more detailed information.

Accessing the other management functions no longer requires a drop-down menu selection; they are easily accessed by selecting the focus area on the main Manage screen. This makes it much easier to navigate to different workflows and tasks compared to the previous GUI.

The hardware workspace is also much easier to access. By clicking on Hardware on the top menu bar, users are immediately presented with a complete overview of their arrays. We find this much quicker than the previous approach of having to navigate to the Group (and wait for the graphs to load) and then drill down to each specific array. The visual representation is much more aligned to the physical array, making it easier to correlate event activity with the physical hardware platform. Events are easily accessed on the right-hand side of the screen and can be filtered based on severity. A convenient properties section gives complete information about the array.

The monitor workspace gives easy one-click access to each area without the need to go through the drop-down list. Users can quickly and easily navigate through capacity, performance, interfaces, replication, connections, and the audit log. Links to InfoSight are provided where applicable, and further detail based on application type and volume collection is also presented on the main screen.

The interaction of the workflows and the improved performance of the GUI are hard to articulate in a blog with pictures, so we have created some common workflow videos to demonstrate the changes to the Nimble OS user interface and help customers become familiar with the new framework.

 

NextGen UI Overview

Rapid Volume Provisioning

Data Protection

Events and Alarms

Hardware View

I thought I would write a quick blog to bring in the New Year! It's the time of year that we make New Year's resolutions, maybe to challenge ourselves to accomplish something: drink/eat less, exercise more, save more, or set a goal to complete a marathon, visit a place or achieve a qualification!   I thought I would dedicate this post to all the things you should check on your running Nimble array (whether it be an All-Flash or an Adaptive array) once it has been installed.   These are my best practices that should be checked after every install (whether it's been completed by yourself, a Partner Engineer or even a Nimble Engineer).

 

So here goes:


Check and Test Autosupport is enabled

What to check?

Is AutoSupport enabled within NimbleOS?  To check this, simply go to the management GUI and check Administration > Alerts and Monitoring > Autosupport & HTTP Proxy.  Next check that the Send AutoSupport Data to Nimble Storage Support checkbox is ticked, and then click Test AutoSupport Settings.  The test should come back all green, like the picture below:

 

Blog1.jpg

 

Top Tip: If the check comes back red it means one of the tests has failed.  Click the triangle icon to expand all the tests and verify which one failed.  If the failure is Ping from controller A | B IP failed, then this probably means that ICMP is being blocked by your firewall.  It doesn't actually cause too much of an issue if this fails, but if you want the test to pass green then set your firewall rules to allow outbound ICMP from the controllers' management IP address and the two diagnostic IP addresses, and retry the check.  The full list of firewall ports to be enabled is here.


Why is this important?

This check is really important to ensure alerts and AutoSupport telemetry are getting back to Nimble Support.   If this checkbox is not checked, or there is a failure on the verification, then Nimble Support will not be receiving any telemetry from your controllers and therefore InfoSight will also not work.



Check and Test Email Alerts are enabled

What to check?

This check is very similar to the AutoSupport check above, and ensures email alerts are configured.   To check this, simply go to the management GUI and check Administration > Alerts and Monitoring > Email Alerts.  Check that Send Event Data to Nimble Storage is ticked and then click the Test button at the bottom.


Blog2.jpg


After clicking Test the mailbox that is listed in the 'Send To' address should receive an email similar to the one below:

Blog3.jpg

Why is this important?

This check is really important as it's a safety mechanism: if AutoSupport were to fail for some reason (e.g. the firewall ports became blocked), it provides an alternative mechanism to get the alerts to Nimble and ensure any events on the array are known by Nimble Support.


Top Tip:  If the tests succeed and no mail arrives, double check your junk folder; if there is still no mail then check that your Exchange relay has been configured correctly.  The Knowledge Base article for this check is here.

 

Check the Physical Health of the Array (Interfaces, Disks, Fans etc)

What to check?

This is an obvious check, but one well worth making. Click on Manage > Arrays > select your array serial number at the bottom and check all the onboard components to ensure everything is green. Any red means there is a problem (this will also be shown on the events page).


Your view should look similar to this AFA:


Blog4.jpg


Ideally all of your onboard ports (eth1/eth2) and data ports (tg1-tg4) should be lit green.  Any red here means those ports are down and not connected.

The array above is a lab system so it's not fully cabled.  In a production environment every port should be green; if it's not, please talk to your Partner and/or Nimble SE to understand why and what the implications are.

 

Top Tip: Mousing over the ports/disks/fans will show more information and physically identify the port, including its MAC address and negotiated speed:

 

Blog5.jpg

 

Why is this important?

It ensures that your controllers are fully resilient and there have been no failures in shipping.  Any failures can be rectified before live services are migrated, and of course a fully resilient system will provide peace of mind as you start to migrate your applications to their new home.



Asset Address is correct in Infosight

What to check?

Again, this can be a fairly obvious check but it is often overlooked.  Log in to InfoSight and click Administration > Asset Registry


You should see a row for each controller that is registered to your company (similar to below):


Blog6.jpg


Check that the install address is correct and most importantly check that the RMA Part Delivery address is correct.


Why is this important?

The Install address is where we will send the Engineer to site (assuming you have purchased 4Hr OnSite Engineering Assistance).  The RMA Part Delivery address is the location to which we will send any replacement parts.  There are many customers that have a central IT function but have their arrays installed in different locations, and may want their parts delivered to central IT rather than the install address.  This makes sure we get it right first time and there is no confusion should any replacement parts be needed in the future.

 

Top Tip: You can verify the RMA Part Delivery address by clicking confirm.  Also, if the address is incorrect or you physically move the array, you can update the array's location here and also provide instructions on when each site is attended and parts can be delivered.



InfoSight is receiving data and VMVision is configured (currently VMWare only customers)

What to check?

If you have set up AutoSupport correctly (as described in the first step) then the array should be sending its telemetry to Nimble Support.

You can check whether AutoSupport heartbeats are being received by checking the Asset Dashboard in InfoSight.   Go to InfoSight and click on Reports > Asset Report.

 

Each array will be shown as per below:

 

Blog7.jpg

Notice the icons in the bottom left-hand corner.  Each will tell you whether AutoSupports are being received (these can take up to 24 hours), whether heartbeats are being received, and also the support contract for the array (and when it expires).

 

Blog8.jpgBlog9.jpg

 


Quite often we will see installs where VMVision hasn't been configured.  If you are not sure what VMVision is, then please check out my blog here; it basically provides per-VM monitoring to VMware customers, and despite the value it provides it costs nothing and requires no software to be installed.  There are two steps to ensuring VMVision is configured:


Ensure the VMPlugin has been installed / registered within VMWare.  This is achieved by selecting Administration > VMWare Integration from the Array GUI and providing your VMWare vCenter credentials:

Blog10.jpg


Once the plugin has been registered, the second step requires you to Enable VM Vision, to provide Nimble permission to collect and display this information.   This is achieved by heading to Infosight and clicking Manage > VMVision.  If you see a screen similar to the one below, then VMVision hasn't been enabled:


Blog10.jpg


Enable VMVision by simply clicking the Enable VMVision hyperlink, clicking Configure and then Enabling VM Streaming Data for the vCenter Instance:


Blog11.jpg


Why is this important?

Enabling InfoSight and ensuring it is streaming data allows you to enable the predictive and proactive monitoring that InfoSight supports.   VMVision allows you to manage the end-to-end infrastructure and spot cross-stack root-cause problems quickly and efficiently.  There is also no license or cost associated with either InfoSight or VMVision.   Hopefully the data and foresight they provide will make you look like a hero to your boss and peers alike!

 

 

Ensure the Support number is to hand

What to check?

In 25 years of being in this industry, one best practice I have picked up is to have the Support number handy for all mission-critical products; you'll find the Nimble Storage global numbers here (they are manned 24 x 7 x 365).

 

Why is this important ?

It is a common best practice to follow a process and call Support to ensure that you've been through the process (rather than the first call into support being at a time of need and stress).


Top Tip: a good check is to enable the support tunnel (the checkbox underneath enabling AutoSupport) and then call into Support to check they can contact the array okay (this also confirms that Support can remotely connect to the array to provide world-class support when it's needed).




Ensure InfoSight Events are being reviewed

What to check?

It's important to frequently review your infrastructure to ensure performance, capacity and availability aren't likely to be compromised in the near future (a watched pot doesn't boil over).   Fortunately, this is relatively easy and straightforward with InfoSight.

Set aside a time to check the following:


Events Page (InfoSight > Wellness Tab)

 

Each row is a discrete event and details what requires your attention.  Expanding the event shows more detail on the resolution and what is required.  The events should be reviewed, acknowledged and actioned.  There is no need to contact Nimble Support unless you need assistance with the resolution.

Over 90% of our cases are identified automatically using InfoSight and over 80% of cases are resolved simply by following the actions below.

Blog12.jpg


Capacity (InfoSight > Reports > Capacity Report)

This view allows you to monitor the capacity of each controller and to predict when capacity will be breached (based on historic growth trends).   InfoSight will alert when the capacity reaches 90%, as at 95% the lack of space will start to impact performance.

 

Blog13.jpg

 

Performance (InfoSight > Reports > Performance Report)

This view allows you to monitor the performance of each controller and to predict when performance will be breached (based on historic growth trends).   In an Adaptive array there are two trends to monitor: CPU (which equates to available IOPS) and Cache (which equates to predictable read latency).  In an All-Flash array the only metric is CPU (available IOPS).

 

Blog14.jpg

 

Should any of the graphs be sustaining red regions then please contact your Partner SE and/or Nimble SE to discuss the performance characteristics in more detail.

 

Top Tip: Don't be alarmed if the array alerts in the first couple of weeks that it's running out of space or performance.  It's quite common for data migration to the array to be an intensive data-processing load, and in addition the sudden increase in data capacity can fool the heuristics into predicting the array is running out of capacity, performance (CPU) and cache.  Give InfoSight a week or two to normalise.



Running the latest GA Code

What to check?

By running the latest General Availability (GA) code you will automatically be running the code Nimble Support recommends for mission-critical systems. NimbleOS is only awarded General Availability status when tough and rigorous criteria are met with regard to uptime, critical bugs and run time in the field.  Running the latest code also ensures you have the latest features available and the latest patch fixes.


You can check whether your array is running the latest code by clicking the Nimble OS version in any of the InfoSight graphs, or by running Administration > Software Update > Check Software in the Nimble GUI.  InfoSight will also give you the valid install paths to get to the latest code (if several steps are required):


Blog15.jpg


As all Nimble code upgrades are non-disruptive, you should be able to upgrade to the latest GA release with zero disruption by following the Software Update process in the Nimble GUI.


If, when checking the software upgrade, there is a red mark next to the code release, that is an indication that Support have black-listed your array to prevent a potential problem.  If you see this, we would recommend placing a call to Nimble Support to understand why the array has been black-listed.


Top Tip: Before migrating applications to the array, I would also recommend performing a code upgrade while running a workload generation tool, to satisfy yourself that code updates are truly non-disruptive.  This should be part of any customer's commissioning test plan.


Why is this important ?

It ensures that your array is running the most mission-critical and stable code available and is a mainstay of Nimble to achieve 99.9999% availability.  You also get the benefit of getting all the latest code features and optimisations!



Ensure Nimble Connection Manager is installed (Windows, Linux and VMWare)

Nimble Connection Manager is a host-side utility that handles path management and connection management (in Windows, Linux and VMware environments).   It is an essential piece of software when scale-out groups are deployed, but even with a single array it allows the provisioning of volumes and path management to be accomplished in the most efficient and stable manner, in that it ensures a number of best practices are set automatically rather than relying on users to remember to set options manually.   There is a much more in-depth deep dive into the functions of Nimble Connection Manager at these links - Linux, Windows and VMware.


Why is this important ?

This ensures that your hosts are fully optimised to use and integrate with Nimble Storage, which in turn leads to a much more stable and easier environment to manage.

 

 

Finally, are you Thrilled to be a Reference Customer? 

What to check?

At Nimble, we take huge pride in making every customer thrilled to be a reference.  If you're not happy with your setup then please let your Nimble SE know.   We aim to make every customer exceptionally happy!


Top Tip: Did you know that if you recommend Nimble to a friend or a colleague (for an opportunity that was previously unknown to Nimble), and they become a Nimble customer too, we will reward your referral with a free gift?  Please consider introducing us; the referral page is here.

 

Why is this important ?

We want our customers to be successful; ultimately, looking after our customers will ensure our business thrives.  This philosophy has served us well since Nimble's inception in 2008!

 

 

Finally, I have attached a quick checklist here which covers each of these items.  Please ensure it's checked after every install!

 

If you have further items that you feel should be added to this list, please let me know and I will add them!

 

Thanks, Happy New Year and I hope you manage to stick to all your New Year's Resolutions!!!

It’s tough to beat PowerShell for automating administrative tasks, or simply providing an elegant and powerful command line based alternative to the Nimble Web UI. With the introduction of NimbleOS 2.3, we made the Nimble REST API available for those interested in building scripts or software that integrated with our arrays. While it’s possible to use PowerShell to invoke the REST API (I blogged about exactly that HERE), there’s admittedly a learning curve to picking up the nuances of working with REST if all you really want to do is execute some cmdlets to help in administering your Nimble environment.

 

With that in mind, we are pleased to announce the availability of the Nimble PowerShell Toolkit 1.0.  It is available for immediate download via InfoSight.  You can get it from this page.

 

The toolkit makes the functionality we’ve exposed through the API easier to consume via native PowerShell cmdlets. Let’s look at the prerequisites, install the toolkit, see what's included, and walk through some examples.

 

Prerequisites

Host OS

 

Essentially, the toolkit requires PowerShell version 3 or higher. So basically whether you're running Windows Server 2012, 2012R2, or Windows 7/8/10, just make sure your PowerShell version is 3 or greater. You can check your version by inspecting the $PSVersionTable variable and noting the value for PSVersion:

 

PS C:\> $PSVersionTable

 

Name                          Value

----                          -----

PSVersion                      3.0

WSManStackVersion              3.0

SerializationVersion           1.1.0.1

CLRVersion                     4.0.30319.42000ee

BuildVersion                   6.2.9200.16398

PSCompatibleVersions           {1.0, 2.0, 3.0}

PSRemotingProtocolVersion      2.2

NimbleOS

 

As mentioned earlier, the Nimble PowerShell Toolkit is built on the REST API and as such it requires a NimbleOS version which provides this API.  That means NimbleOS 2.3.x at a minimum.  NimbleOS 3.x is also supported.

 

Installation

 

The toolkit is downloaded as a zip file.  Once you have it on your system, installation is a simple matter of unzipping the contents of the file and placing them into the appropriate location.

 

To install for all users, copy the NimblePowerShellToolkit folder into the C:\Windows\system32\WindowsPowerShell\v1.0\Modules directory.

 

To install for the current user only, copy the NimblePowerShellToolkit folder into $env:HOMEDRIVE\users\<user>\Documents\WindowsPowerShell\Modules
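
For example, assuming the zip was extracted to C:\Temp, the copy for either option might look like this sketch (adjust the source path to wherever you actually unzipped the toolkit):

# All-users install
Copy-Item -Path 'C:\Temp\NimblePowerShellToolkit' `
          -Destination "$env:SystemRoot\System32\WindowsPowerShell\v1.0\Modules" -Recurse

# Current user only ($env:USERPROFILE resolves to the per-user Documents path described above)
Copy-Item -Path 'C:\Temp\NimblePowerShellToolkit' `
          -Destination "$env:USERPROFILE\Documents\WindowsPowerShell\Modules" -Recurse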

 

Cmdlets

 

The following cmdlets are available as part of the toolkit. The readme file included with the download goes into a bit more detail describing the purpose of each cmdlet, so I won't reproduce that here.  Additionally, most of the cmdlets provide example usage within the help file.

 

Connect-NSGroup

Disconnect-NSGroup

Get-NSAccessControlRecord

Get-NSArray

Get-NSAuditLog

Get-NSChapUser

Get-NSFibreChannelPort

Get-NSGroup

Get-NSInitiator

Get-NSInitiatorGroup

Get-NSJob

Get-NSNetworkConfig

Get-NSNetworkInterface

Get-NSPerformancePolicy

Get-NSPool

Get-NSProtectionSchedule

Get-NSProtectionTemplate

Get-NSReplicationPartner

Get-NSRole

Get-NSSnapshot

Get-NSSnapshotCksum

Get-NSSnapshotCollection

Get-NSSoftwareVersion

Get-NSSubnet

Get-NSToken

Get-NSUser

Get-NSVolume

Get-NSVolumeCollection

Get-NSVolumeFamily

Invoke-NSVolumeCollectionDemote

Invoke-NSVolumeCollectionPromote

Merge-NSPool

New-NSAccessControlRecord

New-NSChapUser

New-NSClone

New-NSDebugLog

New-NSInitiator

New-NSInitiatorGroup

New-NSNetworkConfig

New-NSPerformancePolicy

New-NSPool

New-NSProtectionSchedule

New-NSProtectionTemplate

New-NSReplicationPartner

New-NSSnapshot

New-NSSnapshotCollection

New-NSToken

New-NSUser

New-NSVolume

New-NSVolumeCollection

Remove-NSAccessControlRecord

Remove-NSChapUser

Remove-NSInitiator

Remove-NSInitiatorGroup

Remove-NSNetworkConfig

Remove-NSPerformancePolicy

Remove-NSPool

Remove-NSProtectionSchedule

Remove-NSProtectionTemplate

Remove-NSReplicationPartner

Remove-NSSnapshot

Remove-NSSnapshotCollection

Remove-NSToken

Remove-NSUser

Remove-NSVolume

Remove-NSVolumeCollection

Restore-NSVolume

Set-NSChapUser

Set-NSInitiatorGroup

Set-NSNetworkConfig

Set-NSPerformancePolicy

Set-NSPool

Set-NSProtectionSchedule

Set-NSProtectionTemplate

Set-NSReplicationPartner

Set-NSSnapshot

Set-NSSnapshotCollection

Set-NSUser

Set-NSVolume

Set-NSVolumeCollection

 

Examples

 

Connecting to an array/group

 

First we'll import the NimblePowerShellToolKit module, then connect to an array so that we can run further commands.  The Connect-NSGroup cmdlet takes a PSCredential object as an argument.  If you forget to provide one, it will prompt you for one:

 


PS C:\Users\jcates> ipmo NimblePowerShellToolKit

PS C:\Users\jcates> $cred = Get-Credential

PS C:\Users\jcates> Connect-NSGroup -group myarray.nimble.com -credential $cred


 

Group        : myarray

ManagementIP : 1.2.3.4

User         : admin

Model        : AF7000

SerialNo     : AC-123456

Version      : 3.3.0.0-346604-opt


 

 

Create a Volume

 

Here we'll create a very simple volume.  It's possible to specify many more options to the command, but we'll accept the defaults for now.  Feel free to experiment!

 


PS C:\Users\jcates> New-NSVolume -name newvol -size 10240

 

access_control_records        :

agent_type                    : none

app_category                  : Other

app_uuid                      :

base_snap_id                  :

base_snap_name                :

block_size                    : 4096

cache_needed_for_pin          : 10737418240

cache_pinned                  : False

cache_policy                  : normal

caching_enabled               :

clone                         : False

creation_time                 : 4/29/2016 1:55:07 PM

dedupe_enabled                : False

description                   :

dest_pool_id                  :

dest_pool_name                :

encryption_cipher             : none

fc_sessions                   :

folder_id                     :

folder_name                   :

full_name                     :

id                            : 06239f8bea1874ff06000000000000000000000095

iscsi_sessions                :

last_modified                 : 4/29/2016 1:55:07 PM

limit                         : 100

metadata                      :

move_aborting                 : False

move_bytes_migrated           : 0

move_bytes_remaining          : 0

move_est_compl_time           : 0

move_start_time               : 0

multi_initiator               : True

name                          : newvol

num_connections               : 0

num_fc_connections            : 0

num_iscsi_connections         : 0

num_snaps                     : 0

offline_reason                :

online                        : True

online_snaps                  :

owned_by_group                : group-rtp-afa8

owned_by_group_id             : 00239f8bea1874ff06000000000000000000000001

parent_vol_id                 :

parent_vol_name               :

perfpolicy_id                 : 03239f8bea1874ff06000000000000000000000001

perfpolicy_name               : default

pinned_cache_size             : 0

pool_id                       : 0a239f8bea1874ff06000000000000000000000001

pool_name                     : default

previously_deduped            : False

projected_num_snaps           : 0

read_only                     : False

reserve                       : 0

search_name                   : newvol

serial_number                 : 38129d49930fc4d46c9ce900ec74eb3b

size                          : 10240

snap_limit                    : 9223372036854775807

snap_reserve                  : 0

snap_usage_compressed_bytes   : 0

snap_usage_populated_bytes    : 0

snap_usage_uncompressed_bytes : 0

snap_warn_level               : 0

target_name                   :

thinly_provisioned            : True

upstream_cache_pinned         : False

usage_valid                   : True

vol_state                     : online

vol_usage_compressed_bytes    : 0

vol_usage_uncompressed_bytes  : 0

volcoll_id                    :

volcoll_name                  :

vpd_ieee0                     : 38129d49930fc4d4

vpd_ieee1                     : 6c9ce900ec74eb3b

vpd_t10                       : Nimble  38129d49930fc4d46c9ce900ec74eb3b

warn_level                    : 80


 

Wow, volumes sure do have a lot of attributes!  It would be nice to focus on only the most important attributes, but still have access to all of them if and when we need them.  The Nimble PS toolkit has been designed to be configurable in this regard.  You may have noticed an included JSON file when you installed the toolkit.  This file determines which fields will be displayed by default and which fields will be hidden (but still part of the resulting PS object). You can edit this JSON file to customize the output for your environment, and the changes will take effect once the module has been unloaded/reloaded.  Let's take a look at an example of these in action.
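
If you do edit the JSON file, reloading the module to pick up the change is as simple as this minimal sketch:

# Unload and reload the module so the edited JSON display settings take effect
Remove-Module NimblePowerShellToolKit
Import-Module NimblePowerShellToolKit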

 

Viewing Volumes

 


PS C:\Users\jcates> $myvols = Get-NSVolume -name newvol


PS C:\Users\jcates> $myvols

 

name            : newvol

size            : 10240

perfpolicy_name : default

vol_state       : online

iscsi_sessions  :

num_connections : 0

pool_name       : default

volcoll_name    :

target_name     : iqn.2007-11.com.nimblestorage:newvol-v18609ac9edd6a7e6.00000932.f5b63d2f

agent_type      : none

 

 

PS C:\Users\jcates> $myvols | select *

 

access_control_records        :

agent_type                    : none

app_uuid                      :

base_snap_id                  :

base_snap_name                :

block_size                    : 4096

cache_needed_for_pin          : 10737418240

cache_pinned                  : False

cache_policy                  : normal

caching_enabled               : True

clone                         : False

creation_time                 : 4/29/2016 2:03:22 PM

description                   :

dest_pool_id                  :

dest_pool_name                :

encryption_cipher             : none

fc_sessions                   :

full_name                     :

id                            : 0618609ac9edd6a7e6000000000000000000000932

iscsi_sessions                :

last_modified                 : 4/29/2016 2:03:22 PM

limit                         : 100

metadata                      :

move_aborting                 : False

move_bytes_migrated           : 0

move_bytes_remaining          : 0

move_start_time               : 0

multi_initiator               : False

name                          : newvol

num_connections               : 0

num_fc_connections            : 0

num_iscsi_connections         : 0

num_snaps                     : 0

offline_reason                :

online                        : True

online_snaps                  :

owned_by_group                : mktg-cs02

parent_vol_id                 :

parent_vol_name               :

perfpolicy_id                 : 0318609ac9edd6a7e6000000000000000000000001

perfpolicy_name               : default

pinned_cache_size             : 0

pool_id                       : 0a18609ac9edd6a7e6000000000000000000000001

pool_name                     : default

projected_num_snaps           : 0

read_only                     : False

reserve                       : 0

search_name                   : newvol

serial_number                 : 1b3023950c91c8056c9ce9002f3db6f5

size                          : 10240

snap_limit                    : 9223372036854775807

snap_reserve                  : 0

snap_usage_compressed_bytes   : 0

snap_usage_populated_bytes    : 0

snap_usage_uncompressed_bytes : 0

snap_warn_level               : 0

target_name                   : iqn.2007-11.com.nimblestorage:newvol-v18609ac9edd6a7e6.00000932.f5b63d2f

thinly_provisioned            : True

upstream_cache_pinned         : False

usage_valid                   : True

vol_state                     : online

vol_usage_compressed_bytes    : 0

vol_usage_uncompressed_bytes  : 0

volcoll_id                    :

volcoll_name                  :

warn_level                    : 80

 

Alright, as you can see the object still has all of the fields, but will only display the hidden fields if you tell it to. Cool!

 

Create a Snapshot

 

This one is a little more nuanced. The underlying API requires a volume id, so we need to provide that to the New-NSSnapshot cmdlet instead of a volume name.  No problem, though.  We can grab the volume id from Get-NSVolume.

 


PS C:\Users\jcates> New-NSSnapshot -name mysnapshot -vol_id $(Get-NSVolume -name newvol).id

 

access_control_records      :

app_uuid                    :

creation_time               : 4/29/2016 3:04:06 PM

description                 :

id                          : 0418609ac9edd6a7e60000000000000a000005211c

is_replica                  : False

is_unmanaged                : True

last_modified               : 4/29/2016 3:04:06 PM

metadata                    :

name                        : mysnapshot

new_data_compressed_bytes   : 0

new_data_uncompressed_bytes : 0

new_data_valid              : False

offline_reason              : user

online                      : False

origin_name                 : myarray

replication_status          :

schedule_name               :

serial_number               : 581dea4c5692478a6c9ce9002f3db6f5

size                        : 10737418240

snap_collection_id          : 0518609ac9edd6a7e600000000000000000005211c

snap_collection_name        : 113671168983149651921554878598198852288510378691027866640113

target_name                 : iqn.2007-11.com.nimblestorage:newvol-mysnapshot-v18609ac9edd6a7e6.00000932.f5b63d2f.s18609ac9edd6a7e6.00000a00.0005211c

vol_id                      : 0618609ac9edd6a7e6000000000000000000000932

vol_name                    : newvol

writable                    : False

 

 

Get a List of Snapshots

 

The one thing you need to be aware of with Get-NSSnapshot is that, like the API, it requires a volume to be specified.  (If you really need ALL of the snapshots on the array, a foreach loop through each of the volumes should do the trick; see the sketch further down.)  Here we can specify the volume by name.

 


PS C:\Users\jcates> Get-NSSnapshot -vol_name newvol

 

name                                                                                                id

----                                                                                                --

mysnapshot                                                                                          0418609ac9edd6a7e60000000000000a000005211c
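
Before we move on, here is the foreach approach mentioned above as a minimal sketch (assuming Get-NSVolume with no arguments returns every volume in the group):

# List snapshots for every volume on the group
foreach ($vol in Get-NSVolume) {
    Get-NSSnapshot -vol_name $vol.name
}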

If we want to dig into the details of each snapshot, we can query by snapshot id and select all of the hidden fields.

 


PS C:\Users\jcates> Get-NSSnapshot -id 0418609ac9edd6a7e60000000000000a000005211c | select *

 

access_control_records      :

app_uuid                    :

creation_time               : 4/29/2016 3:04:06 PM

description                 :

id                          : 0418609ac9edd6a7e60000000000000a000005211c

is_replica                  : False

is_unmanaged                : True

last_modified               : 4/29/2016 3:10:22 PM

metadata                    :

name                        : mysnapshot

new_data_compressed_bytes   : 0

new_data_uncompressed_bytes : 0

new_data_valid              : True

offline_reason              : user

online                      : False

origin_name                 : mktg-cs02

replication_status          :

schedule_name               :

serial_number               : 581dea4c5692478a6c9ce9002f3db6f5

size                        : 10737418240

snap_collection_id          : 0518609ac9edd6a7e600000000000000000005211c

snap_collection_name        : 113671168983149651921554878598198852288510378691027866640113

target_name                 : iqn.2007-11.com.nimblestorage:newvol-mysnapshot-v18609ac9edd6a7e6.00000932.f5b63d2f.s18609ac9edd6a7e6.00000a00.0005211c

vol_id                      : 0618609ac9edd6a7e6000000000000000000000932

vol_name                    : newvol

writable                    : False

Clone a Volume

 

Now that we have a snapshot in place, we could create a Nimble Zero-Copy Clone volume from it.

 


PS C:\Users\jcates> New-NSClone -name myclone -base_snap_id 0418609ac9edd6a7e60000000000000a000005211c -clone $true

 

access_control_records        :

agent_type                    : none

app_uuid                      :

base_snap_id                  : 0418609ac9edd6a7e60000000000000a000005211c

base_snap_name                : mysnapshot

block_size                    : 4096

cache_needed_for_pin          : 21474836480

cache_pinned                  : False

cache_policy                  : normal

caching_enabled               : True

clone                         : True

creation_time                 : 4/29/2016 3:24:42 PM

description                   :

dest_pool_id                  :

dest_pool_name                :

encryption_cipher             : none

fc_sessions                   :

full_name                     :

id                            : 0618609ac9edd6a7e6000000000000000000000933

iscsi_sessions                :

last_modified                 : 4/29/2016 3:24:42 PM

limit                         : 100

metadata                      :

move_aborting                 : False

move_bytes_migrated           : 0

move_bytes_remaining          : 0

move_start_time               : 0

multi_initiator               : False

name                          : myclone

num_connections               : 0

num_fc_connections            : 0

num_iscsi_connections         : 0

num_snaps                     : 0

offline_reason                : user

online                        : False

online_snaps                  :

owned_by_group                : myarray

parent_vol_id                 : 0618609ac9edd6a7e6000000000000000000000932

parent_vol_name               : newvol

perfpolicy_id                 : 0318609ac9edd6a7e6000000000000000000000001

perfpolicy_name               : default

pinned_cache_size             : 0

pool_id                       : 0a18609ac9edd6a7e6000000000000000000000001

pool_name                     : default

projected_num_snaps           : 0

read_only                     : False

reserve                       : 0

search_name                   : myclone

serial_number                 : 7e527c1b273f93ee6c9ce9002f3db6f5

size                          : 10240

snap_limit                    : 9223372036854775807

snap_reserve                  : 0

snap_usage_compressed_bytes   : 0

snap_usage_populated_bytes    : 0

snap_usage_uncompressed_bytes : 0

snap_warn_level               : 0

target_name                   : iqn.2007-11.com.nimblestorage:myclone-v18609ac9edd6a7e6.00000933.f5b63d2f

thinly_provisioned            : True

upstream_cache_pinned         : False

usage_valid                   : True

vol_state                     : offline

vol_usage_compressed_bytes    : 0

vol_usage_uncompressed_bytes  : 0

volcoll_id                    :

volcoll_name                  :

warn_level                    : 80

 

Delete a Volume

 

Once a volume has served its purpose, we can delete it. We first need to set it offline with the Set-NSVolume cmdlet, then we can delete it from the array with Remove-NSVolume.

 

 

PS C:\Users\jcates> Set-NSVolume -name myclone -online $false

PS C:\Users\jcates> Remove-NSVolume -name myclone

 

Once again, we have merely scratched the surface of what's possible through PowerShell.  I encourage you to download the toolkit and give it a spin.  As always, we'd love to see the cool things you create with it.  Feel free to share them here on Nimble Connect!
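
To finish, here is a short end-to-end sketch stringing together the cmdlets covered above. The volume and snapshot names are just placeholders, and I'm assuming Set-NSVolume -online $true brings the clone online, mirroring the offline example earlier.

# Snapshot an existing volume, clone from the snapshot, then bring the clone online
Import-Module NimblePowerShellToolKit
Connect-NSGroup -group myarray.nimble.com -credential (Get-Credential)

$vol  = Get-NSVolume -name newvol
$snap = New-NSSnapshot -name nightly-snap -vol_id $vol.id
New-NSClone -name newvol-clone -base_snap_id $snap.id -clone $true
Set-NSVolume -name newvol-clone -online $true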

Do you keep your Windows hosts and VMs updated on a regular basis? Do you worry about data integrity issues and data loss? If you need to keep your systems up and running, then you should know about the Nimble Hotfix Monitor service, which helps application uptime and safeguards data.

 

As a Windows IT administrator, you download hotfixes and service packs to keep your Windows-based servers up and running. These patches, Windows service packs, and hotfixes are updates to Windows operating systems that resolve known issues or provide workarounds. Moreover, service packs update hosts or VMs to the most current code base. Being on the current code base is important because that's where Microsoft focuses on fixing problems.  Microsoft releases a number of critical hotfixes every month. However, there are countless combinations of hardware and software for any given Windows host or VM.  Many hotfixes patch issues that are not relevant to your installation, causing needless installs and reboots. There has been no easy way to find the latest hotfixes that apply to a given set of Windows storage components, or to get notifications about them. This makes it very hard for storage and Windows teams to self-diagnose issues and keep hosts and VMs running without downtime.
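
For comparison, the only view Windows gives you out of the box is a list of what is already installed, for example via the built-in Get-HotFix cmdlet (the KB number below is just a placeholder):

# List the ten most recently installed updates on the local host
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 10

# Check whether one specific update is present (placeholder KB number)
Get-HotFix -Id KB3000850 -ErrorAction SilentlyContinue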

 

At Nimble, we’ve developed a “hotfix awareness system” to make sure our customers stay informed about hotfixes. Our system balances IT’s need for security and performance with the company’s need for access and reliability.

 

The Nimble Windows Toolkit contains the Nimble Hotfix Monitor service, which monitors for the presence of storage hotfixes our customers might need. This is quite a useful tool when you need to check whether a particular hotfix is required.

 

  • Deploying every must-fix hotfix reduces your downtime.
  • Deploying non-essential hotfixes as they come up leads to unnecessary installs and reboots.
  • Some admins prefer to queue up their non-essential hotfixes so they can all be installed at once.


The Nimble Hotfix Monitor service reliably tells users whether their Windows Server systems have applied the Microsoft hotfixes that address issues known to cause data integrity problems or system outages. It also checks for missing hotfixes during installation and provides a report of what is missing. These reports are customized to a user’s particular environment: no more patching problems that you don’t actually have.

 

It is a non-intrusive service, as it understands that certain hotfixes require a reboot and that customers will need to schedule downtime. It can monitor, collect, and report the required hotfixes for admins to download from Microsoft on their schedule.
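
 

To get a feel for what the service automates, here is a rough, hand-rolled equivalent using the built-in Get-HotFix cmdlet. The KB numbers below are placeholders rather than Nimble recommendations; the real service drives its checks from the XML published on InfoSight.

 

# Placeholder list of required hotfixes (not an actual Nimble recommendation)
PS C:\> $required = 'KB3000850', 'KB3046101'
# Compare against what the host reports as installed
PS C:\> $installed = (Get-HotFix).HotFixID
PS C:\> $required | Where-Object { $installed -notcontains $_ } | ForEach-Object { Write-Warning "Missing hotfix: $_" }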

 

We built the Nimble Hotfix Monitor to help our customers keep track of Microsoft and Nimble recommended patches. But we didn’t stop there. We also considered that our customers might like to stay aware of patches from other vendors' components running on their servers, so the service provides a framework for additional hotfix sources and hotfixes to be added and checked (for example, vendor FC HBAs and drivers). If your software vendor supports Nimble, you can spend less time looking for patches or reading update bulletins.

 

  • The Hotfix Monitor service is compatible with Windows Server 2008 R2 SP1 onwards (Windows 2008 R2, 2012, or 2012 R2).
  • The service is data driven: the hotfix definitions live in an XML file that can be pushed independently of the framework that checks for them.
  • The service checks for hotfixes and logs Event Log warnings for any that are missing (messages your systems management software would pick up), so that admins can take action.

 

Hotfix Monitor Service.jpg

 

The latest Nimble-certified storage hotfixes are available on Nimble's InfoSight, where the hotfix XML file is updated and kept for users to download. The Nimble Windows Toolkit installer, which installs the Hotfix Monitor service, also checks critical system timeout values and tunes them to minimize host application downtime.


If you are a Nimble customer who feels overwhelmed by critical Windows alerts, patches, and update notifications, I highly encourage you to make use of this feature to improve your data availability. If you are not a Nimble customer, please contact info@nimblestorage.com to learn how to get storage-related hotfixes.

Hi everyone,

 

So last week I set about configuring my CS400 array that runs NimbleOS 3.1.x to integrate with my AD server, and allow AD user login to the array.  I am by no means an AD expert, but I documented the process...

 

Log into the NimbleOS 3.1 Web GUI.  Go to the Administration drop down and select Security, then Microsoft Active Directory.  Fill in AD server details, account credentials and system name.

 

1.jpg

 

You will see the following message.

2.jpg


And you should be joined to the domain.

3.jpg

4.jpg

 

On the AD server, create a group called “Nimble” (you can choose any group name; to separate users across different roles you could create NimbleAdmin, NimblePower, NimbleRO, and so on).

5.jpg

 

Add your user and the group “Domain Users” to the group you just created.

6.jpg
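
 

(If you prefer to script the AD side rather than click through Active Directory Users and Computers, a minimal sketch using Microsoft's ActiveDirectory PowerShell module would look like the following; the group and user names simply match my example.)

 

PS C:\> Import-Module ActiveDirectory
# Create the security group the array will map to an admin role
PS C:\> New-ADGroup -Name "Nimble" -GroupScope Global -GroupCategory Security
# Add my user and the Domain Users group to it
PS C:\> Add-ADGroupMember -Identity "Nimble" -Members "dave", "Domain Users"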

 

On the Nimble array, go to Administration, Security, Users and Groups. Create a group with the same name, Nimble, and give it administrator rights.


7.jpg

 

Log out of the Web GUI and then log back in as your domain user (in my case RUDDLE\dave).

That's it!

Dave

Nimble Storage introduced support for Microsoft System Center Virtual Machine Manager (SCVMM) in version 2.2 of NimbleOS, and enhanced that initial support with additional functionality in the 2.3 release. We covered both releases on Nimble Connect here and here.

Now with the introduction of NimbleOS 3.0, we have enhanced our support for SCVMM even further. I’d like to cover a few of the latest innovations here.

 

Fibre Channel Support


Until now, support for SCVMM has been for iSCSI arrays only. Our Fibre Channel array customers have been asking for it, we’ve told you that it’s coming, and now we’re pleased to announce that the much anticipated support for Fibre Channel arrays with SCVMM is here. Fibre Channel array customers can now register their arrays as SMI-S providers and manage them from directly within SCVMM just like their iSCSI counterparts. In fact, it’s even easier now with Fibre Channel. If you recall from the previous blog posts on SCVMM, since our iSCSI array is a dynamic target iSCSI array – meaning each volume has a unique target address and a LUN ID of 0 – Microsoft requires us to have at least one volume present when registering the SMI-S provider with SCVMM. We refer to this required volume as a “starter volume”. With Fibre Channel – where our arrays have fixed World Wide Port Names and dynamic LUN IDs – this just isn’t required. (More on how we’ve also simplified this for iSCSI later in the post.)
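
 

As a point of reference, registering the array's on-box SMI-S provider can also be scripted from the VMM PowerShell module rather than the console. The sketch below uses a placeholder management address, port, and Run As account name, and the exact parameters (including any provider-type switch) may vary between SCVMM versions, so treat it as a starting point rather than the definitive syntax.

 

# Run As account holding the array credentials (placeholder name)
PS C:\> $ra = Get-SCRunAsAccount -Name "NimbleArrayAdmin"
# Register the array's SMI-S provider over HTTPS (5989 is the usual SMI-S CIM-XML port)
PS C:\> Add-SCStorageProvider -Name "nimble-array" -RunAsAccount $ra -NetworkDeviceName "https://192.168.1.50" -TCPPort 5989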

 

Folder Support


Folders are a new construct available within NimbleOS 3.0, available in both Fibre Channel and iSCSI arrays. Think of them as logical storage containers that you can use to group volumes by function, put capacity limits in place, and define certain behaviors for volumes within the folders.  Let’s take a look at an example:

 

scvmm1.png

Here we are creating a new folder on the Nimble array. We have:

 

  1. Given the folder a name and optionally a description
  2. Defined a capacity limit which will be enforced for the volumes contained in the folder
  3. Specified that the folder will be used with SCVMM
  4. Specified the performance policy which will be used by volumes which are created within the folder

 

When we add this array to SCVMM we are presented with a list of folders and can pick which ones we want to bring under SCVMM control:

 

scvmm2.png

There’s another feature here that wasn’t available in the previous version. Notice the “Host Group” option. Previously, after the Add Storage Devices wizard finished, an additional step was required to allocate the storage pools to the various host groups before those hosts could use any existing LUNs or create new LUNs on the newly imported storage pools. Now that step is rolled into the wizard (not strictly a Nimble feature, but an SCVMM enhancement worth pointing out), and storage can be allocated to host groups at the time it is imported.

 

As you can see here, we’ve imported a couple of new folders and given them different classifications, while allocating them for use by all hosts. If we take a look at the classifications and pools view within SCVMM, we can see that the folders (along with any volumes which may already exist within them) have been assigned appropriately.

 

scvmm3.png

 

Note that a “folder” in Nimble parlance shows up as a “storage pool” within SCVMM.

 

Enhanced iSCSI Support


While the folder enhancements apply regardless of whether you have a Fibre Channel or iSCSI array, there’s one new feature in NimbleOS 3.0 that is particular to iSCSI arrays and folders which have been defined with a management type of SCVMM.

 

Recall the “starter volumes” mentioned above. Again, these are necessary for dynamic target iSCSI arrays. In prior versions of NimbleOS, these had to be created manually. And not just created manually within the Nimble GUI, but created manually on the command line with some very specific parameters added to accomplish what we need for SCVMM compatibility.

 

This process has just gotten a whole lot easier. While the need for a starter volume with iSCSI arrays hasn’t gone away, you no longer have to create it manually. In fact, YOU don’t have to create it at all. When you create a folder on an iSCSI array with a management type of SCVMM, NimbleOS 3.0 automatically creates the starter volume with all of the required settings:

 

scvmm4.png

 

As you can see here, a small 1 MB starter volume has been created for us and placed within the newly created folder, which will satisfy the requirement for dynamic-target arrays and SCVMM. Super easy, no command line required!

 

Conclusion


If you’re a Nimble storage customer using Microsoft SCVMM to manage your Hyper-V environment and want a single pane of glass to manage both storage and virtualization resources, this integration is for you. Now regardless of whether you have an iSCSI array or a Fibre Channel array, integrating with SCVMM is fast and simple. And like all Nimble Storage features, there’s no extra cost to it. The SMI-S provider runs directly on the array, so there’s no extra infrastructure to deploy. In fact, you have everything you need right out of the box. Give it a try!

The Nimble Storage CS235 is the newest member of the Nimble family. Although Nimble announced the CS235 back in May this year, there are some enhancements in Nimble OS 2.3 that have a positive impact on the possibilities of the system.

Let’s first look into what we announced initially.

The goal of the CS235 is twofold:

  • to fill the gap between the CS215 and CS300
  • to lower the entry-level for customers that want FC connectivity.


The CS235 gives us a better capacity-optimized price point in the +/- 20 TB capacity range.

blog1.png

Let’s have a look at the specifications:

  • Same CASL as all other members of the family, so same functionalities and capabilities
  • Performance up to 15,000 IOPS, in line with the CS2xx
  • Head capacity of 24 TB raw (12 disks of 2 TB in a 3U format)
  • Cache configurations of 640GB or 1200GB of Flash
  • Network connectivity: 1G, 10GBaseT, 10G SFP+, or FC
  • Expansion Shelves: support up to 3 disk shelves
  • No All Flash Shelf supported for the CS235
  • Hot upgrade to CS500, CS700 (no CS300): needs OS 2.3
  • Minimum Nimble OS 2.2.6

 

If we compare the new CS235 with the existing CS215, we see the following differences and commonalities:

blog2.png

It is important to note that the CS235 runs on newer hardware than the CS215, which means it benefits from new functionality introduced in OS 2.3, such as encryption of data.

For the techies: it runs two E5-2403 v2 (Ivy Bridge) CPUs and 2 cores per controller, with 4 GB of NVDIMM and 20 GB of DRAM.

 

Now let’s see how OS 2.3 has changed this:

 

Nimble OS 2.3 also delivers the new ES1-H90T disk shelf, which contains 15 drives of 6 TB for a total of 90 TB of raw capacity and comes with 1920 GB of flash. This new disk shelf is supported on all Nimble arrays except the CS210. The maximum number of shelves of this capacity is limited by the running Nimble OS: Nimble OS 2.2.6 only supports 2 of them on a CS235, a limitation that is lifted in Nimble OS 2.3.

Here is an overview:

blog3.png

As you can see in the list, the CS215 now also supports more than one disk shelf, giving customers the ability to consolidate a lot of storage with a limited amount of performance. It's a great solution for budget-constrained customers.

If you are using Nimble Storage with Windows servers, chances are you have the Nimble Windows Toolkit installed. It’s an integral part of connecting Nimble volumes to your Windows server, including components such as our MPIO DSM and Nimble Connection Manager.

 

Did you know that installing NWT also provides you with some useful PowerShell tools? Yep, it’s true, and there is a new cmdlet included that can be particularly helpful when connecting to clones of existing volumes. Using this cmdlet greatly simplifies the process and makes it so that the connection can be entirely automated. In this blog post, we’ll take a look at an example in which we create a clone from a VSS synchronized snapshot of a SQL database.

 

First, let’s inspect those PowerShell cmdlets.  In order to use them, you need to import the Nimble PowerShell module, which NWT installs at

C:\Program Files\Nimble Storage\bin\Nimble.Powershell.dll
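
 

To load it, you can import that DLL directly; the module name you see afterwards is simply the assembly name:

 

PS C:\> Import-Module "C:\Program Files\Nimble Storage\bin\Nimble.Powershell.dll"
PS C:\> Get-Command -Module Nimble.Powershell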

 

get-command.png

 

As you can see, there are four cmdlets included in this module. For this workflow, we’re going to focus on Get-NimVolume and Set-NimVolume.

 

To understand just how useful these cmdlets can be, it’s worthwhile to take a look at what the process would look like without them.

 

Let’s assume we’ve created and connected to our clone volume. That’s great, but we can’t quite use it yet. Why not, you ask? Well, let’s take a closer look at it. What does the Windows Disk Management tool have to say about it?

disk offline.png

 

OK, it’s offline. No big deal, we’ll just right-click on the disk and ask Windows to bring it online:

 

disk online no driveletter.png

 

Is that all we need to do? Well, unfortunately no it isn’t. Let’s peel back the onion a bit more and take a look at the attributes which are set on the disk. For these, we’ll use the venerable diskpart tool.

 

attributes.png

 

OK, we have some work to do. Our volume is read-only, hidden, marked as a snapshot, and doesn’t have a drive letter. We’ll have to modify these attributes with diskpart before we can use our newly created clone.
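
 

Doing that cleanup by hand would look roughly like this in diskpart, where the disk and volume numbers are placeholders for whatever your clone shows up as:

 

DISKPART> select disk 2
DISKPART> attributes disk clear readonly
DISKPART> online disk
DISKPART> select volume 5
DISKPART> attributes volume clear readonly
DISKPART> attributes volume clear hidden
DISKPART> attributes volume clear shadowcopy
DISKPART> assign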

 

Enter PowerShell.

 

Get-NimVolume will help us discover our Nimble volumes, and Set-NimVolume will let us programmatically bring the volume online, set the correct attributes, and give it a drive letter. Let’s find our clone volume with Get-NimVolume:

 

get-nimvolume.png

 

OK cool, we found our clone volume. How do we go about bringing it online and setting those attributes? Here’s where we would use Set-NimVolume. Let’s see what parameters this cmdlet takes:

 

set-nimvolume-help.png

 

As you can see here, we can specify the volume we want to modify by feeding Set-NimVolume a Windows disk ID, or a Nimble volume serial number, which we can get from Get-NimVolume. To make things even easier, we can simply pipe the output from Get-NimVolume directly into Set-NimVolume.

 

Since we’ve already found our clone volume with Get-NimVolume, let’s just send it over to Set-NimVolume and set those attributes:

 

set-nimvolume.png

 

Boom. Done! In one fell swoop, we’ve brought the volume online, set all the appropriate attributes, and requested a drive letter. We can verify all of this using Disk Management and diskpart if we’d like to:

 

attributes-after.png

 

disk online.png

 

Now our volume is ready for use.  We can mount up the database contained within it for test and development, data recovery, or whatever purpose we had in mind. It doesn’t take a big stretch of the imagination to figure out workflows in which this can be extremely useful. 

 

For example, what if we wanted an automated way to refresh a test/dev database on a daily basis from a nightly VSS synchronized snapshot of a source database?  That’s not something we want to do manually, but with these cmdlets we can fully automate it as part of a larger PowerShell script.

 

Or perhaps we are replicating our Nimble volumes to a downstream array and we want to automate the connection process after promoting our downstream volumes?

 

Or perhaps we’re using Site Recovery Manager from VMware and want to fully automate the workflow for our VMs which are using in-guest iSCSI connections and taking advantage of Nimble’s application specific VSS synchronization?

 

Enjoy!