
App Integration


Nimble OS 4 has now been released as a GAC (General Availability Candidate), and with this new OS come new features for our customers. This blog concentrates on the new integration between Nimble OS 4 and Microsoft Hyper-V and is the latest in the Nimble OS 4 - Detailed Exploration Series.

 

Nimble has for many years provided Windows host utilities that give our users Microsoft Volume Shadow Copy Service (VSS) integration for Microsoft SQL Server and Exchange Server. We did already provide integration with Hyper-V in the form of a compatible Nimble VSS provider, which could be leveraged by third-party backup applications; with the release of Nimble OS 4, however, we add support for a VSS requestor for Microsoft Hyper-V, allowing our customers to schedule snapshot backups, manage snapshot retention, and replicate Hyper-V workloads directly from the array UI.

 

The first step to use this functionality is to ensure you have compatible array firmware (4.x), which you can request from Nimble Support (once GA, it will be available to all Nimble Storage customers via InfoSight). Next, you need to install the Nimble Windows Toolkit (NWT) on your Hyper-V hosts; it's this toolkit that includes our VSS requestor and provider for Hyper-V environments.

 

Once you have these in place, configuring Hyper-V backups is very simple. Below is a screenshot of the workflow in Nimble OS 4. Notice that the UI has changed as we've adopted HTML5 in this release, but the workflow to protect volume collections is identical to prior versions. A deeper dive into the new GUI in Nimble OS 4 is available here.

 

HyperV UI.png

 

Now when you protect a volume, you can select the “Application” type from the drop-down, and along with Microsoft SQL and Exchange there is now “MS HyperV”. When this option is selected, at the scheduled snapshot time we communicate with the Nimble VSS requestor on the Hyper-V host(s), which in turn speaks to the Hyper-V and CSV VSS writers to quiesce the virtual machines prior to taking the Nimble snapshot. The virtual machines require Hyper-V Integration Services installed inside them, as Hyper-V uses this to take a guest snapshot in coordination with the Hyper-V and CSV VSS writers.

 

The “Application Server” field should contain the cluster name resource (or the standalone host), as an FQDN or IP address, so we know which host(s) we need to communicate with. As usual, you'll need to configure the snapshot schedule, the retention for the different schedules, and whether you would like to replicate to another Nimble Storage array.

 

A few points worth noting about the integration:

 

  • We support standalone volumes or Cluster Shared Volumes (CSVs) – we don't, however, support traditional clustered disks. I would expect most Hyper-V users to deploy on CSVs. A mix of standalone volumes and CSVs in the same volume collection isn't allowed, so if you have both, split them out into separate volume collections.
  • We support NTFS and ReFS.
  • The minimum Hyper-V OS version supported is Microsoft Windows Server 2012 R2 (with the October 2014 update rollup). Windows Server 2016 is of course supported.
  • If Hyper-V fails to take a guest snapshot for a particular virtual machine via Integration Services, the backup will continue, i.e. the Nimble snapshot will still be taken. The Windows event log will contain a report of which VMs were backed up successfully and which failed.
  • While most users will leverage the Nimble UI to configure backups, you can still configure this via the CLI or REST API. For both of these you need to specify the “app_id” as “hyperv” (see the hedged sketch after this list).
  • The snapshot will fail if a VM sits in the root of the mount point, so place virtual machines under a folder on the disk.
  • When recovering from a VSS snapshot or importing from a cloned volume, the VM will have a backup checkpoint. Apply this checkpoint prior to starting the virtual machine.
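For the REST route, here is a minimal, hedged PowerShell sketch of creating a Hyper-V protected volume collection. The /v1/tokens endpoint, the X-Auth-Token header and the "hyperv" app_id all appear elsewhere on this page; the volume collection attribute names (app_sync, app_server) and the /v1/volume_collections endpoint are my assumptions and should be verified against the REST API reference on InfoSight before use.

# Hedged sketch: create a volume collection with app_id "hyperv" via the REST API.
# On PowerShell 6+ you may need -SkipCertificateCheck, since the array uses a self-signed certificate.
$base = "https://<ARRAY IP>:5392/v1"
$auth = @{ data = @{ username = 'admin'; password = 'admin' } } | ConvertTo-Json
$token = (Invoke-RestMethod -Method Post -Uri "$base/tokens" -ContentType 'application/json' -Body $auth).data.session_token
# Attribute names below (other than app_id) are assumptions - check the InfoSight REST reference.
$volcoll = @{ data = @{ name = 'hyperv-volcoll'; app_sync = 'vss'; app_id = 'hyperv'; app_server = 'hyperv-cluster.example.com' } } | ConvertTo-Json -Depth 3
Invoke-RestMethod -Method Post -Uri "$base/volume_collections" -Headers @{ 'X-Auth-Token' = $token } -ContentType 'application/json' -Body $volcoll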

 

I hope you found this blog useful; if you have comments or questions, post them below.

docker cerifited program 3.png

Photo Credit: Docker Blog

Gartner says 70% of IT organizations planning a private PaaS will deploy a container service instead.[1] IDC predicts that achieving portability and hardware efficiency for traditional applications will be the most popular production use for containers over the next four years.[2] Both agree that DevOps adoption is accelerating for enterprises.[3,4]

Whether you think Docker containers are for IT, DevOps, or both, the benefits are impossible to ignore. Containers have long been popular with developers. Yet enterprise IT teams have had many questions about how to best adopt containers for mainstream use. Here are some top questions in the minds of customers and prospects:

  1. What tools can help me secure, manage, and govern my data, applications, and environment? Where do I find them?
  2. How much time and effort will it take to evaluate each of these tools and technologies, and integrate them into a working system?
  3. How much of the support burden will fall on my team if these technologies don’t work as expected?
  4. How do I choose a foundation that not only meets today’s requirements, but will also benefit from lasting future community innovation and support?
  5. When containerizing mainstream applications, how will I store, move, and manage their production data?

 

Find the answers to these questions on the Nimble Storage blog.

Taking some inspiration from Ansible, I've put together a Ruby script that wraps the Bimbly REST client and digests YAML files to generate calls to the Nimble device. You don't need to understand Ruby to use the script, just YAML, and to know that the first two levels of the YAML config are array structures delimited by dashes.

 

How to Use (Ruby >= 2.0):

 

gem install bimbly

 

./nimble_playbook config.yml playbook.yml

 

And the rest is done by the script and library. The config.yml file is where the information about how to connect to a Nimble device is stored, and the playbook.yml file has all the calls and details that will be sent to the device.

 

Sample YAML Playbook

 

playbook_yaml.PNG

 

The script can be found here along with a README and some sample playbook/config files:

 

bimbly/bin at master · zumiez/bimbly · GitHub

 

I have also updated the Bimbly library to make it possible to generate Nimble playbooks. When using the library in irb, you can call 'save' rather than 'call', which adds the REST call to an array instead of sending it off to the device. You can review the REST calls as they are queued up with the 'review' method. Once you decide that you have all of the necessary calls queued up, you can output them to a YAML file with 'create_playbook(filename)' and the library will write the YAML to disk, to be used by the nimble_playbook script at any time.

 

I hope to add functionality to the library in the future to help with generating templates, and to have some sample templates ready as well.

 

Special thanks to Nimble SE alawrence for suggesting YAML config files for use with the REST client and for allowing me to test out my script.

SAP HANA is the foundation for the future of SAP business software. SAP announced in 2015 that it will end mainstream support for its software running on traditional RDBMSs in 2025. While that date seems a long way off, time flies, and SAP customers are already moving quickly to SAP HANA-based systems.

 

As a result, hardware vendors have found it necessary to have their products certified for SAP HANA in order to stay relevant and meet the needs of the thousands of SAP customers.

 

The SAP HANA certification process is well documented and supported by SAP. While it can be time consuming, the benefits to both customers and vendors are significant. For customers, it provides a level of assurance that the hardware they purchase will meet the stringent performance needs of this new technology. For vendors, it eliminates the frequently asked question, “will it work with SAP?”

 

Appliance vs. Tailored Data Center Integration (TDI)

SAP HANA is delivered using two models: appliance and tailored data center integration (TDI). The appliance model combines servers, network, and storage in a single bundle that is certified using SAP tools and sold as a complete unit. TDI allows separate server, network, and storage vendors to certify their components individually, allowing customers to choose their preferred vendor for each piece of a SAP HANA solution.


Scale-up vs. Scale-out

When SAP HANA was first introduced, the maximum supported memory in a single server was 512GB. This was related to the processor-to-memory ratio SAP recommended, as well as to processor technology. It meant that even for 1TB systems, customers had to create a scale-out environment, combining multiple servers and distributing the database across them. Scale-out environments required a shared storage architecture, which allowed a standby server to attach a failed server's storage and continue processing.

 

Smaller systems could run in a scale-up environment, with the persistent storage installed directly in the server. Many vendors found it necessary to use spinning disk for the data area and flash storage for the log area in order to meet the log-write latency required by SAP.

 

As processors have advanced, with more cores and sockets, the amount of memory supported in a single server has increased with some server vendors offering SAP HANA solutions with up to 32TB of memory.

 

As a result, the need for large scale-out systems to support even modest SAP HANA deployments has been greatly reduced. Storage systems that must support 8, 16, or more SAP HANA servers have also become less necessary, except for extremely large customers.


Shared storage for SAP HANA

With memory for scale-up systems increasing, the question becomes “Why do I need shared storage for my SAP HANA systems, if I can fit everything on a single server?”

 

The answer has several components. First, why did we move to shared storage originally? Much of the motivation was related to the resource silos that were created by servers with direct attached storage. This was an inflexible solution that did not allow for resource sharing. If I had too much capacity in one server, I couldn’t easily allocate it to another server that needed the capacity. Shared storage allows customers to grow and scale their environments seamlessly, allocating capacity for individual servers that need it and achieving better capacity utilization.

 

Second, upgrading a server meant a migration that required a significant amount of downtime to migrate data between the environments. This downtime could prove costly in SAP systems which run all the core business processes for customers. Adding capacity to a single server could also require downtime and, at the very least, reconfiguration. Shared storage reduces the downtime significantly. In many cases it is just a matter of turning one server off, attaching the storage to the new server, and turning the new server on.

 

Finally, modern storage systems offer features and functionality that a server with internal storage simply cannot match. Data reduction technology like compression improves space utilization and even performance. Snapshot backups can be created in seconds, allowing multiple recovery points throughout the day to reduce risk. Those snapshots also provide the foundation for zero-copy clones, which allow customers to create systems for development, testing, and troubleshooting in a matter of minutes instead of hours or even days. Modern storage systems also offer data protection capabilities like replication and integration with third-party vendor tools.

 

We at Nimble Storage have invested the time and energy to achieve SAP HANA certification for both our Adaptive Flash and All Flash storage arrays. As part of our certification, we have also created deployment guides which describe the configuration required to implement SAP HANA on Nimble Storage arrays. We have guides for both All Flash and Adaptive Flash arrays available on InfoSight. We also have a sizing questionnaire to help customers determine the requirements for their SAP HANA implementations.

 

We believe our systems offer great performance and functionality for SAP HANA.

I've written a few blogs on Nimble Connect over the last few months about PowerShell and PowerCLI and how to leverage the Nimble Storage REST API (see the details at the end).

I often get questions such as "where/how do I start?" and "where should I look for guides?"

 

Julian Cates, one of our marketing engineers, once wrote a great blog about PowerShell, Linux, curl and our API integration; it's a great place to start.

 

In this blog I will review REST API integration using the Postman application.

 

What is Postman? The official description is "A powerful GUI platform to make your API development faster & easier, from building API requests through testing, documentation and sharing." The description is very tempting, and the free version is a great entry-level tool that can be used to test API calls.

 

Postman provides a great feature called "collections": a collection is basically a folder containing pre-configured API calls that can be imported as a complete set. An example can be found on the VMware Code site for vCenter 6.5.

Here is a capture of their integration:

 

Postmanvmware.jpg

 

Nimble Storage does not offer such a collection (yet).

I've created a very limited collection for you to use, and with the help of this guide you should be able to use the REST API against a Nimble array from Postman.

 

The collection:

Postman-nmbl.jpg

The following collection allows you to:

  • Get a token from the Nimble array.
  • List volumes (name and ID).
  • List all volumes' details.
  • List a single volume's details.
  • Create a new volume.
  • Create a new snapshot of a volume.
  • List all snapshots of a volume.
  • Create a new clone from a snapshot.
  • List ACLs.
  • List ACL details.
  • Create a new ACL for a volume / clone.

 

To import the collection, drag the collection attached to this blog into Postman:

Postman-import.jpg

 

Quick note: I've noticed that with Postman you sometimes need to hit Send twice; if you get something like "Could not get any response", try hitting Send again.

 

 

Here are the details you MUST change:

In each of the requests you will see a place for the URI, something like this: https://<ARRAY IP>:5392/v1/tokens. You must change it to either the IP or the FQDN of the array; the end result should look like this, for example:

https://10.66.32.44:5392/v1/tokens

Postman-uri.jpg

 

The Token request requires a user name and a password; these can be changed in the Body tab:

Postman-body.jpg
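For reference, the body of the token request is a small JSON document along these lines (the same data/username/password structure used in the Chef example further down this page); substitute your own credentials:

{
  "data": {
    "username": "admin",
    "password": "admin"
  }
}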

In all the other requests you must change the $token in the Headers tab to the token received from the first Token call.

Here is the Token result:

Postman-token.jpg

 

Taking the "session_token" and using it as the authorization for all other calls looks like this example from the "list all volumes" call, with the results:

 

Postman-header.jpg

 

So, now that we understand how to modify the collection with your own details, here is a workflow you can easily follow to check the integration:

 

  1. Get a token.
  2. List all volumes.
  3. Create a new snapshot of a selected volume.
  4. Create a new clone volume from the snapshot.
  5. List ACLs.
  6. Create new ACLs for the clone.

 

Get token:

Postman-token.jpg

List all volumes (we will use the id from this output):

Make sure you change the token in the header!!

Postman-vol.jpg
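As an aside, the same "list all volumes" request can be issued outside of Postman; a minimal PowerShell equivalent is sketched below (the X-Auth-Token header is the same one used in the Chef example further down this page, and on PowerShell 6+ you may need -SkipCertificateCheck because of the array's self-signed certificate):

Invoke-RestMethod -Method Get -Uri "https://<ARRAY IP>:5392/v1/volumes" -Headers @{ 'X-Auth-Token' = '<session_token from the token call>' }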

 

Create a new snapshot of a selected volume (based on the ID):

Make sure you change the token in the header!!

This is the Body (JSON):

     Notice that the vol_id is the ID of the volume shown in the previous capture.
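For reference, the snapshot-creation body is a small JSON document along these lines; the name value is just an example, and the vol_id is the one returned by the volume listing above (double-check the field names against the REST API reference on InfoSight):

{
  "data": {
    "name": "postman-demo-snapshot",
    "vol_id": "<vol_id from the previous call>"
  }
}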

Postman-snapshot.jpg

After hitting Send, we can see the result:

 

Postman-snap-details.jpg

Create a new clone from this snapshot:

Make sure you change the token in the header!!

The Body (JSON) contains the ID of the snapshot created one step before; hitting Send will create a new clone from the snapshot.

Postman-clone.jpg

List ACL

Make sure you change the token in the header!!

Postman-acl.jpg

Create a new ACL for the clone:

Make sure you change the token in the header!!

The Body (JSON) contains the ID of the initiator group listed one step before; hitting Send will create a new ACL for the clone we've created:

Note: the vol_id is that of the clone we've created, as reported back at creation time; the initiator group id is reported one step before.
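For reference, the body looks something like the sketch below. The vol_id and initiator_group_id fields match what is described above; the apply_to attribute and its value "both" are my assumption based on the CLI's --apply_acl_to option shown elsewhere on this page, so double-check them against the REST API reference:

{
  "data": {
    "vol_id": "<id of the clone we created>",
    "initiator_group_id": "<initiator group id listed one step before>",
    "apply_to": "both"
  }
}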

 

Postman-acl-add.jpg

 

Here is how the result looks on the Nimble array UI:

 

postman-volume.jpg

 

If you're wondering about the Windows2012R2cluster ACL, those entries are inherited from the snapshot and carried over to the clone; they can be removed as needed.

 

And there you have it, another integration of Nimble Storage via the REST API.

 

Feel free to comment with any questions or feedback.

 

Thanks,

@Moshe_Blumberg

 

Links:

Nimble OS 2.3 – REST API

Nimble Storage InfoSight - REST API reference from Infosight.

https://blogs.vmware.com/code/2017/02/02/getting-started-vsphere-automation-sdk-rest/

https://www.getpostman.com/

In recent years Microsoft has gone through a transformation to attract more developers to its ecosystem: native Linux binaries in an Ubuntu sandbox, open-sourcing PowerShell, and a free cross-platform source code editor, Visual Studio Code. A year ago the concept of Windows Containers was quite foreign, but here they are, in all their glory, with the full unambiguous Docker experience. Nimble continues to invest in our Microsoft relationship and we're the first Microsoft and Docker ecosystem partner to provide a Docker Volume plugin for Windows Containers.

 

By example: Minio

To get into the driver's seat as a developer and understand how to build, ship and run a Windows application inside a Windows Container, I wanted to explore each step. Minio is an AWS S3-compatible object storage server written in Go, and there's already an official Go image built from microsoft/nanoserver and microsoft/windowsservercore. It doesn't have git installed, so I needed to add git to be able to pull down the Minio source code and build it.

 

Please find Dockerfile and Docker Compose file attached at the bottom!

 

Dockerfile

FROM golang:nanoserver

MAINTAINER michael.mattsson@nimblestorage.com

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]


ENV MINIO_BRANCH release

ENV GIT_VERSION 2.12.0

ENV GIT_RELEASE v${GIT_VERSION}.windows.1

ENV GIT_ZIP MinGit-${GIT_VERSION}-64-bit.zip

ENV GIT_URL https://github.com/git-for-windows/git/releases/download/${GIT_RELEASE}/${GIT_ZIP}

 

RUN New-Item -Path C:\Minio -Type Directory; \

    Invoke-WebRequest $env:GIT_URL -OutFile git.zip; \

    Expand-Archive git.zip -Destinationpath C:\git; \

    Remove-Item git.zip; \

    $env:PATH = 'C:\Minio;C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH; \

    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH; \

    New-Item C:\gopath\src\github.com\minio -Type Directory; \

    cd C:\gopath\src\github.com\minio; \

    git clone -b $env:MINIO_BRANCH https://github.com/minio/minio; \

    cd minio; \

    git submodule update --init --recursive; \

    go build; \

    Copy-Item minio.exe /Minio; \

    cd \; \

    Remove-Item C:\gopath\src -Recurse -Force

 

EXPOSE 9000

ENTRYPOINT ["minio.exe"]

 

 

So far, not much has changed. All the standard verbs are there, slashes go the wrong way, and I can admit I learned some PowerShell in the process. Building and running your application is much simpler with Docker Compose. I simply copied one of the example Docker Compose files I had for running Minio on Linux; with very little modification I could bring up the application and serve from a Nimble volume.

 

docker-compose.yml

version: '3'

services:

  minio:

    build: .\minio

    ports:

      - "9000:9000"

    volumes:

      - export:C:\Export

    environment:

      MINIO_ACCESS_KEY: nimble

      MINIO_SECRET_KEY: nimblestorage

    command: server C:\\Export\\Buckets

 

volumes:

  export:

    driver: nimble

    driver_opts:

      sizeInGiB: 1000

      perfPolicy: "Windows File Server"

      description: "Rolling my own AWS S3"

 

networks:

  default:

    external:

      name: nat

 

We’re now able to bring up the project:

 

docker-compose.exe up

PS C:\Users\mmattsson> docker-compose.exe up

Creating volume "dev_export" with nimble driver

Building minio

Step 1/11 : FROM golang:nanoserver

---> 2c8f2f11bd5a

Step 2/11 : MAINTAINER michael.mattsson@nimblestorage.com

---> Using cache

---> 3fe516998a09

Step 3/11 : SHELL powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';

---> Using cache

---> bf8a51e230ea

Step 4/11 : ENV MINIO_BRANCH release

---> Using cache

---> d20c97a62641

Step 5/11 : ENV GIT_VERSION 2.12.0

---> Using cache

---> c23f8f4b6020

Step 6/11 : ENV GIT_RELEASE v${GIT_VERSION}.windows.1

---> Using cache

---> 099fedfe35be

Step 7/11 : ENV GIT_ZIP MinGit-${GIT_VERSION}-64-bit.zip

---> Using cache

---> cd4074ad15e9

Step 8/11 : ENV GIT_URL https://github.com/git-for-windows/git/releases/download/${GIT_RELEASE}/${GIT_ZIP}

---> Using cache

---> e5411a4e0774

Step 9/11 : RUN New-Item -Path C:\Minio -Type Directory;    Invoke-WebRequest $env:GIT_URL -OutFile git.zip;    Expand-Archive git.zip -Destinationpath C:\git;    Remove-Item git.zip;    $env:PATH = 'C:\Minio;C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH;    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH;    New-Item C:\gopath\src\github.com\minio -Type Directory;    cd C:\gopath\src\github.com\minio;    git clone -b $env:MINIO_BRANCH https://github.com/minio/minio;    cd minio;    git submodule update --init --recursive;    go build;    Copy-Item minio.exe /Minio;    cd \;    Remove-Item C:\gopath\src -Recurse -Force

---> Using cache

---> e95705a70bf1

Step 10/11 : EXPOSE 9000

---> Using cache

---> 4656141a2a56

Step 11/11 : ENTRYPOINT minio.exe

---> Using cache

---> b3e5f4a21ad4

Successfully built b3e5f4a21ad4

Image for service minio was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.

Creating dev_minio_1

Attaching to dev_minio_1

minio_1  | Created minio configuration file successfully at C:\Users\ContainerAdministrator\.minio

minio_1  |

minio_1  | Endpoint:  http://172.31.182.170:9000  http://127.0.0.1:9000

minio_1  | AccessKey: nimble

minio_1  | SecretKey: nimblestorage

minio_1  | Region:    us-east-1

minio_1  | SQS ARNs:  <none>

minio_1  |

minio_1  | Browser Access:

minio_1  |    http://172.31.182.170:9000  http://127.0.0.1:9000

minio_1  |

minio_1  | Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide

minio_1  |    $ mc.exe config host add myminio http://172.31.182.170:9000 nimble nimblestorage

minio_1  |

minio_1  | Object API (Amazon S3 compatible):

minio_1  |    Go:        https://docs.minio.io/docs/golang-client-quickstart-guide

minio_1  |    Java:      https://docs.minio.io/docs/java-client-quickstart-guide

minio_1  |    Python:    https://docs.minio.io/docs/python-client-quickstart-guide

minio_1  |    JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

minio_1  |

minio_1  | Drive Capacity: 1000 GiB Free, 1000 GiB Total

 

This looks simple enough, right? Getting to where Microsoft and Docker are today is no small feat: the full suite of Docker tools for Windows is there for developers to indulge in. Nimble announced the Windows Container plugin at Microsoft Ignite last year, and all parties have worked tirelessly on getting this technology out the door.

 

These are the capabilities that are expected at release:

 

Nimble Storage Windows Container plugin

PS C:\Users\mmattsson> docker volume create -d nimble -o help

Error response from daemon: create 6e9565253704c51483726f03609c47519e3844fe3a5d4424b530450ebe88aa3e:

Nimble Storage Docker Volume Driver: Create Help

Create or Clone a Nimble Storage backed Docker Volume or Import an existing Nimble Volume or Clone of a Snapshot into Docker.

Create options:

  -o sizeInGiB=X                  X is the size of volume specified in GiB

  -o size=X                        X is the size of volume specified in GiB (short form of sizeInGiB)

  -o description=X                X is the text to be added to volume description (optional)

  -o perfPolicy=X                  X is the name of the performance policy (optional)

                                  Performance Policies: Exchange 2003 data store, Exchange 2007 data store, Exchange log, SQL Server, SharePoint, Exchange 2010 data store, SQL Server Logs, SQL Server 2012, Oracle OLTP, Windows File Server, Other Workloads, DockerDefault

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

  -o encryption                    indicates that the volume should be encrypted (optional, dedupe and encryption are mutually exclusive)

  -o thick                        indicates that the volume should be thick provisioned (optional, dedupe and thick are mutually exclusive)

  -o dedupe                        indicates that the volume should be deduplicated (optional, requires perfPolicy option to be set)

Docker Volume Clone options:

  -o cloneOf=X                    X is the name of Docker Volume to create a clone of

Import Nimble Volume options:

  -o importVol=X                  X is the name of the Nimble Volume to import

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

  -o forceImport                  forces the import of the volume.  Note that overwrites application metadata (optional)

Import Nimble Volume as Clone options:

  -o importVolAsClone=X            X is the name of the Nimble Volume to clone

  -o snapshot=X                    X is the name of the snapshot to clone from & is applicable when importVolAsClone option is specified. (optional, latest snapshot of volume is used if not specified)

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

 

Yes, the capabilities of cloning and importing Nimble volumes are all there for the Windows Container DevOps engineer to reap the benefits of.

Please come see us at DockerCon ’17 in Austin, TX in a few weeks. We’ll be demoing Windows Containers amongst a slew of other cool things Nimble enables you to do in your container environments.

 

Useful links on Windows Containers

Build and run your first Docker Windows Server container: https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container

Docker Captain Stefan Scherer’s blog: https://stefanscherer.github.io/ (endless useful information on Windows Containers)

Windows Container on Windows Server: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server

 

--

Here and there I get an inquiry about using Chef with Nimble arrays. My standard response has been that we have a comprehensive REST API, so please use that. That remains my standard answer, but now I have a blog post to point the inquiries to.

 

Chef has a built-in http_request resource that lends itself to manipulating remote REST APIs. However, it falls short in the department of retrieving and processing the returned content. Chef has a class you can call for this purpose, Chef::HTTP, and it lends itself perfectly to easy retrieval of JSON content from a REST API.

 

Another caveat is that Nimble arrays serve the REST API over SSL connections with self-signed certificates, which can be a bit of a pain to deal with in foreign libraries and programming languages (note that the author is a complete Ruby illiterate).

 

I’ve put together a small cookbook with just enough meat in it to use with chef-client. The latest stable version of Chef DK (Chef Downloads) is being used in the examples below.

 

Sample Cookbook

Assume accounting has asked for a CSV list of volumes from your array for their number-crunching activities. I’m going to create such a list for them to pick up at their convenience. Fire and forget. Here's the zipfile.

 

First I need to figure out which attributes are relevant for my cookbook and how I can easily override them if we expand with more Nimble groups. This is my attributes file, cookbooks/nimble/attributes/default.rb:

# Where to reach the API endpoint

default['nimble']['array'] = '192.168.59.64:5392'

default['nimble']['scheme'] = 'https'

default['nimble']['api_version'] = 'v1'

 

# This is the base dir for Chef DK

default['nimble']['certs'] = '/opt/chefdk/embedded/ssl/certs'

 

# Credentials

default['nimble']['auth'] = {

  'data': {

    'username': 'admin',

    'password': 'admin'

  }

}

 

With my attribute layout, I can now craft my first recipe, cookbooks/nimble/recipes/certs.rb

# Download certificate

bash 'download_cert' do

  code <<-EOH

    openssl s_client -connect #{node['nimble']['array']} \

      -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM \

       > #{node['nimble']['certs']}/#{node['nimble']['array']}.pem

    cat #{node['nimble']['certs']}/#{node['nimble']['array']}.pem >> #{node['nimble']['certs']}/cacert.pem

    EOH

end

 

This downloads the Nimble array certificate so that subsequent Chef connections to the array will succeed.

 

To download the certificate:

# chef-client --local-mode -r 'nimble::certs'

 

I can now move on to the more interesting pieces. Assume this recipe for cookbooks/nimble/recipes/volumes::list.rb

# Construct URL from attributes

nimble_array = node['nimble']['scheme'] + '://' + node['nimble']['array'] +

'/' + node['nimble']['api_version']

 

# My query

nimble_query = '/volumes/detail?fields=name,size,vol_usage_compressed_bytes'

 

# Get a token

nimble_token = JSON.parse(Chef::HTTP.new(nimble_array).post('/tokens',

  node['nimble']['auth'].to_json))

 

# Simply list my volumes

nimble_volumes = JSON.parse(Chef::HTTP.new(nimble_array).get(nimble_query,

  {'X-Auth-Token' => nimble_token['data']['session_token']}))

 

# Write JSON response to a list with an erb template

template '/tmp/nimble_volumes_list' do

  source 'nimble_volumes_list.erb'

  owner 'root'

  group 'root'

  mode '0644'

  variables :my_volumes => nimble_volumes['data']

end

 

Very easy to follow recipe and for clarity, this is what my cookbooks/nimble/templates/nimble_volumes_list.erb looks like:

name,size,used

<% @my_volumes.each do |object| %>

<%= "#{object['name']},#{object['size']},#{object['vol_usage_compressed_bytes']/1024/1024}" %>

<% end %>

 

Now, for the entrée:

$ chef-client --local-mode -r 'nimble::volumes::list'

 

This will create /tmp/nimble_volumes_list, in my case this is what it contains:

name,size,used

dlvm-docker-tme-lnx4-xenial,102400,1406

dlvm-docker-tme-lnx2-xenial,102400,2546

dlvm-docker-tme-lnx3-xenial,102400,3016

dlvm-docker-tme-lnx1-xenial,102400,2451

dlvm-docker-tme-lnx5-xenial,102400,1408

dlvm-docker-tme-lnx9-xenial,102400,1406

dlvm-docker-tme-lnx8-xenial,102400,1415

dlvm-docker-tme-lnx6-xenial,102400,1406

dlvm-docker-tme-lnx7-xenial,102400,1556

dlvm-docker-tme-lnx1-centos,102400,3203

dlvm-docker-tme-lnx2-centos,102400,2859

dlvm-docker-tme-lnx6-centos,102400,2150

dlvm-docker-tme-lnx9-centos,102400,2318

dlvm-docker-tme-lnx8-centos,102400,2383

dlvm-docker-tme-lnx4-centos,102400,2499

dlvm-docker-tme-lnx3-centos,102400,2842

dlvm-docker-tme-lnx5-centos,102400,2669

dlvm-docker-tme-lnx7-centos,102400,2508

 

Now accounting can crunch their numbers all they want!

 

To run this recipe against an array other than my default hardcoded one, a custom JSON attribute file may be used to override the defaults in the recipe. Something like this (same logical structure as the attribute file):

{

  "nimble": {

    "array": "sjc-array875.lab.nimblestorage.com:5392",

    "auth": {

      "data": {

        "username": "admin",

        "password": "admin"

      }

    }

  }

}

 

So, for dessert:

# chef-client --local-mode -r 'nimble::volumes::list' -j nimble.json

 

That will however overwrite the contents of /tmp/nimble_volumes_list.

 

Chef has been around for a very long time and there are countless resources available online to learn from. I’m a Chef apprentice and these examples may not represent best-practice patterns for creating cookbooks, but they demonstrate the principle: we have a REST API and you may manipulate it from anything, including Chef.

 

Dive into the NimbleOS REST API: Nimble Storage InfoSight

Learn more about Chef: Deploy new code faster and more frequently. Automate infrastructure and applications | Chef

A few weeks back I wrote to the community about VVol restore and how such an operation can be performed (if you missed it, here's the link).

How many times does one need to restore a full virtual machine? I can think of a few cases where we've seen it, but more often we need to restore a single object from a single drive.

We don't want to roll back the whole virtual machine – we just want to attach a clone to a location of our choosing.

 

With Virtual Volumes we can perform such an operation easily.

Looking at the Nimble Storage VVol structure, each VM VMDK is a dedicated Nimble volume, and the file system on those volumes is no longer VMFS but the guest OS file system, meaning we can use a few different methods to recover such a partition/VMDK/volume.

 

There are 3 methods to recover a VVol volume; I'm going to mainly write about the first one today:

1. RDM: mapping a Nimble Storage cloned volume to a VM.

2. In-guest iSCSI (you can use NCM).

3. Adding metadata to a clone and attaching it to a newly created VM.

     Note: This is a slightly more complex method and Support will need to be involved.

 

All 3 recovery methods involve creating a Nimble clone and presenting it in some way to a guest OS.

 

RDM, mapping of a Nimble Storage cloned volume to a VM.

 

As I've said before, the file system on the Nimble volume and clone will be determined by the guest OS.

Starting from scratch in vSphere, you will want to determine which VM volume you wish to recover; this can be done by editing the virtual machine and expanding the hard disk view:

 

mm.jpg

 

 

There are a few notes to take here:

The Nimble API will replace any "_" with a "-", as underscores are not allowed on the array (so if you're using the search box on the array UI, be sure to remember that).

It's also important to take note of the file name: often Hard disk 1 will NOT be "vvol-recovery-mb-1.vmdk" but "vvol-recovery-mb.vmdk" (unless changed by the user). This matters because you don't want to waste time recovering the wrong volume.

 

You can also use the array CLI to view the volumes associated to the virtual machine:

 

# vm --info vvol-recovery-mb

VM ID: 5022156e-ebac-0c82-cff0-e44a40799d9d

VM name: vvol-recovery-mb

Volumes:

        vvol-recovery-mb

        vvol-recovery-mb-1.vmdk

        vvol-recovery-mb-6929c51f.vswp

        vvol-recovery-mb.vmdk

 

Now that we know which volume we want to recover, we can create a clone from our snapshot.

mm.jpg

 

You can also do it from the array CLI:

 

# vol --clone vvol-recovery-mb.vmdk --snapname vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-18::20:13:00.000 --clonename vvol-recovery-mb.vmdk-clone

 

Now that we have the new clone, we want to give it an ACL based on where we want to connect the volume.

In this example, I will be using the same ACL as the VVol VM.

 

mm.jpg

 

You can also do it from the array CLI:

# vol --addacl vvol-recovery-mb.vmdk-clone --apply_acl_to both --initiatorgrp vvol-initiator-group

 

 

At this stage we have the cloned volume ready, with an ACL.

The next step will be scanning the ESXi host.

 

mm.jpg

 

After the rescan is complete, we can move on to the next step: adding the RDM to the virtual machine.

 

A note: vSphere will need to write a descriptor file (also known as a mapping file), a small text file that records which device is mapped where.

Why am I mentioning it? Well, vSphere can only write this file to a VMFS datastore, so if we are mapping the new clone to a VVol VM (or an NFS one) we will be required to provide a VMFS datastore.

Nothing to worry about, just another click on the way to your full recovery.

 

TIP: The EUI of a device is also the Nimble volume serial number; this can be used when mapping the device, and it can be found on the volume page in the Nimble UI.

 

Go back to the VM you want to use, and Edit the VM:

 

Edit the VVol VM (or any other VM you want to add the volume to).

Add a new device: RDM Disk.

Select the new device (tip: the eui equals the volume serial number).

You can't complete the wizard at this point because the pointer file requires a VMFS datastore:
   Expand the new RDM.
   Location: "Store with virtual machine" >> expand it, select Browse >> select a VMFS datastore. (This is a VMware limitation.)

 

 

 

 

 

And hit the save button.
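If you would rather do this mapping from PowerCLI than from the Web Client, a rough sketch follows. The cmdlets are standard PowerCLI, but the host and VM names and the eui value are placeholders, and for a VVol or NFS VM you may still need to point the descriptor file at a VMFS datastore as described above (this is essentially what the attached script automates):

# Hedged PowerCLI sketch: rescan, find the cloned Nimble volume by its eui (the volume serial number) and map it as a physical RDM.
$esx = Get-VMHost "esx01.example.com"
Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null
$lun = Get-ScsiLun -VmHost $esx -CanonicalName "eui.<volume serial number>"
New-HardDisk -VM (Get-VM "vvol-recovery-mb") -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName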

 

The next and final step depends on the guest OS: you will need to bring the disk online.

Note that you might need to clear reservations or read-only status if presenting to the same guest OS that is part of a cluster or a DAG (see KBA KB-000047 and the related post).

 

 

That's it – all done and ready for you to recover whatever object is needed.

 

Now, what if I told you that you can do it all with a PowerCLI and Nimble API script?

 

The attached script will do it all for you: identify the volumes you need to clone, clone the volume, add the ACL, rescan the host and attach it to the destination VM.

A side note: the VVol VM must be powered on during the recovery, as I pull the ACL details from it.

 

Here is a little look at how it works:

 

.\vvolrecovery.ps1

 

Found PowerShell version 4 which is supported, continuing.

FQDN or IP of the Nimble array: 10.66.32.23

User name: admin

Password for the Nimble array: password

FQDN or IP of vCenter: 10.66.33.46

Use administrator@vsphere.local for vCenter user? Default is yes. hit enter to continue, enter No for new user:

Using  administrator@vsphere.local as vCenter user

Password for vCenter: password

######################################################

Connecting to vCenter: 10.66.33.46.

Connected to vCenter: 10.66.33.46.

######################################################

Found the following VVol VMs

 

Name           Folder PowerState Id

----           ------ ---------- --

EMEA-VM65-test vm      PoweredOn VirtualMachine-vm-205

VMUG           vm     PoweredOff VirtualMachine-vm-287

tt-mm          vm     PoweredOff VirtualMachine-vm-293

VVol-demo1     vm      PoweredOn VirtualMachine-vm-324

 

 

######################################################

Name of VVol VM to recover a VMDK from: VVol-demo1

######################################################

 

Found VM VVol-demo1 from input.

 

Found 1 VM with this name.

 

Found VM VVol-demo1 with ID VirtualMachine-vm-324 in input.

 

Moving to next step.

######################################################

 

Name        Filename

----        --------

Hard disk 1 [testbedvvol01-23] rfc4122.263388e2-fd67-457d-bfe8-5474803648f3/VVol-demo1.vmdk

Hard disk 2 [testbedvvol01-23] rfc4122.263388e2-fd67-457d-bfe8-5474803648f3/VVol-demo1_1.vmdk

 

######################################################

######################################################

What is the number of Hard Disk you would like to recover (1/2/3/4..)?: 1

Will recover Hard Disk Hard Disk 1

######################################################

You can choose to recover the VMDK to a new virtual machine

This Virtual machine MUST be on the same ESX host

If this is another VVOL VM (Or NFS), we will need a VMFS datastore for RDM descriptor file (small text file)

######################################################

Use same VM for recovery? Default is yes. hit enter to continue, enter No for new new VM:

######################################################

Using  VVol-demo1 as the VM to attach the VMDK to

Need a VMFS datastore for RDM descriptor file

 

Name       FreeSpaceGB CapacityGB

----       ----------- ----------

Datastore2     274.325    299.750

 

######################################################

 

Name of VMFS datastore to use for a very small file: Datastore2

######################################################

 

Found datastore Datastore2 from input.

######################################################

Found the following volume for this VVol VM hard disk.

Name       FreeSpaceGB CapacityGB

----       ----------- ----------

Datastore2     274.325    299.750

 

name            size online

----            ---- ------

VVol-demo1.vmdk 3072   True

 

######################################################

 

Found the following snapshots for this VVol VM volumes

 

vol_name        name

--------        ----

VVol-demo1.vmdk VVol-demo1-569e7fbad65b9423-0001-hourly-2017-03-02::02:05:00.000

VVol-demo1.vmdk VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:07:00.000

VVol-demo1.vmdk VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:22:00.000

VVol-demo1.vmdk VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:37:00.000

VVol-demo1.vmdk VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:52:00.000

######################################################

Please copy paste full Common snapshot name: VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:22:00.000

######################################################

Using VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:22:00.000 to create a clone of the volume.

######################################################

Use VVolCloneForRecoveryOperationVVol-demo1 as the clone volume name?

 

Default is yes. hit enter to continue, enter No for new clone name: no

Clone name: VVolCloneForRecoveryOperationVVol-demo2

Using  VVolCloneForRecoveryOperationVVol-demo2 as cloned volume name.

######################################################

name                   : VVolCloneForRecoveryOperationVVol-demo2

vol_state              : online

base_snap_name         : VVol-demo1-569e7fbad65b9423-0001-minutely-2017-03-02::02:22:00.000

id                     : 06569e7fbad65b9423000000000000000000000176

access_control_records :

parent_vol_name        : VVol-demo1.vmdk

parent_vol_id          : 06569e7fbad65b942300000000000000000000016d

######################################################

Volume VVolCloneForRecoveryOperationVVol-demo2 is ready, moving to host side actions.

######################################################

######################################################

Starting host rescan, please wait (this might take a while).

Done with host rescan.

######################################################

 

Volume ID 06569e7fbad65b9423000000000000000000000176 connection count 0, sleeping for 10 Seconds whilst waiting for host to connect.

Volume ID 06569e7fbad65b9423000000000000000000000176 connection count 0, sleeping for 10 Seconds whilst waiting for host to connect.

######################################################

Volume ID 06569e7fbad65b9423000000000000000000000176 ready with 2 connections.

######################################################

######################################################

Found eui.f77f3464dd0bfe106c9ce90099ebc580 on host, mapping RDM now.

######################################################

######################################################

Disk been attached to VM VVol-demo1

######################################################

SoftwareIScsiEnabled

--------------------

True

 

DeviceName        : vml.01000000006637376633343634646430626665313036633963653930303939656263353830536572766572

ScsiCanonicalName : eui.f77f3464dd0bfe106c9ce90099ebc580

Persistence       : IndependentPersistent

DiskType          : RawPhysical

Filename          : [Datastore2] VVol-demo1/VVol-demo1_1.vmdk

CapacityKB        : 3145728

CapacityGB        : 3

ParentId          : VirtualMachine-vm-324

Parent            : VVol-demo1

Uid               : /VIServer=vsphere.local\administrator@10.66.33.46:443/VirtualMachine=VirtualMachine-vm-324/HardDisk=2003/

ConnectionState   :

ExtensionData     : VMware.Vim.VirtualDisk

Id                : VirtualMachine-vm-324/2003

Name              : Hard disk 3

Client            : VMware.VimAutomation.ViCore.Impl.V1.VimClient

 

 

 

 

 

To clean up the VM after the file recovery you would need to follow these steps:

Offline the disk in the guest OS.

Detach the device from the ESX host.

Offline the Nimble volume.

Remove the entries from the static discovery, and rescan the host.

 

The other method I mentioned, method 2, involves in-guest iSCSI and can be performed with Nimble Connection Manager:

 

To recover a single file you can clone the data volume (.vmdk) from a snapshot and assign it the guest OS ACL (via an initiator group).

On the guest OS, use either NCM or the Microsoft iSCSI initiator to mount the new device.

In Disk Management, online the newly discovered disk.

The partition is now available (a hedged PowerShell sketch of these guest-side steps follows).
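For the guest-side part, a hedged PowerShell sketch using the in-box iSCSI and Storage cmdlets (Windows Server 2012 R2 or later; the discovery address is a placeholder) might look like this:

# Hedged sketch: discover the array, log in to the new target, then bring the disk online.
New-IscsiTargetPortal -TargetPortalAddress "<array discovery IP>"
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | Connect-IscsiTarget
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false
# If the disk shows up read-only (cluster/DAG reservations, see the KB referenced above):
# Set-Disk -Number <n> -IsReadOnly $false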

 

Please be aware that the script is not a supported tool; as a community tool it will be maintained by the public.

 

Feel free to add any feedback or thoughts about this script.

@Moshe_Blumberg



VVol restore, the fully automated way

Doing more with less – VVols and Simplified Storage Management

Nimble Storage Integration with VVols

NimbleOS 3.1 - Introduction To Folders

Talk VVol2.0 and VASA3 to me

Do and Don't - VVol

It has been a busy few weeks in Nimble engineering while polishing the next incremental update of our Docker Volume plug-in. We’re striving to expose all our core features through the Docker API to make end users self-sufficient for all their persistent storage needs. NLT-2.1, and the next release of NimbleOS which NLT-2.1 depends on, will ship in a few weeks.

 

QoS limits

NimbleOS has very sophisticated IO scheduling to reduce contention and “noisy neighbors”. This is sufficient for single-tenant workloads but does not work well for customers who want to make money off their investments, such as service providers. That's why NimbleOS 4.0 introduces QoS (Quality of Service) limits, which allow administrators to manually govern performance on Nimble volumes and folders (a folder is a logical grouping of volumes). This allows for very intricate policing of IO, where a tenant may purchase 5000 IOPS or 1Gbit of throughput assigned to a group of volumes and further pin IOPS and throughput within their family of volumes. The service provider may then resell different tiers of performance as well as capacity to meet the tenants' needs in the ever-emerging pay-only-for-what-you-need economy. Not having these governors in place gives tenant #1 a false sense of performance guarantee, and they will most likely start complaining when tenant #2 moves in on the same system.

 

We’re bringing these performance governors to the Docker API. Combining QoS limits with our current capabilities opens up a few interesting use cases.

 

Isolating Docker Swarm clusters to different performance tiers

For large deployments it makes a lot of sense to understand how much IO performance is available to work with and divide that among the different workloads. Production clusters may create ungoverned Docker Volumes but run the image registry on governed Nimble volumes, as pulling images should not impact transactional application workloads running off the Docker Volumes. Test/dev and QA clusters get their own set of default governors, allowing a single Nimble array to cater to a diverse population of IO needs.

 

In the example below there's roughly 20K IOPS to work with on an entry-level array running a synthetic transactional workload. The 'centos' folder is where the production cluster's Docker Volumes reside and the 'xenial' folder is the test environment. To ensure the production cluster gets all the juice, the test environment is throttled significantly, as the performance delivered there is not mission-critical.

centos.png

Fig. Folder 'centos' should be prioritized, doubling performance by throttling folder 'xenial'

 

xenial.png

Fig. Performance observed from the 'xenial' folder, stealing IO from the important 'centos' folder.

 

In this particular example, performance doubled for the mission-critical workload when filtering out unimportant Docker Volumes.

 

Clone production volumes with performance caps for test/stage/qa

In data-intense CI/CD build pipelines where tests run on clones of production, there's always a risk of negatively impacting production performance. What we enable is the ability to set limits on the volume during the clone operation, which can easily be integrated into the build pipeline when preparing tests.

 

admin@nimble:~$ docker volume create -d nimble --name prod

prod

admin@nimble:~$ docker volume create -d nimble --name testing -o limitIOPS=500 -o cloneOf=prod
testing

admin@nimble:~$ docker volume inspect testing | json 0.Status.LimitIOPS
500

 

Being able to fine-tune IO performance brings a whole new dimension to planning how data is accessed. Cramming more work onto an array becomes less risky, as it's possible to throttle undesired data access patterns and let important workloads roam freely on the available resources, while still using the default automatic QoS for fair balancing of IO between volumes.


Protection templates

Data without backups is not really data when disaster strikes. Local snapshots provide excellent RPO/RTO, and asynchronous remote replication provides DR capabilities along with long-term archiving. This bread-and-butter capability is now available as an option for Docker Volumes, where it's possible to associate a pre-defined protection template during volume creation.


admin@source:~$ docker volume create -d nimble --name important-data -o protectionTemplate=Retain-30Daily

important-data

admin@source:~$ docker volume inspect important-data | json 0.Status.VolumeCollection

{

  "Description": "Provides daily snapshots retained for 30 days",

  "Name": "important-data.docker",

  "Schedules": [

    {

      "Days": "all",

      "Repeat Until": "23:59",

      "Replicate To": "group-sjc-array869",

      "Snapshot Every": "1 minutes",

      "Snapshots To Retain": 30,

      "Starting At": "00:00",

      "Trigger": "regular"

    }

  ]

}

 

While this is pretty straightforward, some neat use cases open up when Docker nodes have access to the downstream Nimble group. Volumes are offline at the replica destination and can't directly be seen by the Docker Volume plug-in, but since we provide the ability to clone any Nimble volume in any state directly from the Docker API, that data is indirectly available on the downstream array for various uses.


Offload data validation and dev/test/qa to replicas

In CI/CD build pipelines with transactional workloads using load tests, it may not be practical to run those tests on the primary array; instead they can run during office hours on a replica set. Another good use would be data validation, such as Oracle 'dbv' or similar.

 

Plan and test DR strategies

Since data and applications are abstracted from each other using containers, it's quite convenient to build DR solutions around containerized applications. When disaster strikes, containers can be pre-pulled and built on off-site Docker hosts and brought online with replicas of the primary data. Testing these procedures can be done in an automated fashion where the only manual intervention at cutover would be redirecting front-end traffic. The fewer manual steps to perform during a live DR scenario, the better.

 

On the destination group (group-sjc-array869) in the example above there's a separate Docker cluster connected. Replica destinations are always offline, as mentioned, and they won't show up on the remote Docker Engines. But since we can figure out the actual Nimble volume name on the source, cloning on the destination becomes easy.

 

admin@source:~$ docker volume inspect important-data | json 0.Status.VolumeName

important-data.docker

 

Flipping over to a Docker Engine on the remote site:

admin@destination:~$ docker volume create -d nimble -o importVolAsClone=important-data.docker --name replica-data
replica-data

 

Comprehensive data protection capabilities are essential for any production workload, and persistent storage for containers is no exception. This is a first pass at bringing this functionality to Docker; more features and capabilities are in the works to simplify these workflows further.

 

Locally scoped volumes

Docker Swarm introduced the concept of a service: an abstraction layer that defines an application for scaling and availability. Docker images with volume definitions spit out anonymous local volumes, and attaching a named, globally scoped volume to a service will cause conflicts unless you use a fully cluster-aware filesystem (and application, for that matter). Having these anonymous volumes co-mingled with the Docker runtime might be undesirable, as it's difficult to govern the use of /var/lib/docker and it's best used only for images and runtime configuration data.

 

The default Nimble Docker Volume plug-in is a globally scoped driver, which allows any node in the Docker Swarm to attempt to mount a volume. In NLT-2.1 we introduce a 'nimble-local' driver which creates locally scoped volumes; these may share a name with volumes on other nodes in the Docker Swarm but are only accessible on the node where they were created.

 

The following example deploys Minio as a single-replica instance, and two global services are created: one service replicates to the nimble-local volume and one serves the data read-only. This sure copies a lot of data around, but it will be deduped on the Nimble array, and it yields very trivial horizontal scale and excellent resiliency. The neat thing about a global service is that it automatically scales as more nodes are added to the cluster, with Docker Volumes created on the fly.

 

version: "3"

services:

  # this is a single replica minio instance

  minio:

    image: minio/minio:latest

    volumes:

    - export:/export

    command: server /export

    environment:

      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minio}

      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-miniostorage}

    ports:

    - "9000:9000"

    deploy:

      replicas: 1

 

  # this is a global minio client service that mirrors the minio instance

  mirror:

    image: minio/mc:latest

    volumes:

    - mirror:/mirror

    entrypoint: /bin/ash

    command: -c "/go/bin/mc config host add minio http://minio:9000 ${MINIO_ACCESS_KEY:-minio} ${MINIO_SECRET_KEY:-miniostorage} && /go/bin/mc mirror --quiet --force --remove --watch minio/docker /mirror"

    deploy:

      mode: global

    depends_on:

    - minio

 

  # this runs a read-only nginx copy of the mirrored data as a global service

  web:

    image: thegillis/nginx-as-root-hack

    volumes:

    - mirror:/usr/share/nginx/html:ro

    ports:

    - "80:80"

    deploy:

      mode: global

    depends_on:

    - mirror

 

volumes:

  # this is the minio source volume

  export:

    driver: nimble

    driver_opts:

      size: 768

      description: "Minio export"

 

  # this is the nimble-local volume that is mounted on each swarm host

  mirror:

    driver: nimble-local

    driver_opts:

      size: 768
      description: "Minio client and nginx frontends"

Fig. An example Docker Compose v3 YAML file - stack.yml

 

Bringing this application up with a single command:

admin@nimble:~$ docker stack deploy -c stack.yml migx


Observing the stack in Docker UCP:

migx_stack.png

Fig. Displaying an application stack in Docker UCP

 

So, where are you in your containerization project and how can we help you overcome your persistent storage challenges?

In an ideal world we wouldn't need to worry about data protection and recovery, but as we all know, in this industry a restore is an operation we perform very often.

 

In this next blog I will cover data restore from a VVol VM point of view.

Many blogs have been written about VVol configurations, and even about recovery from a downstream array (see the references at the end of this blog), so I will try not to repeat information that has been covered before.

 

To protect a VVol VM, Nimble Storage recommends applying Storage Policy Based Management (SPBM) with the protection settings enabled as a rule.

vmug_mp4.jpeg

 

Such a storage policy, when set on the VVol VM (don't forget you must actually apply it), will create a volume collection and a protection schedule on the storage backend, taking snapshots at the interval you've specified.

In order to restore (roll the virtual machine back to a previous point in time) from a snapshot, a VI admin needs to follow these steps:

  1. Power off the VM
  2. Revert the volume snapshots on the array for all of the VVols associated with the VM (or selectively revert them; it's up to you) - see the REST sketch after this list.
  3. Power on the VM
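To make step 2 concrete, here is a minimal sketch of a single volume revert against the array's REST API, mirroring the calls the script further down makes. The array address, credentials, volume and snapshot IDs are placeholders, and the volume must have no connections (and, on newer NimbleOS releases, be offline) before the restore.

# Get a session token (placeholder credentials); the response contains data.session_token
curl -k -X POST https://10.66.32.26:5392/v1/tokens \
  -d '{"data":{"username":"admin","password":"mypasswd"}}'

# Revert one VVol to the common snapshot using the returned session token
curl -k -X POST https://10.66.32.26:5392/v1/volumes/<vol_id>/actions/restore \
  -H "X-Auth-Token: <session_token>" \
  -d '{"data":{"id":"<vol_id>","base_snap_id":"<snap_id>"}}'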

 

Now, imagine you have a VM with multiple VMDKs, say, five, and you need to restore it. You will need to power off the VM, find each of the Nimble volumes in the containing folder, and revert each one to the snapshot that you want; this can be time consuming.

 

I've written a little script that can help with the restore of such a VM; it's PowerShell and PowerCLI based.

 

Notes:

 

  • This is not a supported tool; as a community tool, it will be maintained by the public.
  • Please make sure that you run PowerShell in administrator mode.
  • Minimum PowerShell version needed: 3.0
  • Minimum PowerCLI version needed: 5.8
  • Please make sure that you allow scripts to run on this machine: Set-ExecutionPolicy should be set to RemoteSigned (the default is Restricted). To change it, issue the command "Set-ExecutionPolicy RemoteSigned".
  • You must have a common snapshot for all of the volumes identified and found by the script.
  • Understand that such an operation will restore a volume to a previous point in time.

 

 

OK, now that we've covered the fine print, let's have a look at the script.

This script will:

 

  • Check the versions of PowerShell and PowerCLI.
  • Collect the user name and password for the array and for vCenter.
  • Collect the name of the VM to recover; if more than one VM has the same name, it will ask for the ID (provided in a nicely formatted list).
  • Verify the VM selection from the list above.
  • Find the VMDK identifiers, find the volume identifiers on the Nimble array using API calls, and compare the two in order to find the correct volumes.
  • List the snapshots available.
  • Take a snapshot selection as input; note that this snapshot will be used for all volumes.
  • Power down the VM.
  • Wait for the ACLs to be removed and the volumes to be offlined and ready for a restore operation.
  • Restore the volumes.
  • Power on the VM.

 

Here is how the output looks:

In blue are the variables that you will be prompted to enter during the script.

 

 

 

> .\vvolrecovery.ps1

Found PowerShell version 4 which is supported, continuing.

Found PowerCLI version 6.0 which is supported, continuing.

FQDN or IP of the Nimble array: 10.66.32.26

User name: admin

Password for the Nimble array: mypasswd

FQDN or IP of vCenter: 10.66.32.19

Use administrator@vsphere.local for vCenter user? Default is yes. hit enter to continue, enter No for new user:

Using  administrator@vsphere.local as vCenter user

Password for vCenter: mypasswd2

Connecting to vCenter: 10.66.32.19

Connected to vCenter: 10.66.32.19.

 

Found the following VVol VMs

 

Name              Folder       PowerState Id

----              ------       ---------- --

emea-vm-win02     Windows 2008 PoweredOff VirtualMachine-vm-258

vvol-recovery-mb  vm            PoweredOn VirtualMachine-vm-2820

vvol-recovery-mb  Moshe         PoweredOn VirtualMachine-vm-2902

vvol-recovery-mb  vvol-VMs      PoweredOn VirtualMachine-vm-3066

emea-vm-mb0002    Moshe         PoweredOn VirtualMachine-vm-1122

emea-vm-mb0003    Moshe         PoweredOn VirtualMachine-vm-1124

emea-vm-mb001     Moshe         PoweredOn VirtualMachine-vm-1121

vvol-restore-mb   vm            PoweredOn VirtualMachine-vm-3065

 

Name of VVol VM to restore: vvol-recovery-mb

 

Found VM vvol-recovery-mb from input.

 

Found more than 1 VM with the same name, folder location must be different.

 

Name             Folder   PowerState Id

----             ------   ---------- --

vvol-recovery-mb vm        PoweredOn VirtualMachine-vm-2820

vvol-recovery-mb Moshe     PoweredOn VirtualMachine-vm-2902

vvol-recovery-mb vvol-VMs  PoweredOn VirtualMachine-vm-3066

 

What is the VM ID of the VVol VM (copy full ID )?: VirtualMachine-vm-2820

 

Found VM vvol-recovery-mb with ID VirtualMachine-vm-2820 in input.

 

Moving to next step.

 

Single SPBM to all VMDKs.

 

Getting VVol VM vvol-recovery-mb VMDK identifiers - config volume excluded.

 

Done.

 

Getting Nimble Volumes and Snapshots details.

 

@{startRow=0; endRow=161; totalRows=161; data=System.Object[]}

 

Found the following volumes for this VVol VM - config volume excluded.

 

Entity      Storage Policy Status    Time Of Check

------      -------------- ------    -------------

Hard disk 1 vvol-mb        compliant 07/02/2017 19:16:51

Hard disk 2 vvol-mb        compliant 07/02/2017 19:16:50

 

rfc4122.ad80972c-449c-48f7-9a62-9756e46de335

rfc4122.32064f50-00eb-4529-a5c6-a70ff584adf3

 

name                     size online app_uuid                                     id

----                     ---- ------ --------                                     --

vvol-recovery-mb.vmdk   40960   True rfc4122.32064f50-00eb-4529-a5c6-a70ff584adf3 062608d3b1ff22617c00000000000000000000011e

vvol-recovery-mb-1.vmdk 40960   True rfc4122.ad80972c-449c-48f7-9a62-9756e46de335 062608d3b1ff22617c00000000000000000000011f

 

Found the following snapshots for this VVol VM volumes

 

vol_name                name

--------                ----

vvol-recovery-mb.vmdk   vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-07::20:23:00.000

vvol-recovery-mb-1.vmdk vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-07::20:28:00.000

vvol-recovery-mb.vmdk   vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-07::20:28:00.000

 

 

############################################

 

Please copy paste full Common snapshot name: vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-07::20:28:00.000

 

Using vvol-recovery-mb-2608d3b1ff22617c-0001-minutely-2017-02-07::20:28:00.000 to restore all volumes

 

Checking power state of vvol-recovery-mb.

Stopping vvol-recovery-mb with ID VirtualMachine-vm-2820.

 

Done.

 

Sleeping for 5 seconds before checking if VM is ready for restore.

 

 

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e connection count 4, sleeping for 20 Seconds whilst waiting for volume to be ready.

Volume ID 062608d3b1ff22617c00000000000000000000011e ready with 0 connections.

Volume ID 062608d3b1ff22617c00000000000000000000011f ready with 0 connections.

 

version is  3.5.0.0-395934-opt , no need for offline operation.

 

Starting volumes restore.

Starting VM vvol-recovery-mb.

 

Done.

 

 

Here is the code itself (don't worry, it's also attached to the blog):

 

 

####################################################################

# Functions     # Functions     ## Functions     # Functions       #        

####################################################################

#  Please make sure that you run Powershell in administrator mode. #                            

#  Minimum Powershell needed - 3.0                                 #

#  Minimum PowerCLI needed - 5.8                                   #

#  Please Make sure that You allow scripts to run on this machine: #                                                              

#  Set-ExecutionPolicy Should be set to RemoteSigned, the default  #

#  is Restricted.                                                  #

#  To change, issue the command  "Set-ExecutionPolicy RemoteSigned"#                                                           

#                                                                  #

#                                                                  #

#                                                                  #

#                                                                  #

####################################################################

 

 

###########################

# Bypass SSL certificate validation (self-signed array certificate)

###########################

 

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

 

 

 

#################################################################

#Check PowerShell version is at least 3.0 (needed for ConvertTo-Json)

#################################################################

 

$Pshellversion = $PSVersionTable.PSVersion.Major

if ($Pshellversion -lt 3)

    {

    Write-Host "`nPowerShell version $Pshellversion is not supported, version 3 and above is needed.`n"

    Exit

    }

        else

        {

            Write-Host "`nFound PowerShell version $Pshellversion which is supported, continuing.`n"

        }

 

 

 

###########################

#Convert time

###########################

 

Function Convert-FromUnixdate ($UnixDate) {

   [timezone]::CurrentTimeZone.ToLocalTime(([datetime]'1/1/1970').`

   AddSeconds($UnixDate))

}

 

#################################

# Connect to Array and Get Token

#################################

 

 

$array    = Read-Host "`nFQDN or IP of the Nimble array"

if (!$array) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit

  }

$username = Read-Host "`nUser name"

if (!$username) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit

  }

$password = Read-Host "`nPassword for the Nimble array"  

if (!$password) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit

  }

$arrayapiport = ":5392"

 

try {

         $data = @{

             username = $username

             password = $password

         }

       

         $body = convertto-json (@{ data = $data })

         $uri = "https://" + $array + $arrayapiport + "/v1/tokens"

         $token = Invoke-RestMethod -Uri $uri -Method Post -Body $body

         $token = $token.data.session_token

    }

      catch

        {

          Write-Host "`nCan't connect to $array, check user name and password.`n"

          exit

        }

 

 

 

 

###########################

#Connect to vCenter

###########################

 

$vcenter    = Read-Host "`nFQDN or IP of vCenter"

if (!$vcenter) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit

  }

$answer = Read-Host "`nUse administrator@vsphere.local for vCenter user? Default is yes. hit enter to continue, enter No for new user"

while("","no" -notcontains $answer)

{

    $answer = Read-Host "`nEnter or No is expected"

}

if ($answer -eq "")

{

    $username1 = "administrator@vsphere.local"

    Write-Host "`nUsing  $username1 as vCenter user`n"

}

else {

    $username1  = Read-Host "`nUser name" 

        if (!$username1) {

         Write-Host "`nValue is empty, must enter value. Aborting.`n"

         Exit

         }

         Write-Host "`nUsing  $username1 as vCenter user`n"  

     }

 

 

$passwordVC = Read-Host "`nPassword for vCenter"

if (!$passwordVC) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit

  }

 

Add-PSSnapin VMware.VimAutomation.Core

Import-Module VMware.VimAutomation.Storage

 

 

 

Write-Host "`nConnecting to vCenter: $($vcenter)`n"

  try

    {

      $connection = Connect-VIServer $vcenter -User $username1 -Password $passwordVC -errorAction Stop

    }

  catch

    {

        Write-Host "`nFailed to connect to server.`n"

        Exit

    }

 

      Write-Host "`nConnected to vCenter: $($vcenter).`n"

 

######################################################

#Check PowerCLI version is greater than 5.8 for SPBM

######################################################

 

$pcliMajor = Get-PowerCLIVersion | select -expand Major

$pcliMinor = Get-PowerCLIVersion | select -expand Minor

if ($pcliMajor -le 4)

    {

        Write-Host "`nPowerCLI version $pcliMajor.$pcliMinor is not supported, version 5.8 and above is needed.`n"

        Exit

    }

if ($pcliMajor -eq 5)

    {

        if ($pcliMinor -lt 8)

            {

            Write-Host "`nPowerCLI version $pcliMajor.$pcliMinor is not supported, version 5.8 and above is needed.`n"

            Exit

            }

    }

Write-Host "`nFound PowerCLI version $pcliMajor.$pcliMinor which is supported, continuing.`n"

 

 

 

 

 

###########################

# VM Details

###########################

 

Write-Host "`nFound the following VVol VMs`n"

Get-Datastore | where {$_.type -eq "VVOL"}  | get-vm | Format-Table  -Property Name, Folder, PowerState, Id -autosize

 

$VM = Read-Host "`nName of VVol VM to restore"

if (!$VM) {

  Write-Host "`nValue is empty, must enter value. Aborting.`n"

  Exit }

 

try {

      $targetVM = Get-VM -Name $VM -errorAction Stop

    }

    catch

    {

      Write-Host "`nFailed to find VM $VM with get-vm command, aborting.`n"

      Exit 

    }

 

$Nameslist = Get-Datastore | where {$_.type -eq "VVOL"}  | get-vm | select -expand Name

            if ($Nameslist -contains $VM)

             {

             write-host "`nFound VM $VM from input.`n"

             }

              else

              {

                Write-Host "`nCouldn't Find VM $VM in the above list, aborting.`n"

                Exit    

              }

 

 

 

$foldersarray = @();

if ($targetVM.length -eq 1)

    {

        Write-Host "`nFound 1 VM with this name.`n"

        $ID = Get-VM -Name $VM | select -expand ID

    }

    else

    {

        Write-Host "`nFound more than 1 VM with the same name, folder location must be different.`n"

        $folders =  Get-VM -Name $VM | Format-Table  -Property Name, Folder, PowerState, Id -autosize

                $folders

                $ID = Read-Host -Prompt "`nWhat is the VM ID of the VVol VM (copy full ID )?"

    }

    if ($ID -eq "")

        {

         Write-Host "`nError, must enter the VM ID.`n"

         Exit

        }

          $IDslist = Get-VM -Name $VM  | select -expand ID

            if ($IDslist -contains $ID)

             {

             write-host "`nFound VM $VM with ID $ID in input.`n"

             $VMlist = Get-VM -Name $VM  | select -expand Name

              if ($VMlist -contains $VM)

              {

                  Write-Host "`nMoving to next step.`n"

              }

              else

              {

                Write-Host "`nCouldn't Find VM $VM with ID $ID on $vcenter.`n"    

              }

             }

            else

            {

             write-host "`n Didn't find VM $VM with ID $ID in initial list, make sure there is no typo.`n"

             Exit

            }

          

 

$VMgetView = get-vm -Server $vcenter -Id $ID | get-view

$tVM = get-vm -Server $vcenter -Id $ID

$hostVM = get-vm -Server $vcenter -Id $ID | Get-VMHost | select Name

 

$spbm = get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |  Where-Object {$_.Id -LIKE "$ID*"}

$spbm

$countspbm = $spbm | sort StoragePolicy -Unique | measure

  if ($countspbm.Count -ne 1)

      {

        write-host "`nMultiple SPBMs not supported, found multiple SPBMs.`n"

        Exit

      }

        else

          {

            write-host "`nSingle SPBM to all VMDKs.`n"

          }

 

 

###########################

# VM VVol VMDKs identifiers

###########################

 

$VMappUuids = $VMgetView.config.hardware.device.Backing.backingObjectId

Write-Host "`nGetting VVol VM $VM VMDK identifiers - config volume excluded.`n"

$VMappUuids | sort-object name

Write-Host "`nDone.`n"

 

#############################

# Nimble Array Related Volumes

#############################

 

Write-Host "`nGetting Nimble Volumes and Snapshots details.`n"

  $header = @{ "X-Auth-Token" = $token }

  $uri = "https://" + $array + $arrayapiport + "/v1/volumes"

  $volume_list = Invoke-RestMethod -Uri $uri -Method Get -Header $header

  Write-Host $volume_list

  $voldata = @();

  foreach ($volume_id in $volume_list.data.id){

 

      $uri = "https://" + $array + $arrayapiport + "/v1/volumes/" + $volume_id

      $volume = Invoke-RestMethod  -Uri $uri -Method Get -Header $header

 

            foreach ($VMappUuid in $VMappUuids){

                if ($volume.data.app_uuid -LIKE $VMappUuid)  

              {

                  $voldata += $volume.data

              }

          }

        }

 

    Write-Host "`nFound the following volumes for this VVol VM - config volume excluded.`n"

    $voldata | sort-object name | select name,size,online,app_uuid,id| format-table -autosize

 

###########################

# Volumes snapshots

###########################

 

        $nssnapshots = @()

        foreach ($line in $voldata.id)

            {

                $uri = "https://" + $array + $arrayapiport + "/v1/snapshots/detail/?vol_id=$line"

                $nssnapshots += Invoke-RestMethod -Uri $uri -Method Get -Header $header | select -ExpandProperty data

                $nssnapshots_list = $nssnapshots | select vol_name,name, creation_time, new_data_compressed_bytes, id, vol_id  | sort-object new_data_compressed_bytes, vol_name, name -descending

 

                    foreach ($el in $nssnapshots_list)

                    {

                        $el.creation_time = Convert-FromUnixdate($el.creation_time)

                    }

                }

                    Write-Host "`nFound the following snapshots for this VVol VM volumes`n"

                    $nssnapshots_list | sort-object creation_time |  select vol_name,name | format-table -autosize

 

 

Write-Host "############################################"

 

$snapshotselection = Read-Host -Prompt "`nPlease copy paste full Common snapshot name"

if (!$snapshotselection) {

  Write-Host "Value is empty, must enter value. Try again."

  $snapshotselection = Read-Host -Prompt "`nPlease copy paste full Common snapshot name"

  if (!$snapshotselection) {

  Write-Host "Value is empty, must enter value. Aborting."

  Exit

  }

}

Write-Host "`nUsing $snapshotselection to restore all volumes`n"

 

 

 

#############################

# Power Off function

#############################

 

function powerOffVm($tVM)

    {

        Write-Host "`nChecking power state of $VM.`n"

        $VmStatus = $tVM.PowerState

        if ($VmStatus -ne "PoweredOff")

        {

            Write-Host "`nStopping $VM with ID $ID.`n"

            get-vm -Server $vcenter -Id $ID | stop-vm -Confirm:$false -RunAsync | Out-Null

            Sleep 5

        }

 

        else

        {

            do

        {

            $tVM =  get-vm -Server $vcenter -Id $ID

            $VmStatus = $tVM.PowerState

            sleep 2

        }

        until ($VmStatus -eq "PoweredOff")

        }  

        Write-Host "`nDone.`n"

    }

 

 

#############################

# Power on function

#############################

 

function powerOnVm($tVM)

    {

         Write-Host "`nStarting VM $VM.`n"

         $VmStatus = $tVM.PowerState

 

        if ($VmStatus -ne "PoweredOn")

        {

          get-vm -Server $vcenter -Id $ID |  Start-VM -Confirm:$false -RunAsync | Out-Null

            Sleep 5

        }

        else

        {

            do

        {

            $tVM =  get-vm -Server $vcenter -Id $ID

            $VmStatus = $tVM.PowerState

            Sleep 2

        }

        until ($VmStatus -eq "PoweredOn")

        }  

        Write-Host "`nDone.`n"

    }

 

 

powerOffVm (Get-VM -Server $vcenter -Id $ID)

 

 

Write-Host "`nSleeping for 5 seconds before checking if VM is ready for restore.`n"

Sleep 5

 

###########################

# Volumes Connection Checks

###########################

 

foreach ($line in $voldata.id)

    {

      $uri = "https://" + $array + $arrayapiport + "/v1/volumes/" + $line

      $numconnections = Invoke-RestMethod -Uri $uri -Method Get -Header $header | select -ExpandProperty Data

      $numconnections2 = $numconnections | select -expand num_connections

      while ($numconnections2 -ne 0)

        {

            Write-Host "`nVolume ID $line connection count $numconnections2, sleeping for 20 Seconds whilst waiting for volume to be ready.`n"

            Sleep 20

             $numconnections = Invoke-RestMethod -Uri $uri -Method Get -Header $header | select -ExpandProperty Data

             $numconnections2 = $numconnections | select -expand num_connections

        }

      if ($numconnections2 -eq 0)

        {

          Write-Host "`nVolume ID $line ready with $numconnections2 connections.`n"

        }

    }

 

 

 

###########################################################

# Offline volumes (required for the NimbleOS 4.0 release and later)

###########################################################

# Check NOS version #

 

 

$header = @{ "X-Auth-Token" = $token } 

$uri = "https://" + $array + $arrayapiport + "/v1/arrays/detail"

$result = Invoke-RestMethod -Uri $uri -Method Get -Header $header

$version = $result.data.version

$version = [int]$version.Substring(0, $version.IndexOf('.'))

 

if ($version -le 3)

{

   write-host "`nversion is "$result.data.version", no need for offline operation.`n"

}

else

{

  Write-Host "`nStarting Offline checks.`n"

  foreach ($line in $voldata.id)

    {

      $data = @{

               online = "false"

               }

                 $body = convertto-json (@{ data = $data })

                 $header = @{ "X-Auth-Token" = $token }

                 $uri = "https://" + $array + $arrayapiport + "/v1/volumes/" + $line

                 $result = Invoke-RestMethod -Uri $uri -Method PUT -Body $body -Header $header

                 $result = $result.data

                 $result | sort-object name | select name,size,online,app_uuid,id| format-table -autosize

                 sleep 3

    }

  }

 

 

###############################

# Restore volumes from snapshot

###############################

 

Write-Host "`nStarting volumes restore.`n"

$snapinfo = $nssnapshots_list | Where-Object {$_.name -eq $snapshotselection}

$snapinfoforloop = $snapinfo | select vol_id, id

 

foreach ($row in $snapinfoforloop)

{

$data = @{

       id = $row.vol_id

       base_snap_id = $row.id

   }

   $body = convertto-json (@{ data = $data })

   $header = @{ "X-Auth-Token" = $token }

   $uri = "https://" + $array + $arrayapiport + "/v1/volumes/" + $row.vol_id + "/actions/restore"

   $result = Invoke-RestMethod -Uri $uri -Method Post -Body $body -Header $header

   $result = $result.data

   $result

   sleep 3

}

 

Sleep 15

powerOnVm (Get-VM -Server $vcenter -Id $ID)

 

 

 

 

 

 

 

Feel free to add any feedback or thoughts about this script.

@Moshe_Blumberg

 

 

Doing more with less – VVols and Simplified Storage Management

Nimble Storage Integration with VVols

NimbleOS 3.1 - Introduction To Folders

Talk VVol2.0 and VASA3 to me

A core strength of Nimble Storage is the simplicity of management. Another strength is the ability to quickly create space-efficient copies of volumes (clones). While the creation of clones has always been super simple, the end-to-end workflow of creating clones, provisioning them to the host and bringing up application instances to use those clones hasn’t been very easy. Specifically, this is a lengthy manual process in a Windows environment that involves dealing with Windows failover clustering, cleaning up various metadata on cloned disks using tools such as diskpart, assigning new disk signatures (which at times requires mounting the disks to yet another Windows server) and so on.

 

In Nimble Windows Toolkit (aka NWT) 4.0, a great effort has been made to simplify this process and make it “Nimble”. Let us take a look at the new PowerShell cmdlets in NWT 4.0.

 

Prior to the 4.0 release, NWT offered a set of credential management cmdlets (Set-NWTConfiguration, Get-NWTConfiguration and Clear-NWTConfiguration), along with a couple of cmdlets to automate diskpart operations on Nimble volumes (Set-NimVolume) and discovery of Nimble volumes mapped to a host (Get-NimVolume).

 

In NWT 4.0, we have added a bunch of cmdlets to significantly simplify the cloning workflow.

 

Invoke-CloneNimVolume: Create clones of one or more Nimble volumes and attach them to the same host from where the cmdlet is invoked.

 

This cmdlet takes care of:

  • Creating clones
  • Connecting the cloned volumes to the host (or all nodes of a Windows failover cluster)
  • Re-signaturing cloned disks and clearing various metadata information (VSS, VDS and what not!)
  • Mounting the cloned disks using various methods
    • Specified list of drive letters
    • Mount points
    • Auto assign drive letters
    • No drive letter or mount point
  • Adding cloned disks to a Windows failover cluster either as “Available Storage” or as Cluster Shared Volumes (CSVs).

 

Invoke-CloneNimVolume also supports creating clones of Nimble volumes without connecting the clones to the host. The clones can then be connected to another host using Connect-NimVolume cmdlet.

 

Invoke-CloneNimVolumeCollection: Clone volume or snapshot collections and attach to host

 

This cmdlet is similar to Invoke-CloneNimVolume cmdlet except that it works with a volume collection or snapshot collection as input instead of a list of volumes or snapshots.

 

Connect-NimVolume: Connect a volume to the host or cluster

 

This cmdlet connects a Nimble volume to a host and supports all of the post-connect functionality of re-signaturing the disks, mounting the volumes and provisioning them in a cluster.

 

 

Disconnect-NimVolume and Remove-NimVolume

 

These cmdlets handle the cleanup tasks. The Disconnect-NimVolume cmdlet will gracefully disconnect the volume from the host by removing the disk from the cluster, removing access paths and taking the Windows disks offline. The volume is not deleted from the Nimble array and can subsequently be connected again using the Connect-NimVolume cmdlet.

 

The Remove-NimVolume cmdlet goes a step further and also deletes the volume from the Nimble array after disconnecting it from the host. If the volume being deleted is a clone, it also provides an option to clean up the base snapshot.

 

Putting it together

 

With the above three cmdlets, we can greatly simplify the typical dev-test workflow:

  1. Clone the production data (Invoke-CloneNimVolume or Invoke-CloneNimVolumeCollection)
  2. Connect it to a test server (Connect-NimVolume). (If you just want to create clones on the production server itself, perhaps for recovery, that can be done in the first step.)
  3. After the testing is done, you can either:
    1. Disconnect the cloned volume, but leave it around (Disconnect-NimVolume), OR,
    2. Clean up the clone and its base snapshot (Remove-NimVolume)

 

Wait, there is more cool stuff!

 

What’s in a snapshot?

 

If you are performing application consistent snapshots of your SQL or Exchange environment, you may have a requirement to clone from a specific snapshot. But how do you find out what’s in a snapshot? NWT 4.0 cmdlets to the rescue again! These cmdlets will tell you what application treasure is hidden in a snapshot.

 

The Get-NimSnapshot and Get-NimSnapshotCollection cmdlets not only allow simple listing of snapshots and snapshot collections, but you can now use these cmdlets to run queries such as:

  • Show me all snapshots for my SQL environment
  • Show me all snapshots that have a consistent copy of database “Payroll”

With these cmdlets, the dev-test workflow is simplified even further. If you want to clone the production database “Payroll” and provision the clone in a test environment, you no longer need to work out which Nimble volumes to clone. Instead, you can find the appropriate Nimble snapshot to use for cloning with the Get-NimSnapshot or Get-NimSnapshotCollection cmdlet and pipe that output to the clone cmdlet!

 

In a NutShell

Here is a simple script to clone a SQL database “Payroll” from its latest application consistent snapshot. The clone is attached to the same server.

 

Get latest application consistent snapshot of “Payroll” database and clone/attach the storage back to the server:


PS C:\> Get-NimSnapshotCollection -GroupMgmtIP 10.18.236.77 -AppObject Payroll -MaxObjects 1 | Invoke-CloneNimVolumeCollection -AssignDriveLetter


That’s just one line! The cmdlet returns cloned object information, which can be used to attach the database. In this case, the cloned volumes are mounted as H: and I:.

Attach cloned SQL DB:


Import-Module "sqlps" -DisableNameChecking

$dbfiles = New-Object System.Collections.Specialized.StringCollection

$dbfiles.Add("H:\db\payroll.mdf")

$dbfiles.Add("I:\log\payroll.ldf")

$srv = new-object Microsoft.SqlServer.Management.Smo.Server("(local)")

$srv.AttachDatabase("Payroll-clone", $dbfiles)

 

We hope you like these new cmdlets built into NWT 4.0. We are eager to hear your thoughts on how we can improve this further, so please let us know your feedback!

In typical multi-tier applications, there are challenges performing tests with high-quality data. Either data is copied out of production, mocked, stubbed, or the tests run on empty datasets. In the former case, copying the data might have a performance impact on the production environment, so teams may resort to backed-up data instead; and the further data moves from the production environment, the more its quality degrades for testing. In many cases the backups may not be available to run tests on for CI/CD pipelines, as they are just that: backups. In the latter case, the effectiveness of the tests degrades, as running tests on stubs or mocked datasets might not reveal problems present on a fully populated production system.

 

In today’s highly efficient CI/CD systems, code changes are integrated, tested and deployed to production multiple times per day. In order to achieve high quality and confidence, tests need to be accurate and performed often. Problems need to be discovered fast and mitigated even faster. Discovering issues in production needs to be avoided at all cost. Bugs will always be introduced, simply because new features need to be added continuously to stay competitive and relevant. Recovering gracefully and correcting errors and issues faster than anyone notices them could very well be a business differentiator that allows small teams to iterate fast and deliver a high-quality experience to their customers.

 

Containers are extremely efficient at improving quality of software delivery due to their ability to package all the application runtime in a format that runs verbatim on any platform. Transporting stateful persistent data in container images is not very practical as the container is ephemeral in nature. This means that data needs to be accessible independently to containers across multiple environments to serve production, staging and development. There also needs to be a secure and clear separation between these environments to ensure they’re capable of running autonomously without cross dependencies.

 

Organizations handling sensitive data that is regulated or restricted in some way face even greater challenges performing tests on accurate datasets. This could be credit card details, medical records, social security numbers or confidential financial records. Making these datasets available for testing requires intermediary steps: scrambling, discarding or masking the sensitive parts so they cannot be accessed by developers or test teams.

 

Solution

In the following scenario, we’ll discuss an artificial build pipeline that uses Git, Jenkins, Ansible and Docker to build, ship and run a containerized Python application accessing an 850GB containerized MySQL database. Docker Datacenter is the centerpiece of the container solution, utilizing the Nimble Storage Docker Volume plug-in to clone the production database from a production cluster to a development cluster where the application will be built and tested. The application will then be deployed to a staging area, where it will be kept running after successful tests, and in the final phase it will be deployed to production.


A ten-minute narrated screencast is available on YouTube that glances over the details outlined below.

cicd-firstframe-light-web.png

Fig. The TL;DR version

 

Environment

The pattern being assumed is that dev, stage and prod are isolated islands without any cross-dependencies. Each environment is a nine-node Docker Datacenter cluster, but depending on the level of sophistication, this could easily fit onto one Docker Datacenter cluster by using teams and labels for your resources and creating nodes with label affinity towards dev, stage and prod. A partial goal of this exercise is to demonstrate the capability of application multi-tenancy and the ability to isolate Docker Volumes to certain clusters being served from the same Nimble array.

 

Docker Datacenter advantages

Having a secure and reliable platform for any container orchestration is paramount to allow the right abstractions. Docker Datacenter provides Active Directory/LDAP integration, central syslog support, a trusted registry and, foremost, the Universal Control Plane (UCP). UCP allows users to access the Docker environment without having access to the nodes themselves. Through object labeling it’s also possible to achieve role-based access controls (RBAC) for users and teams. Developers use the local native Docker client on their laptops and remotely build, ship and run their applications, including external resources such as networks and volumes. Docker Datacenter and the Docker CS Engine are used exclusively throughout this example.

 

Nimble Storage advantages and nomenclature

Nimble provides the capabilities to manage thousands of volumes and over one hundred thousand snapshots per group of arrays. While it may not be practical to expose that many volumes to the Docker environment, the Docker plug-in may be scoped into different folders and pools, which allows for application multi-tenancy. Metaphorically, a pool is like a hard drive and a folder is a directory; the files would correspond to volumes you may expose over a block target protocol. The system administrator then has the means to lock the Nimble Storage Docker Volume plug-in into a certain pool or folder. This enables having multiple Docker environments for different purposes or tenants.
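For example, the plug-in exposes pool and folder as volume create options (the full option list appears in the NLT post further down); the pool, folder and volume names below are just examples:

# Scope a new Docker Volume to a specific pool and folder on the array
docker volume create --driver nimble \
  -o size=100 -o pool=default -o folder=docker-prod \
  -o description="Production MySQL data" populous-db-prod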

 

docker-cicd-eng-revB-arch.png

Fig. Architecture Overview

 

In this exercise, three different environments are being used, separating development/build, staging and production. It’s also possible to clone and move resources around in the folders directly from the Docker interface, which yields the capability of cloning production data to the build environment and later importing it to the staging environment.

 

Infrastructure cluster

The infrastructure used in this exercise features a plethora of standard tools used for various tasks. Most of the applications use standard docker images from Docker Hub, some with very little modifications. Apps requiring persistent storage are being served by Nimble Docker Volumes off the same Nimble array serving the CI/CD pipeline. In no particular order:

 

  • Git - Version control system used for the skeleton application.
  • Jenkins - Continuous integration and continuous delivery/deployment application framework orchestrating the build, ship and run aspect of the entire workflow.
  • Ansible - Application and infrastructure management used to define the build, ship and run steps for the entire pipeline.
  • InfluxDB - Time-series database used to track application metrics to measure performance.
  • Grafana -  Data visualization of the skeleton application.
  • Docker Registry - Insecure local registry used for the shipping steps (Docker Trusted Registry is encouraged for production deployments).
  • generatedata - Used to randomly generate 850GB of dummy MySQL data. Hosted locally for performance reasons.
  • nginx - Webserver used as a reverse HTTP proxy for all the web applications (uses a custom image, no persistent storage).

 

The dev, stage and prod clusters

In the lab setup, all three environments are identical, best described as nine-node Docker Datacenter clusters built on top of KVM virtual machines running CentOS 7.2 on CentOS 7.2. The Jenkins application has its own separate credentials for all three environments when deploying applications.

 

The skeleton application: Populous

There are no good “Hello World” applications to use for data management at scale, so a custom Python application was simply made up to fit this exercise, where the amount of data is the most critical point of the demonstration.

 

The application consists of two container images:

  • app - Gunicorn Python WSGI serving a custom application using the minimalist Falcon REST server framework. Exposes a number of REST resources used to populate the database, gather application performance metrics and generate a 64KB BLOB used to speed up filling of the database.
  • db - Uses the stock Docker Hub MySQL image with a custom initialization statement. The database itself is a single table with a few columns and best described with the create statement. The database is roughly 850GB with 13 million rows.

 

CREATE TABLE main (

      id int unsigned NOT NULL auto_increment,

      guid varchar(36) NOT NULL,

      pid bigint default NULL,

      street varchar(255) default NULL,

      zip varchar(10) default NULL,

      city TEXT default NULL,

      email varchar(255) default NULL,

      name varchar(255) default NULL,

      imprint longblob,

      PRIMARY KEY (id)

) AUTO_INCREMENT=10000000;

ALTER TABLE main ADD INDEX guid (guid);

Fig. Database create statement

 

The pipeline

Jenkins is installed from the stock Docker Hub image with the default set of plug-ins. A few custom layers were added, such as Ansible and Docker, to allow building the application. The only custom plug-ins used are the Ansible and ANSI-color output plug-ins (Ansible produces colored logs which are easy to read). The Jenkins job is fairly simple. It has a build hook used by a git post-receive hook which essentially kicks off the build after each successful push to master.

jenkins.png

Fig. Jenkins job overview

 

Three separate build steps are defined which essentially run the same Ansible playbook against each of the environments: dev, stage and prod. Different roles are honored depending on the target environment. A cheap Ansible inventory trick is used to execute against each of these environments: ‘localhost’ is where the playbook is executed, and the Docker commands only care about certain environment variables that point them at the specific environment’s Docker Datacenter.

 

# Populous Docker Datacenter environments

dev ansible_host=localhost ansible_connection=local ucp_host=tme-lnx1-dev.lab.nimblestorage.com

stage ansible_host=localhost ansible_connection=local ucp_host=tme-lnx1-stage.lab.nimblestorage.com

prod ansible_host=localhost ansible_connection=local ucp_host=tme-lnx1-prod.lab.nimblestorage.com

Fig. Ansible inventory configuration
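In other words, the included util_docker_env.yml presumably just maps ucp_host onto the standard Docker client environment variables for each play. Done by hand for the dev environment, it would look roughly like this (the port and certificate path are my assumptions, based on a typical UCP client bundle):

# Target the dev Docker Datacenter from any shell
export DOCKER_HOST=tcp://tme-lnx1-dev.lab.nimblestorage.com:443
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/ucp-bundles/dev   # certs from a UCP client bundle
docker info                                 # subsequent docker commands hit the dev cluster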

 

The ‘populous.yml’ playbook and roles live with the source code of the application, and therefore all of the build processes and tests are version controlled and potentially peer reviewed. From a high level, the steps to build, ship and run the application from source code to production encompass these three steps:

 

$ ansible-playbook --vault-password-file=$ANSIBLE_VAULT -l dev -e build_number=$BUILD_NUMBER populous.yml

$ ansible-playbook --vault-password-file=$ANSIBLE_VAULT -l stage -e build_number=$BUILD_NUMBER populous.yml

$ ansible-playbook --vault-password-file=$ANSIBLE_VAULT -l prod -e build_number=$BUILD_NUMBER populous.yml

Fig. Actual Ansible commands executed by Jenkins

 

As part of the host variables there is a "secrets.yml" file which is encrypted with Ansible Vault. This allows for safe keeping of the Docker UCP credentials for the Jenkins account. In Jenkins, we create a binding that exposes this secret file to the build workspace when the build job executes. The "secrets.yml" file is safely stored in git.
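Encrypting (and later editing) that file with Ansible Vault is a one-liner; the file path here is an assumption about the repository layout:

# Encrypt the UCP credentials before committing them to git
ansible-vault encrypt --vault-password-file=$ANSIBLE_VAULT host_vars/prod/secrets.yml

# Edit the encrypted file in place when credentials change
ansible-vault edit --vault-password-file=$ANSIBLE_VAULT host_vars/prod/secrets.yml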

 

---

- hosts: all

  tasks:

    - name: Ensure only one host is targeted

      fail: >

        msg="More than one host specified, use -l to limit to either dev, stage

        or prod"

      when: "{{ play_hosts|length }} != 1"

 

    - name: Determine build_number

      set_fact:

        build_number: 0

      when: build_number is undefined

 

    - name: Determine build_version

      set_fact:

        build_version: "{{ lookup('file', 'VERSION') }}"

 

    - name: Set build_string

      set_fact:

        build_string: "{{ build_version }}-{{ build_number }}"

 

- include: util_docker_env.yml

 

- hosts: dev

  environment: "{{ local_docker_env }}"

  roles:

    - build

    - ship

 

- hosts: stage

  environment: "{{ local_docker_env }}"

  roles:

    - destroy

 

- hosts: all

  environment: "{{ local_docker_env }}"

  roles:

    - run

 

- hosts: none

  environment: "{{ local_docker_env }}"

  roles:

    - mask

 

- hosts: all

  environment: "{{ local_docker_env }}"

  roles:

    - smoke

 

- hosts: dev

  environment: "{{ local_docker_env }}"

  roles:

    - destroy

Fig. populous.yml

 

Examining the build steps more closely, each of the phases conducts the steps outlined below. Assume as the status quo that the production application is up and running; at a high level, the “prod” database volume is cloned to the “dev” environment and, once built and tested on, gets imported (moved) to the “stage” environment, where the application continues to run until the next build.

docker-cicd-eng-revB-build.png

Dev

In the “dev” environment the application has a short lifespan; it builds, ships, runs and is tested. In the run phase, the Nimble Docker Volume plug-in clones the production volume. After that, the application is removed and the cloned volume is removed from Docker and off-lined on the array.

 

  • Build - Calls docker build on the app and db container’s Dockerfile with the current tags.
  • Ship - Tags and pushes Docker images to the Docker Registry.
  • Run - Creates a clone of the production volume. Runs docker service create to deploy the app as a global service and the db container as a single-instance service (see the sketch after this list).
  • Mask (optional) - This step is not performed here; please see the section below for discussion.
  • Smoke - Verifies a correct JSON response from the application and passes once the correct (current) version is returned.
  • Destroy - Issues a docker service rm and also removes the Docker Volume. This simply removes the volume from Docker control and offlines the volume on the array.
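As a rough sketch of what the run phase's service definitions could look like with the Docker 1.12 CLI (the service names, image tags, published port and mount paths are assumptions, and the plug-in's clone options are left out):

# Single-instance database service backed by the cloned Nimble Docker Volume
docker service create --name populous-db --replicas 1 \
  --mount type=volume,source=populous-db-clone,target=/var/lib/mysql,volume-driver=nimble \
  mysql:latest

# The app runs as a global service: one task per node in the cluster
docker service create --name populous-app --mode global \
  --publish 8080:8080 registry:5000/populous/app:1.0.1-28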

 

Stage

The “stage” environment’s purpose is mainly to provide a sandbox where the application will continue to run for exploratory and manual testing. Depending on the confidence placed in automated testing, some might pause the build pipeline here and only deploy to production after manual testing has been approved by human interaction.

 

  • Destroy - Issues docker service rm and permanently destroys the previous clone.
  • Run - Imports the offline “dev” cloned volume. Runs the app.
  • Smoke - Same tests performed as in the “dev” step.

 

Prod

In “prod”, only the “app” container is updated to demonstrate how disruption is minimized. The application service impact for continuous deployment is discussed in the next section.

 

  • Run (update) - Issues a docker service update and bulk updates the running images to the new version.
  • Smoke - Same tests performed as in the previous steps to ensure consistency.

 

Production impact analysis

Having a fully automated continuous integration, delivery and deployment method brings enormous gains to the software supply chain in terms of speed, agility and quality. If code is being pushed several times per day, is it reliable? What are the risks and what is the cost of updating a containerized application during business hours?

 

The Populous app is a global service that runs in Swarm mode. That means that each node in the cluster will run exactly one instance of the application. When an update occurs, containers are restarted with the new image in bulk, at a configurable parallelism. With the built-in load balancer in Docker Swarm, an outage will never occur, as the application will always be responding from a running container.
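The update itself is a single command, with the parallelism and delay between batches tunable; the image tag and service name below are examples matching the earlier sketch:

# Roll the app forward to the new build, a few tasks at a time
docker service update --update-parallelism 3 --update-delay 5s \
  --image registry:5000/populous/app:1.0.1-29 populous-app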

 

The following screenshot displays the container replacement process during the last “run” step in the “prod” environment.  Notice the versioning in the “Image” column.

ddc-update.png

Fig. View from Docker Datacenter UI while performing a docker service update

 

As mentioned previously, application response times are being measured by one of the REST calls. The time being measured is the time to retrieve a random row from the 13 million records, which represents our potential user's application response time. This is the REST response being retrieved every five seconds:

 

{

  "version": "1.0.1-28",

  "served_by": "e380bca895a7",

  "response_time_ms": 37.60504722595215

}

Fig. JSON output from the _ping REST call

 

The “served_by” key signifies which container served the request. In the below screenshot from the Grafana dashboard it can be observed that cutting over between containers has zero impact on the end user’s experience, and the previous cloning and importing steps do not impact production response times whatsoever. We also have evidence that the entire pipeline executes in about five minutes.

 

grafana-update.png

Fig. Grafana dashboard

 

A note on data masking

As an optional step, it’s quite trivial to insert transformation DML into the database in the “dev” step. This is useful for masking sensitive data that may reside in the database and should not be part of the “stage” environment, as it may be exposed to developers and users without any clearance to access such data. Doing this type of operation on a terabyte-sized transactional database is not practical from a CI/CD perspective; it’s purely left here as an example of what is possible in such a scenario. It would be completely feasible with nightly builds instead, where the impact wouldn’t leave developers waiting for results on their code push.
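Purely as an illustration, the DML could be as simple as overwriting the personally identifiable columns from the create statement above, executed through the db container (the container name, database name and credentials are placeholders):

# Mask PII in the cloned database before it reaches the stage environment
docker exec -i <db-container> \
  mysql -uroot -p"$MYSQL_ROOT_PASSWORD" populous \
  -e "UPDATE main SET email = CONCAT('user', id, '@example.com'),
      name = CONCAT('Masked User ', id), street = NULL, zip = NULL;"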

 

Comparison

For this particular database, a baseline copy using mysqldump <source> | mysql <destination> took roughly five hours. The build pipeline normally executes in four to five minutes, which is roughly a 60x improvement! This translates into more productive developers, as they get answers sooner with real data, and problems get addressed before they reach production and impact users. There is also potential risk in doing full database dumps, as they might have a negative impact on performance. Those operations may be scheduled for off-hours, but in today’s day and age all systems are expected to perform optimally around the clock.

 

Summary

Whether adopting CI/CD for data-intense or data-sensitive applications, using Nimble Storage arrays and Docker Datacenter ensures that containerized applications and their data are secure, reliable and available. It also provides all the right abstractions for the development teams. Regardless of the tools being used, Nimble Storage caters well to any standards-based CI/CD system that can interact with REST APIs or use our application-specific integrations such as the Docker Volume plug-in or the Oracle Application Data Manager. Improving quality in CI/CD build pipelines with real data, without disrupting the production environment, has never been easier. Please let us know below if you want to gain a 60x improvement in your software supply chain today!

Nimble Storage congratulates Veeam on the General Availability of the Veeam Availability Suite 9.5 today!   Nimble and Veeam have over a thousand shared customers using the Nimble Storage Solution for Veeam Availability Suite.  Veeam comprises our largest installed data protection solution, and with version 9.5, we expect this growth to accelerate. 

 

Nimble – the New Gen Choice for v9.5

Through a number of enhancements, Veeam Availability Suite 9.5 delivers better performance, improved VM restore, and expanded scalability. The feature we really like is the integration with Nimble snapshots. The Veeam integration with Nimble makes us one of only four vendors with which Veeam has integrated, and the only ‘new gen’ storage player.  It provides our shared data protection solution a unique competitive edge vs. other combinations of storage hardware and data availability software.

 

How the Integrated Solution Works

Here are key areas of integration between Nimble storage and the new Veeam v9.5 and related uses:

 

  1. Backup from Storage Snapshots – Veeam admins can now reduce the host system impact from backup activities by running scheduled backup jobs using the Nimble Storage array-based snapshots.
  2. Backup from Secondary array – Veeam admins can take snapshots of the data on the Primary array, replicate them to a Secondary storage array, and then back up from the Secondary to further reduce the IO load on the Primary array
  3. Veeam Explorer for Storage Snapshots – “VESS” allows the recovery of individual items or entire VMs quickly and efficiently from Nimble snapshots and replicated copies
  4. On-Demand Sandbox for Storage Snapshots – Veeam admins can now use storage snapshots on either primary or secondary storage systems to create complete isolated copies of the production environment in just a few clicks, for fast and easy Dev/Test or troubleshooting

 

Why You Should Care

This new version of Veeam along with Nimble Predictive Flash provides dramatically faster, more reliable data protection.  At the time of the original announcement, Veeam wrote a blog about the new integration which still provides a great take on the shared value: https://www.veeam.com/blog/integration-nimble-storage-veeam-availability-suite.html

 

Here’s a quick list of key facts about the Nimble-Veeam solution:

  • Enables Veeam instant VM recovery directly from the array snapshot
  • High performance array provides the IOPS to constantly run backup verification
  • More snapshots – 1,024 vs. 256 w/EMC & NetApp – for shorter backup windows, better RPOs
  • Managing Nimble storage natively using Veeam improves operator efficiency
  • Space-efficient protection is built into the Nimble Unified Flash Fabric
  • All Flash to Adaptive Flash replication comes at 1/3rd the cost
  • Nimble ‘Predictive Flash’ provides proactive monitoring with InfoSight
  • InfoSight resolves 9 out of 10 infrastructure issues

 

 

Where to go for more Information

Check out the Nimble Data Protection Solution page to learn more about how Nimble Storage delivers the benefits of primary storage, backup, and disaster recovery and how we extend these benefits through valuable third party alliances like this one with Veeam.

VMware vSphere 6.5 is Generally Available for download today!  Nimble Storage was there at VMworld in Barcelona when the new vSphere 6.5 was first announced.  Key highlights of the release include the new vCenter Server Appliance, Security upgrades and a new vSphere Integrated Containers capability.  For a ‘point release’, it brings a lot of new functionality and benefits to vSphere customers.

As far as being the key technology of our VMware Server Virtualization solution, there are a few areas I’d like to highlight for customers and resellers considering working with this new version.

 

Universal App Platform

vSphere touts itself as a universal app platform that supports a broad mix of workloads, both traditional and modern.  It traces this heritage from serving just Test/Dev to a whole array of collaboration, business apps and databases, and more recently challenging ones like Virtual Desktop Infrastructure (VDI).  Nimble espouses a similar concept of a singular yet broad data platform in the Unified Flash Fabric.

In a strictly technical sense, the Unified Flash Fabric is the ability to have All Flash and Adaptive Flash arrays together in a single consolidated architecture with common data services. But in practice, it means that this cluster of storage – that can be managed as a single logical system – can support a breadth of workloads, whether they require high performance or capacity-oriented storage.

And by combining recent enhancements in the vSphere architecture with the high performance of Nimble All Flash, customers can now go beyond VDI and take on newer, more demanding applications like Hadoop, Machine Learning, HPC and cloud native apps, but within this familiar, mature IT platform.

 

Integrated Features for Data Protection and Manageability

At the top of the list for many in the VMware community is the feature called vSphere Virtual Volumes or “VVols”.  VVols was a feature that came with vSphere 6.0, and it enabled integrated and more efficient provisioning of VMs within external storage arrays.  It manifested the promise of Storage Policy-Based Management (SPBM) to let VM admins more efficiently manage the data storage layer.  Recent enhancements to VVols in v6.5 include the ability to do Replication, though only two storage vendors have Day 0 support, and Nimble is one of them.  Other VVol related enhancements that Nimble solution customers will enjoy are:

  • Support for Oracle RAC on VVols
  • SPBM Components – create reusable groups of SPBM rules that can quickly be added to a policy
  • Public API and PowerCLI cmdlets for failover workflows

And we’re hearing that it’s these failover workflows that are going to be the ‘killer feature’ of the new solution.

 

vSphere integrated containers

Containers have become one of the new, hot topics of enterprise IT, thanks to their ability to empower developers with a lighter weight means to deliver applications. And recently Containers have been talked about as more than just the newest DevOps tool, but as part of a broader evolution within the data center.

VMware vSphere Integrated Containers deliver an enterprise infrastructure that provides the best of both worlds for Developers and VM operations teams. With the latest version of vSphere, Containers become just as easy to enable and manage as virtual machines, with no process or tool changes required. This helps customers transform their businesses through the adoption of Containers without re-architecting infrastructure.

Along with VMware Integrated Containers, Nimble is offering customer choice with a new Nimble Solution for Docker Containers.   With this Container solution, Nimble is coupling our proven, enterprise class storage platform with one of the most comprehensive Docker Volume plugins on the market.  Users of this solution will get the self-service capability that DevOps teams want along with the simplified operations that IT requires.

 

Predictive Flash – the New Requirement for VMware vSphere

Nimble has been traveling the globe with the VMworld shows this year, demonstrating how Predictive Flash is becoming the new requirement for serious VMware environments. It’s Nimble InfoSight that puts the ‘predictive’ in our Predictive Flash.  The Nimble Storage Predictive Flash platform lets organizations deploy a single IT platform for all virtualized workloads realizing absolute performance and the ability to scale without disruption. And as touched upon already, running a vSphere universal app platform on a Nimble unified flash fabric gets you the data storage scalability, performance, and manageability that still fits your budget.

 

See for Yourself

A recent recorded demo of the Nimble VVols integration is available on Youtube, and more information on the Nimble solution for VMware vSphere is available on the web.

As part of the Nimble Linux Toolkit 2.0 we're introducing a plug-in for Docker. Please be aware that the software is still in beta and that the examples outlined below are subject to change without notice. Docker is a widely popular container system used to build, ship and run applications anywhere. We announced the Nimble Docker Volume plug-in on our corporate blog, and my personal blog talks about the container paradigm shift and the impact of containers. This post will go into even greater technical detail about our implementation.

 

We plan to support Docker Datacenter, which includes the Docker CS (commercially supported) Engine suitable for production use. We're also ensuring support for the latest 'main' release train, as we appreciate that the tinkerers out there want to take advantage of all the new features. The examples below use Docker 1.12.1; differences between that version and the Docker CS Engine are discussed in the Availability Options section.
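
If you want to confirm which engine you're running before following along, a quick version check works on any recent Docker release:

$ docker version --format '{{.Server.Version}}'

1.12.1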

 

Nimble Linux Toolkit 2.0

NLT 2.0 is delivered as a multi-platform binary installer. We're currently targeting availability for the most popular Linux distributions at launch through InfoSight. NLT requires that the Nimble arrays are running NimbleOS 3.3 or later. Once NLT is installed, all you have to do is add your Nimble Storage array to your host and start the "Docker-Driver" service. NLT depends on a number of system utilities such as open-iscsi, multipathd and the sg3 utilities. The install process will be outlined in the documentation at release.
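
While the exact install procedure will be in the documentation, the system prerequisites are standard distribution packages. As a rough sketch (the package names below are the usual ones on RHEL/CentOS and Ubuntu; yours may differ):

On RHEL/CentOS:

# yum install -y iscsi-initiator-utils device-mapper-multipath sg3_utils

On Ubuntu:

# apt-get install -y open-iscsi multipath-tools sg3-utils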

 

Add your Nimble Storage array:

# nltadm --group --add --ip-address 192.168.10.64 --username admin --password admin

Done


Start the "Docker-Driver":

# nltadm --start Docker-Driver

Done

# nltadm --status

Service Name         Service Status

--------------------+--------------

Connection-Manager          RUNNING

Docker-Driver               RUNNING

 

For Docker to pick up the new volume plug-in, the Docker daemon needs to be restarted:

# systemctl restart docker

 

The state of NLT is saved across reboots, so once everything is installed and running there's no need to start anything manually after a reboot. Another important tidbit: no configuration is necessary on the array beyond making sure IP connectivity is established between the Docker host and the array, for both the management and data interfaces. Note that only iSCSI will be supported in the initial release.
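
If you want to sanity-check that plumbing yourself, the standard Linux tools apply. The management address below is the one from the earlier nltadm example; substitute your own. Once a volume has actually been attached, iscsiadm and multipath will show the sessions and DM devices:

# ping -c 1 192.168.10.64

# iscsiadm -m session

# multipath -ll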

 

Create your first Docker Volume on Nimble Storage

Our plug-in is packed with features. We've worked diligently to ensure the Docker admin gets the full Nimble experience while provisioning Docker Volumes. All of the volume option flags can be reviewed straight from the CLI. Let's have a look:

 

$ docker volume create --driver nimble -o help

Nimble Storage Docker Volume Driver: Create Help

Create or Clone a Nimble Storage backed Docker Volume or Import an existing Nimble Volume or Clone of a Snapshot into Docker.

 

Create options:

  -o sizeInGiB=X          X is the size of volume specified in GiB

  -o size=X               X is the size of volume specified in GiB (short form of sizeInGiB)

  -o fsOwner=X            X is the user id and group id that should own the root directory of the filesystem, in the form of [userId:groupId]

  -o fsMode=X             X is 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem

  -o description=X        X is the text to be added to volume description (optional)

  -o perfPolicy=X         X is the name of the performance policy (optional)

  -o pool=X               X is the name of pool in which to place the volume (optional)

  -o folder=X             X is the name of folder in which to place the volume (optional)

  -o encryption           indicates that the volume should be encrypted (optional, dedupe and encryption are mutually exclusive)

  -o thick                indicates that the volume should be thick provisioned (optional, dedupe and thick are mutually exclusive)

  -o dedupe               indicates that the volume should be deduplicated (optional, requires perfPolicy option to be set)

 

Clone options:

  -o cloneOf=X            X is the name of Docker Volume to create a clone of

 

Import Volume options:

  -o importVol=X          X is the name of the Nimble Volume to import

  -o pool=X               X is the name of the pool in which the volume to be imported resides (optional)

  -o folder=X             X is the name of the folder in which the volume to be imported resides (optional)

  -o forceImport          forces the import of the volume.  Note that this overwrites application metadata (optional)

 

Import Clone of Snapshot options:

  -o importCloneOfSnap=X  X is the name of the Nimble Volume and Nimble Snapshot to clone and import, in the form of [volName:snapName]

  -o pool=X               X is the name of the pool in which the volume to be imported resides (optional)

  -o folder=X             X is the name of the folder in which the volume to be imported resides (optional)

 

Performance Policies: Exchange 2003 data store, SQL Server Logs, Windows File Server, Other Workloads, Exchange 2007 data store,

                      Exchange 2010 data store, Exchange log, SQL Server, SQL Server 2012, SharePoint, Oracle OLTP, DockerDefault

 

In an effort not to overwhelm users with all these flags, we've picked a sensible set of defaults for all options under the "Create options" heading. The stock defaults may be overridden, if desired, in the plug-in configuration file /opt/NimbleStorage/etc/docker-driver.conf (requires an NLT restart). With those defaults in place, creating a new volume is as simple as:

 

$ docker volume create --driver nimble --name demo-vol1

demo-vol1

$ docker volume ls

DRIVER            VOLUME NAME

nimble            demo-vol1


The volume is now ready for use by a container. But before that, let's inspect our new volume:

$ docker volume inspect demo-vol1

[

    {

        "Name": "demo-vol1",

        "Driver": "nimble",

        "Mountpoint": "",

        "Status": {

            "ApplicationCategory": "Virtual Server",

            "Blocksize": 4096,

            "CachePinned": false,

            "CachingEnabled": true,

            "Connections": 0,

            "DedupeEnabled": false,

            "Description": "Docker knows this volume as demo-vol1.",

            "EncryptionCipher": "none",

            "Group": "nimgrp-dev2",

            "ID": "0633cfd76905aa2a500000000000000000000000a4",

            "LimitVolPercentOfSize": 100,

            "PerfPolicy": "DockerDefault",

            "Pool": "default",

            "Serial": "9e1b8cac258070856c9ce90039fd6536",

            "SnapUsageMiB": 0,

            "ThinlyProvisioned": true,

            "VolSizeMiB": 10240,

            "VolUsageMiB": 0,

            "VolumeCollection": "",

            "VolumeName": "demo-vol1.docker"

        },

        "Labels": {},

        "Scope": "global"

    }

]
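
The output above shows the stock defaults in action: a thin, 10 GiB volume using the DockerDefault performance policy. If a particular volume needs something different, the create options from the help output can be passed explicitly. For example (the volume name, size and policy here are purely illustrative):

$ docker volume create --driver nimble --name db-vol1 -o size=100 -o perfPolicy="SQL Server" -o description="Staging database volume"

db-vol1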


Now, let's run a container with the new volume:

$ docker run --rm -it -v demo-vol1:/data busybox sh

/ # df -h | grep /data

/dev/mapper/mpathb       10.0G     32.2M     10.0G   0% /data


An important aspect of the semantics is that volumes are not mounted on the Docker host unless a container requests it. When the container exits, the filesystem is unmounted, the DM device is torn down, and the Initiator Group is removed from the volume on the array; the same steps happen in reverse when Docker requests a mount.
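
You can watch this happen from the host with a quick experiment (the names are illustrative, and the multipath output will vary with your setup):

$ docker run -d --name demo-mount -v demo-vol1:/data busybox sleep 600

# multipath -ll

$ docker rm -f demo-mount

# multipath -ll

While demo-mount is running, the DM device backing demo-vol1 and its iSCSI paths are listed; once the container is removed, the device disappears again.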


Compose your first Application with Persistent Storage

The previous 'Hello World' example doesn't have many practical use cases other than making sure everything works. For this example, we'll use Docker Compose to deploy Drupal, a popular open source CMS built on the LAMP stack. Drupal is an interesting example because both the webserver and the database require persistent storage. The next example will then illustrate how we can clone a production instance and run it side by side.

 

Here's our base docker-compose.yml file:

version: "2"

services:

  web:

    image: drupal:8.1.8-apache

    ports:

    - "8080:80"

    volumes:

    - www-data:/var/www/html/sites/default

 

  db:

    image: mysql:5.7.14

    volumes:

    - mysql-data:/var/lib/mysql

    environment:

      MYSQL_DATABASE: drupaldb

      MYSQL_USER: db

      MYSQL_PASSWORD: secret

      MYSQL_ROOT_PASSWORD: donotuse

 

volumes:

  www-data:

    driver: nimble

    driver_opts:

      sizeInGiB: 10

      fsOwner: "33:33"

      perfPolicy: "Windows File Server"

      description: "This is my Drupal assets and configuration"

 

  mysql-data:

    driver: nimble

    driver_opts:

      sizeInGiB: 1

      fsOwner: "999:999"

      perfPolicy: "default"

      description: "This my MySQL database volume"

 

What we see here are two containers being deployed, web and db, and they have a volume each with slightly different characteristics. Let's go ahead and deploy:

$ docker-compose up

Creating volume "drupal_mysql-data" with nimble driver

Creating volume "drupal_www-data" with nimble driver

Creating drupal_web_1

Creating drupal_db_1

Attaching to drupal_db_1, drupal_web_1

...

 

There should now be a webserver listening on port 8080 on the host where the containers have been deployed. As we step through the setup wizard, we need to pay attention to the database setup screen. Docker Compose creates a private network between the webserver and the database, and each container can reach the other using the service names defined in the docker-compose.yml file.
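
In practice that means the database host in the wizard is simply db, the service name, together with the drupaldb / db / secret credentials from the compose file. If you'd like to verify name resolution from the web container first, something like this should do the trick and print the address the db name resolves to (the container name follows the compose project, here drupal):

$ docker exec drupal_web_1 getent hosts db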

drupal-setup.png

Drupal is now ready for use!

 

Clone your Application

As we are bringing premium data management features to Docker, I wanted to walk through a simple example of how to make a zero-copy clone of the example above.

 

We need a new Docker Compose file, so I created a new directory called 'clone' and made these few edits to docker-compose.yml:

version: "2"

services:

  web:

    image: drupal:8.1.8-apache

    ports:

    # If we intend to run the clone on the same host on the

    # default bridge, we need to change port mapping

    - "8081:80"

    volumes:

    # We don't need to change the volume name as

    # docker-compose prefixes the names for you

    - www-data:/var/www/html/sites/default

 

  db:

    image: mysql:5.7.14

    volumes:

    - mysql-data:/var/lib/mysql

    # Since the database is already set up, no need for

    # environment variables

 

volumes:

  www-data:

    driver: nimble

    driver_opts:

      # All parameters will be inherited by the clone, we only

      # need to specify the source volume

      cloneOf: drupal_www-data

      description: "This is my cloned Drupal assets and configuration"

 

  mysql-data:

    driver: nimble

    driver_opts:

      cloneOf: drupal_mysql-data

      description: "This my cloned MySQL database volume"

 

To bring up the cloned environment, simply do this:

$ docker-compose up

Creating volume "clone_mysql-data" with nimble driver

Creating volume "clone_www-data" with nimble driver

Creating clone_web_1

Creating clone_db_1

Attaching to clone_web_1, clone_db_1

...

 

The clone is now available on port 8081, completely separate from the production instance. Since YAML data structures are simple to manipulate programmatically, it would be trivial to incorporate cloning into an automated workflow and spin up clones on demand, giving every developer or designer a personal copy of the production instance.
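
Compose isn't required for this either; the same cloneOf option works straight from the docker CLI, so scripting a per-developer clone could be as simple as (the volume names are illustrative):

$ docker volume create --driver nimble -o cloneOf=drupal_www-data --name alice_www-data

$ docker volume create --driver nimble -o cloneOf=drupal_mysql-data --name alice_mysql-data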

 

More examples will be available at release, such as containerizing a non-containerized workload by cloning snapshots.

 

Availability Options

We will support Docker UCP and CS Engine, which are part of Docker Datacenter, to allow high availability for your stateful containers in a production environment. We also support the new SwarmKit in Docker 1.12.1. We're working closely with Docker to ensure we fully support Distributed Application Bundle (.dab) files once DAB files support volumes through Docker Compose. We do support the new mount syntax with 'docker service', so you may run your containers with the Nimble Storage Docker Volume plug-in without the Docker Compose component.
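
For reference, a 'docker service' invocation using the mount syntax with the plug-in looks roughly like this against Docker 1.12.1 (the service, image and volume names are placeholders):

$ docker service create --name web --mount type=volume,source=web-data,target=/usr/share/nginx/html,volume-driver=nimble nginx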

 

Conclusion

If you're faced with the challenge of running highly available, stateful containers with Docker and are realizing that storage is a non-trivial problem to solve, you should explore our solution; we genuinely believe we're solving a real problem. Our arrays integrate seamlessly into any infrastructure, whether a traditional IT shop, a DevOps shop, or via Direct Connect to your public cloud. InfoSight keeps your storage environment humming so your time can be spent on higher business objectives.

 

So, tell me, what are you containerizing today?