
App Integration


I recently covered our plans to release an HPE Nimble Storage FlexVolume driver for the Kubernetes Persistent Storage FlexVolume plugin. While we’re well on track to release this through InfoSight shortly, a chain of events has led us to open source the entire FlexVolume driver! Let’s explore that a little further and understand what it means - this is exciting!

 

Our FlexVolume driver is merely a translation layer: it rewrites FlexVolume API calls into Docker Volume API calls. We had intended to hardwire the driver to look only for Nimble Storage plugin sockets. Our friends over at 3PAR got wind of what we were working on and they too wanted to get on the Kubernetes bandwagon. It made sense. After the fact, lifting the hardwired code out into a JSON configuration file made it very easy to point the translator at any socket that speaks the Docker Volume API.

 

Let me introduce Dory, she’s the FlexVolume driver that speaks whale! This means that any legacy volume plugin that works with Docker (it also works with managed plugins, but Kubernetes does not generally recommend running anything but Docker 1.12.x) may now be used with Kubernetes 1.5, 1.6 and 1.7 and their OpenShift counterparts to provision Persistent Storage for your Kubernetes pods. Dory is open source, released under the Apache 2.0 license, and available on GitHub.

 

If you have a solution today with a Docker Volume plugin, you may use that plugin with Dory to provide Persistent Storage for Kubernetes.

Building and Enabling Dory

Let’s assume we have Kubernetes installed and a vendor’s Docker Volume plugin installed, configured and working. These are the simple steps to build, install and use Dory.

 

Building Dory requires Go and make to be installed on your host OS; please follow your Linux distribution’s instructions on how to install those tools before proceeding. There are also more detailed build instructions here.

 

Note: Substitute any reference to ‘myvendor’ with the name of the actual Docker Volume plugin you want to use. If you’re using Nimble Storage, ‘nimble’ is the correct string to use.

$ git clone https://github.com/hpe-storage/dory
$ cd dory
$ make gettools
$ make dory
$ sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor
$ sudo cp src/nimblestorage/cmd/dory/dory.json /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor.json
$ sudo cp bin/dory /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor

 

We now have the driver installed. Let’s look at the basic configuration:

# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor.json
{
    # Where to log API calls
    "logFilePath": "/var/log/myvendor.log”,

    # Be very verbose about the API calls?
    "logDebug": false,

    # Does the underlying driver understand kubernetes.io/<string> calls?
    "stripK8sFromOptions": true,

    # This is where our plugin API socket resides
    "dockerVolumePluginSocketPath": "/run/docker/plugins/myvendor.sock”,
 
    # Does the Docker Volume plugin support creation of volumes?
    "createVolumes": true
}

 

With the driver configured and ready to provision and mount volumes, we need to restart the kubelet node service.

 

If running Kubernetes:
$ sudo systemctl restart kubelet

 

If running OpenShift:
$ sudo systemctl restart atomic-openshift-node

 

If everything checks out, you should be able to inspect your log file for successful initialization:
Info : 2017/09/18 16:37:40 dory.go:52: [127775] entry  : Driver=myvendor Version=1.0.0-ae48ca4c Socket=/run/docker/plugins/myvendor.sock Overridden=true
Info : 2017/09/18 16:37:40 dory.go:55: [127775] request: init []
Info : 2017/09/18 16:37:40 dory.go:58: [127775] reply  : init []: {"status":"Success"}

 

Hello World from Dory

Now, let’s create some resources on our Kubernetes cluster. First, we need a Persistent Volume:
$ kubectl create -f - << EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv100
spec:
  capacity:
    storage: 20Gi          # This is the capacity we’ll claim against
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: dory/myvendor  # This is essentially dory~myvendor created above
    options:               # All options are vendor dependent
      name: mydockervol100 # This is the actual Docker volume name
      size: "20"           # This is also vendor dependent!
EOF

 

If you’re paying attention, you’ll notice that no actual volume is created in this step. The FlexVolume API is very basic, so we call the Docker Volume Create API during the FlexVolume mount phase instead.
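
As a quick sanity check (assuming no other Docker volumes exist on the node yet), listing the Docker volumes at this point should come back empty; compare this with the output further down once the pod has been scheduled and the mount has triggered the create:

$ docker volume ls
DRIVER              VOLUME NAME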

 

Now, let’s create a claim against the above volume:
$ kubectl create -f - << EOF
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc100
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
EOF

 

For a very basic application that requires some persistent storage and is easy to demo:
$ kubectl create -f - <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: minio
    image: minio/minio:latest
    args:
    - server
    - /export
    env:
    - name: MINIO_ACCESS_KEY
      value: minio
    - name: MINIO_SECRET_KEY
      value: doryspeakswhale
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: export
      mountPath: /export
  volumes:
    - name: export
      persistentVolumeClaim:
        claimName: pvc100
EOF

 

When the pod gets created and a mount request comes in, you should see the actual volume created:
$ docker volume ls
DRIVER              VOLUME NAME
nimble              mydockervol100

 

On the Kubernetes side it should now look something like this:
$ kubectl get pv,pvc,pod -o wide
NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM            STORAGECLASS   REASON    AGE
pv/pv100   20Gi       RWO           Retain          Bound     default/pvc100                            11m

NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/pvc100   Bound     pv100     20Gi       RWO                          11m

NAME                          READY     STATUS    RESTARTS   AGE       IP             NODE
po/mypod                      1/1       Running   0          11m       10.128.1.53    tme-lnx1-rhel7-stage.lab.nimblestorage.com

 

Within the cluster, you should see something like this on http://10.128.1.53:9000

Summary

What we can witness here is just the tip of the iceberg. We’re currently building a Kubernetes out-of-tree StorageClass provisioner to accompany the FlexVolume driver to truly allow dynamic provisioning. In a production scenario, users are not allowed to create Persistent Volume resources. Overlaying the Persistent Volume (PV) and Persistent Volume Claim (PVC) process with a StorageClass allows the end-user to call the StorageClass directly from the PVC, which in turn creates the PV. The StorageClass itself is defined by the cluster admin.
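
To give a rough feel for where that is heading, here is a hedged sketch of what such a flow might look like once the provisioner ships; the provisioner name and parameters below are hypothetical placeholders, not final product names:

$ kubectl create -f - << EOF
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myvendor-standard              # defined by the cluster admin
provisioner: dory.example.com/myvendor # hypothetical provisioner name
parameters:
  perfPolicy: default                  # vendor dependent, illustrative only
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc200
  annotations:
    volume.beta.kubernetes.io/storage-class: myvendor-standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
EOF

In that model the end-user only ever creates the PVC; the provisioner watches for it and creates the backing PV (and the underlying volume) on their behalf.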

 

It’s also worth mentioning that we are active collaborators on the Container Storage Interface (CSI) specification. CSI is an orchestration-agnostic storage management framework for containers that will eventually mature into “one interface to rule them all”. We see Dory and the Docker Volume API (which is proven) as a perfectly good example of how you would deploy persistent storage for containers into production today.

 

While we encourage everyone to kick the tires on Dory, this project is supported only through GitHub on a best effort basis. Nimble Support is not able to assist with any issues. A fully supported version of the FlexVolume driver for HPE Nimble Storage and 3PAR will be released through the official channels later this year.

 

Don't forget to check out the code at https://github.com/hpe-storage/dory

Earlier this year, Microsoft released SQL Server 2017 for public preview. This new version of SQL Server broke the traditional Microsoft mold by including support for Linux OS and containers (Microsoft, Linux, or macOS), while maintaining all of the advanced features of SQL Server 2016 (and adding new ones).

 

Now that SQL Server is a more platform-agnostic database, enterprises can leverage SQL Server for a broader range of DevOps scenarios. The following is an excellent blog post from Microsoft about the possible use cases for SQL Server 2017 in DevOps: SQL Server 2017 containers for DevOps scenarios | SQL Server Blog

 

I recently set up a Docker environment to test SQL Server 2017 for Linux in containers, and wanted to share how easy it was to duplicate data from one of my other SQL Server instances, running on Windows, to a SQL Server 2017 for Linux container, by leveraging our Docker Volume Plugin.

 

Building the Environment:

 

I began by setting up the Nimble Linux Toolkit (NLT), which includes the Nimble Docker Volume Plugin. NLT has several prerequisites for installation, and more information can be found in the Linux Integration Guide on InfoSight: Linux Integration Guide.

 

After I installed NLT, I made sure to configure the appropriate array information, and verify array connectivity. The Docker Volume Plugin leverages NLT for authentication. Also, make sure to enable the Docker service.

 

Configuring NLT

# Establish array connectivity

nltadm --group --add --ip-address 10.0.0.1 --username admin --password XXXXXX

nltadm --group --verify --ip-address 10.0.0.1

 

# Enable the Docker Service

nltadm --start docker

nltadm --enable docker

 

Next, I installed Docker on a CentOS 7.3 host. For detailed instructions on how to install Docker, refer to the following: Get Docker CE for CentOS | Docker Documentation

 

Docker-Compose and the latest ntfs-3g and ntfsprogs drivers came next. I installed the NTFS drivers because the database I wanted to clone to the Docker container was running on a Windows SQL Server instance. In order for the Docker host to be able to mount the NTFS file system, I required the appropriate drivers.

 

Installing Docker-Compose and NTFS Drivers

# Install Docker-Compose

yum install python

yum install python-pip

pip install docker-compose

 

# Install latest NTFS packages

yum --enablerepo=extras install epel-release

yum install ntfs-3g -y

yum install ntfsprogs -y

 

# Load Fuse module

modprobe fuse 

 

Now that I had my Docker environment ready to go, the final step in my setup was downloading the Microsoft SQL Server 2017 for Linux image.

 

Download the SQL Server Image

# Use docker pull for download

docker pull microsoft/mssql-server-linux

 

Creating the SQL Server 2017 on Linux Container and Importing Data:

 

Getting data (or a subset of data) from a production environment into a development environment is a common first step in many DevOps workflows. In my lab environment, the "production" database is running on a Windows version of SQL Server. Microsoft's recommended way of porting data from Windows to Linux is to do a full backup of the Windows SQL Server database, and then restore that full backup to SQL Server on Linux.

 

Migrate a SQL Server database from Windows to Linux | Microsoft Docs 
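
For reference, that backup-and-restore path boils down to something like the following sketch run through sqlcmd; the server name, database name, credentials and file paths here are hypothetical and purely illustrative:

# On the Windows instance: take a full backup of the production database
sqlcmd -S SQL03 -U sa -P XXXXXXXX -Q "BACKUP DATABASE [proddb] TO DISK = N'C:\backups\proddb.bak' WITH INIT"

# Copy the .bak file over to the container host, then restore it inside the Linux container
sqlcmd -S localhost,1401 -U sa -P XXXXXXXX -Q "RESTORE DATABASE [proddb] FROM DISK = N'/var/opt/mssql/backup/proddb.bak' WITH REPLACE"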

 

However, I wanted to see if we could use our zero-copy clones to duplicate the data for the Linux instance, save the time required for a long-running restore, and reduce the amount of space required (imagine two complete copies of a large database). So instead of doing a backup and restore, I leveraged our Docker Volume Plugin to clone and import the production database volumes into the SQL Server 2017 on Linux container.
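
Outside of Compose, the same import-as-clone operation can be expressed with a plain docker volume create call. This is just a sketch; the new volume name is an example, while the importVolAsClone and forceImport options are the same ones used in the Compose file below:

# Clone the production volume sql03-udb into a new Docker volume
docker volume create --driver nimble --opt importVolAsClone=sql03-udb --opt forceImport=true sql03-udb-clone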

 

Below is my Docker-Compose file with the appropriate options:

 

Docker-Compose.yml
volumes:
  'sql03-tl':
    driver: nimble
    driver_opts:
      importVolAsClone: sql03-tl
      forceImport: "true"
  'sql03-udb':
    driver: nimble
    driver_opts:
      importVolAsClone: sql03-udb
      forceImport: "true"
services:
  db:
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: XXXXXXXX
      MSSQL_PID: Developer
    image: microsoft/mssql-server-linux
    ports:
      - '1401:1433'
    volumes:
      - sql03-udb:/var/opt/mssql/sqlclone/udb
      - sql03-tl:/var/opt/mssql/sqlclone/tl
version: "3"

 

Now you may be wondering, "How do I get volume information from my Windows environment? I am a Developer, not a Storage Admin." I recently did a series of blog posts that cover how to leverage the new functionality in our Nimble Windows Toolkit for DevOps workflows. I cover how to get the appropriate information by using application metadata. 

 

NOS4 Use Case: Leveraging NWT4 cmdlets for SQL Server Reporting or Dev/Test Workflows 

NOS4 Use Case: Rapid Deployment of SQL Developer Containers with Nimble Storage 

 

After creating my Docker-Compose file, all that was left was for me to bring up the container. 

 

Console Output
[root@dockerhost1 virtdb1_dev41]# docker-compose up -d
Creating network "virtdb1dev41_default" with the default driver
Creating volume "virtdb1dev41_sql03-tl" with nimble driver
Creating volume "virtdb1dev41_sql03-udb" with nimble driver
Creating virtdb1dev41_db_1 ...
Creating virtdb1dev41_db_1 ... done
[root@dockerhost1 virtdb1_dev41]#

 

Seconds! That is all it took to bring up a new instance of SQL Server, complete with a cloned copy of my production database volumes. Our clones do not take up any space (only new writes are recorded), so I saved ~500GB of space that would otherwise have been required if I had restored the data to the SQL Server container.

 

I quickly connected to the instance with SSMS, and attached the database as "virtdb1_dev42." To make sure I could read and write to the cloned database, I also executed a stored procedure. Success!
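
If you prefer the command line to SSMS, the attach can also be scripted with sqlcmd against the container's mapped port. This is a sketch only; the data and log file names below are hypothetical and depend on how the source database was laid out:

# Attach the cloned data and log files (placeholder file names) as virtdb1_dev42
sqlcmd -S localhost,1401 -U sa -P XXXXXXXX -Q "CREATE DATABASE [virtdb1_dev42] ON (FILENAME = '/var/opt/mssql/sqlclone/udb/virtdb1.mdf'), (FILENAME = '/var/opt/mssql/sqlclone/tl/virtdb1_log.ldf') FOR ATTACH"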

 

[Screenshot: SSMS connected to the cloned database on the SQL Server 2017 container]

 

More Information:

 

For more information on our Docker Solutions or our Docker Volume Plugin for Linux, please check out some of our other blog posts:

 

Make It About Apps, Not Infrastructure 

Tech Preview of Nimble Linux Toolkit 2.0: Docker plug-in 

 

For more information on our SQL Solutions, refer to the following blog posts:

 

NOS4 Use Case: Leveraging NWT4 cmdlets for SQL Server Reporting or Dev/Test Workflows 

NOS4 Use Case: Rapid Deployment of SQL Developer Containers with Nimble Storage

 

Also, please feel free to reach out to your Account Teams if you would like to see any of our solutions in action!

With the increased use of zero-copy cloning, I thought it would be nice to provide a quick method of monitoring clones, their parent volumes, and their base snapshots, as they all depend on each other.

 

This is a very quick script, but the output is very powerful: it will tell you which volumes are clones (name), what the parent volume name is, and what the base snapshot is.

 

Here is the expected output:

 

PS C:\Users\Administrator\Desktop> .\ff.ps1

 

name                                    parent_vol_name          base_snap_name
----                                    ---------------          --------------
mb-vvol-mm-config-clone                 mb-vvol-mm               mb-vvol-mm-569e7fbad65b9423-0001-minutely-2016-09-28::12:57:00.000
mb-vvol-mm-clone.vmdk                   mb-vvol-mm.vmdk          mb-vvol-mm-569e7fbad65b9423-0001-minutely-2016-09-28::12:57:00.000
Clone-emea-vm-mb0002-v1.docker          emea-vm-mb0002-v1.docker BaseForemea-vm-mb0002-v1.docker:2016-10-13.17:33:58.808
Clone-Docker-volume1.docker             Docker-volume1.docker    BaseForDocker-volume1.docker:2016-10-13.17:39:32.999
importedSnap.docker                     Not-Docker-Volume        snapshot-Not-Docker-Volume
os-clone                                SRM-VM01                 os
SBVolSnap-Clone                         SB-Testvol1              SnapSB-Testvol1
VVolCloneForRecoveryOperationVVol-demo1 VVol-demo1-1.vmdk        VVol-demo1-569e7fbad65b9423-0002-minutely-2017-03-01::21:03:00.000

 

Here is the script:

 


###########################
# Enable HTTPS
###########################

 

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

 

###########################
# Get Token
###########################

 

$array = "<IP/FQDN>"
$username = "admin"
$password = "PASSWD"

 


$data = @{
    username = $username
    password = $password
}

 

$body = convertto-json (@{ data = $data })

 

$uri = "https://" + $array + ":5392/v1/tokens"
$token = Invoke-RestMethod -Uri $uri -Method Post -Body $body
$token = $token.data.session_token

 

$header = @{ "X-Auth-Token" = $token }
$uri = "https://" + $array + ":5392/v1/volumes"
$volume_list = Invoke-RestMethod -Uri $uri -Method Get -Header $header
$uri = "https://" + $array + ":5392/v1/volumes/detail?clone=true&fields=name,parent_vol_name,base_snap_name"
$volume = Invoke-RestMethod  -Uri $uri -Method Get -Header $header
$volume.data  | format-table -AutoSize

 

 

I've attached the script as a file should you need to download it.

 

Hope you find it useful.

Thanks,

@Moshe_Blumberg

Nimble VDI Solution

Posted by Bharath Ram, Aug 1, 2017

Earlier this spring we completed one of our Virtual Desktop reference architectures with Citrix XenDesktop and Nimble Storage.

The superior performance, InfoSight, VMVision, resiliency and redundancy features of Nimble Storage were demonstrated leveraging an AF5000 array.

 

The reference architecture goes through various test cases leveraging a LoginVSI load of 5000 XenDesktop users on a Nimble AF5000 All Flash array. The 5000 users comprise a mixture of persistent and non-persistent desktops and XenApp users. Superior performance with average sub-millisecond latency was seen throughout the test cases. There were also two test cases which demonstrated Nimble Storage's resiliency and redundancy during an active VDI workload.

 

The following use cases were validated:

  • 5000 knowledge worker users (2 vCPU, 2GB) workload
  • Boot storm of Virtual Desktops
  • Nimble Storage Controller fail-over under load 
  • 3 x simultaneous SSD failure under load
  • Transparent application migration between Adaptive and All flash arrays

 

For more details and the results of the use cases, please refer to the following whitepaper:

 

Citrix XenDesktop Reference Architecture

NimbleOS contains built-in VMware integration that allows NimbleOS snapshots to be synchronized with VMware Virtual Machine snapshots in the VMFS datastore environment.

 

There are a few factors that we have to take into account when working with synchronized snapshots and Nimble volume collections, such as:

Do I actually need to quiesce the guest OS to get a consistent snapshot?

How many volumes (datastores) in a single volume collection (Volcoll)?

How many VMs in each datastore and how many VMs in each volcoll?

 

Over the years here at Nimble Storage we've improved our protection engine and introduced specific safeguards that help when a volume collection is configured against best practice: for example, checking whether VMware Tools is installed before requesting the snapshot of a VM, checking the power on/off state of a VM, and staging the snapshots across hosts so that we don't overload the server and spike the CPU.

 

Some environments are not able to follow the best practice, and with that in mind, the latest feature for VMware Synchronized Snapshots on Nimble Storage was introduced in NimbleOS 4.2.

 

By default, the VMware synchronized snapshots provided by NimbleOS include all the VMs and datastores in a volume collection. Starting with NimbleOS 4.2.0.0 (and vCenter 6.0), you have the option of including or excluding specific VMs and datastores from the snapshot.

 

Using the VMware tags feature, you can now use a pre-defined text string as a tag name to act as a trigger for the Nimble array to either exclude or include the object in the snapshot request to vCenter.

 

This does not mean that your data is not protected: VMs or datastores that are excluded will still be part of the crash-consistent snapshot (array snapshot); it just means that we won't ask vCenter to quiesce the virtual machines.

 

If you're not familiar with VMware tags, feel free to read a bit about them in the following great blogs; I wouldn't want to duplicate the work done by these great bloggers:

VMware vSphere Tags and Categories - all you need to know about it 

Time to switch your vCenter attributes to tags - Gabes Virtual World 

 

Tags can be assigned to a VM or a datastore, and they work together if both objects are tagged. For example, if the datastore is tagged for exclusion but one VM out of the five inside it is tagged to be included in the backup operation, we will only take a VMware Synchronized Snapshot of that single VM.

 

How to use it?

 

Categories and tags can be created and assigned from the vCenter UI or from PowerCLI.

 

PowerCLI:

 1. Creating the tag category

     $npmSnapTagCategory = New-TagCategory -Name "NimbleVMwareSyncSnaps" -EntityType VirtualMachine,Datastore -Description "Category of tags for Nimble VMware Synchronized Snapshots"

 2. Creating the "Include" tag under the category created:

     $include_tag = New-Tag -Name "NimbleSnapInclude"  -Category NimbleVMwareSyncSnaps -Description "Tag for including VMs or datastores from Nimble VMware Synchronized Snapshots"

 3. Creating the "Exclude" tag under the category created:

     $exclude_tag = New-Tag -Name "NimbleSnapExclude"  -Category NimbleVMwareSyncSnaps -Description "Tag for excluding VMs or datastores from Nimble VMware Synchronized Snapshots"

 

Should look like this:

 

vCenter UI:

 

  1. From the Navigator pane in the VMware vSphere Web Client, select Tags & Custom Attributes.
  2. Select the Tags tab.
  3. Select Categories.
  4. Select the Create icon and perform the following tasks:
    • Create a category called NimbleVMwareSyncSnaps that has the One Tag per object attribute. This attribute ensures that only one tag from this category can be associated with a vSphere object at a time.
    • Under Associable Object Types, select Datastore and Virtual Machine. This selection ensures that these tags can be applied to only these objects.

 

   5. Select the Create icon.

   6. Create a tag called NimbleSnapExclude and associate it with the NimbleVMwareSyncSnaps category.

   7. Create a tag called NimbleSnapInclude and associate it with the NimbleVMwareSyncSnaps category.

 

Now that we have the tags ready, we can assign the NimbleSnapExclude and NimbleSnapInclude tags to VMs and datastores. The VMware tags feature allows you to explicitly designate VMs and datastores to be excluded or included when VMware takes a synchronized snapshot.

 

Side note: VMs and datastores that are excluded will still be backed up, just not using a VMware sync'd snapshot.

 

Assign a tag:

  1. Right click either the VM or datastore.
  2. Select Tags and Custom Attributes.
  3. Select the NimbleSnapExclude tag to exclude that VM or datastore from synchronized snapshots. Select the NimbleSnapInclude tag to include that object.
    Note: You cannot assign both tags to the same vSphere object.
  4. Repeat these steps until you have assigned a tag to all the objects that you either want to exclude or to explicitly include. Any object that does not have a tag is automatically included in all synchronized snapshots and their restore operations.
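
For completeness, the same assignments can be scripted with PowerCLI using New-TagAssignment. This is a minimal sketch that reuses the $exclude_tag and $include_tag variables created in the PowerCLI steps above; the datastore and VM names are just examples:

     # Exclude a datastore from VMware Synchronized Snapshots
     New-TagAssignment -Tag $exclude_tag -Entity (Get-Datastore "EMEA-VMDK-DS")

     # Explicitly include a single VM that must get an application consistent snapshot
     New-TagAssignment -Tag $include_tag -Entity (Get-VM "MyImportantVM")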

 

 

Example:

 

I have a volume collection called MyVMs; it includes 2 volumes, "EMEA-VMDK-DS" and "EMEA-Infrastructure-1 (Do Not Delete)".

I can't relocate some of my VMs out of these 2 dedicated datastores, but I must have an application-consistent snapshot of them, i.e. a VMware sync'd snapshot.

What I can now do with the new feature is tag both my datastores with the NimbleSnapExclude tag.

The VMs for which I must have an application-consistent snapshot can be tagged with the NimbleSnapInclude tag.

 

"EMEA-VMDK-DS" 

 

"EMEA-Infrastructure-1 (Do Not Delete)"

 

 

Assign the tag to both datastores (repeat for each):

 

Now that you've excluded the datastores, you can decide on the VMs that you want to protect with a sync'd snapshot.

 

 

The above is just one example of the use cases this new feature can address.

Hope you enjoy it; let us know what you think and whether you like it!

 

 

Reference:

Snapshot Exclusion and Inclusion Options for VMs and Datastores

Crash Consistent – The state of disks equivalent to what would be found following a catastrophic failure that abruptly shuts down the system. A restore from such a shadow copy set would be equivalent to a reboot following an abrupt shutdown.

Application Consistent – In general terms, a database application will respond to its VSS writer being triggered by flushing all of its memory and I/O operations so that the database is completely consistent. In doing so, there is nothing in memory and no pending I/O to be lost.  When the VSS snapshot is complete, it signals the VSS writers, which are then to resume normal operation of the attendant application while the backup software safely copies out of the snapshot.

 

 

 

Thanks,

@Moshe_Blumberg

No doubt there’s a ton of buzz around Kubernetes in the container space today. Kubernetes is a container platform introduced by Google, inspired by their internal Borg system, and has one of the largest and most active communities on GitHub. This powerful platform makes users and enterprises gape in awe at the sheer scale, feature set and simplicity of orchestrating container workloads.
Adopting Kubernetes as an organization can be challenging without partnering with a Kubernetes solution provider that packages and supports Kubernetes. It may not be the latest upstream release and it may not support all the latest capabilities, but at least there’s somewhere to turn for support, upgrade paths and consulting. One of the most prominent distributions is Red Hat OpenShift Container Platform, a revamp of their easy-to-use PaaS that has been built around Kubernetes since OpenShift version 3.
Nimble Storage has one of the most feature-rich Docker Volume plugins on the Docker Store, fully integrated with Docker Swarm, which gives Docker users a premium experience when it comes to persistent storage and data services. What if we could bring all of that to Kubernetes and OpenShift? Kubernetes 1.4 introduced the Flexvolume plugin framework, an extremely lightweight “shim” that allows third parties and users to provision storage to their containers however they want. We currently have a Flexvolume plugin in development that simply sits as a translation layer on top of the Nimble Storage Docker Volume plugin and provides the exact same features.
Let’s walk through the simplest pod:
---
apiVersion: v1
kind: Pod
metadata:
  name: mywebserverpod
spec:
  containers:
  - name: webserver
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    flexVolume:
      driver: hpe/nimble
      options:
        name: mydockervol
        description: "My web content"
        protectionTemplate: platinum
        pool: hybrid
        perfPolicy: "Windows File Server"
        sizeInGiB: "4000"
        limitMBPS: "250"
        limitIOPS: "750"
This pod manifest will deploy nginx with a persistent volume named “html”; inside the pod, that volume maps to a Flexvolume. Kubernetes is aware of the “hpe/nimble” plugin as it’s initialized when the kubelet (the Kubernetes node service) starts, and it will call the plugin with the option map described in the manifest. If there is already a volume named “mydockervol” on the node, it will be mounted in the pod; if not, it will be created on the fly according to the specification and then mounted.

Advanced Data Services

One of my pet use cases is the ability to clone an entire application stack for development, testing and integration purposes. How well does this translate into Kubernetes and OpenShift? Let’s explore a full blown example.
Assume the following manifest; it deploys a MySQL server and a web server running WordPress (a very popular content management system), along with all the plumbing needed to deploy a production application stack:
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql
data:
  password: WU9VUl9QQVNTV09SRA==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          flexVolume:
            driver: hpe/nimble
            options:
              name: mysqlvolZ
              description: "This is my WordPress MySQL DB"
              size: "16"
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
          - name: WORDPRESS_DB_HOST
            value: mysql:3306
          - name: WORDPRESS_DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mysql
                key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          flexVolume:
            driver: hpe/nimble
            options:
              name: wpvolZ
              description: "This is my WordPress assets"
              size: "1000"
              perfPolicy: "Windows File Server"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
---
apiVersion: v1
kind: Route
metadata:
  name: host-route
spec:
  host: tme-lnx1-rhel7-2.lab.nimblestorage.com
  to:
    kind: Service
    name: wordpress
On my OpenShift system, I have an empty project named “wordpress”; I’ll switch to that project and deploy the above manifest:
$ oc project wordpress
Now using project "wordpress" on server "https://tme-lnx1-rhel7-2.lab.nimblestorage.com:8443"..
$ oc create -f wordpress.yaml
secret "mysql" created
deployment "mysql" created
service "mysql" created
deployment "wordpress" created
service "wordpress" created
route "host-route" created
$
In a moment I will have a WordPress instance ready to be set up on the route I declared. Let’s assume I set up this instance, give it to my content management team and let it run for a while. At some point they might want to upgrade to the next version or do a major overhaul, which risks letting visitors see intermediate versions of the website while restructuring and testing. Having a sandbox to do that would be most helpful and non-disruptive.
You would simply switch to another project:
$ oc project clone
Now using project "clone" on server "https://tme-lnx1-rhel7-2.lab.nimblestorage.com:8443".
$
After that you’d simply alter the volume sections for each of the pods:
<snip>
      volumes:
        - name: mysql-persistent-storage
          flexVolume:
            driver: hpe/nimble
            options:
              name: mysqlvolZ-clone
              cloneOf: mysqlvolZ
<snip>
      volumes:
        - name: wordpress-persistent-storage
          flexVolume:
            driver: hpe/nimble
            options:
              name: wpvolZ-clone
              cloneOf: wpvolZ
<snip>
This will simply clone the volumes from the “wordpress” project into the “clone” project. In my example I also remove the route section and connect externally via the randomly assigned port:
$ oc create -f clone.yaml 
secret "mysql" created
deployment "mysql" created
service "mysql" created
deployment "wordpress" created
service "wordpress" created
$ oc get services -l app=wordpress
NAME        CLUSTER-IP       EXTERNAL-IP                     PORT(S)        AGE
wordpress   172.30.136.215   172.29.135.102,172.29.135.102   80:31919/TCP   14s
$
In this example I would connect to port 31919 to access the cloned instance.
This is a comprehensive suite of data services that Nimble Storage and HPE are bringing to the Kubernetes community and Red Hat OpenShift Container Platform users. Please feel free to reach out if this is something you want to get your hands on early; we might invite you to a beta program. We expect to ship the Nimble Storage Flexvolume plugin for Kubernetes and OpenShift over the next few months.

Keeping up on developments in Virtual Desktop Infrastructure (VDI)?   New themes continue to arise, such as App Layering and Desktop as a Service (DaaS), but “How to reduce cost” is a constant topic of conversation when it comes to IT in general, and VDI in particular, whether in the data center or the boardroom.

 

When we talk about VDI costs, a big part of the analysis is the backend infrastructure.  These deployments are often a “clean sheet” exercise, as all layers of the solution – server, networking, storage – need to be upgraded to accommodate the increased requirements around performance, bandwidth and centralized data storage capacity.

Two recently published VDI design documents provided an apples-to-apples comparison of the storage requirements of two comparable enterprise-class All-Flash solutions: one with Pure Storage and one with Nimble Storage. Yet the storage infrastructure costs couldn’t be more different.

 

Both solutions use All-Flash back-end storage infrastructure to deliver the IOPS and low latency required for a VDI deployment.  They also both used the same enterprise-class servers and switches and the same version of Citrix XenDesktop (7.11), were spec’d for NVIDIA graphics support, and were both sized for a 5,000-seat deployment.

 

The first solution used Pure’s All-flash 40TB array with a single external shelf holding another 45TB. The list price for this was $1,700,000. The Nimble/HPE solution used a Nimble AF5000 All Flash Array with one base disk shelf with 40TB raw space, plus an external shelf with 44TB raw space. This provided comparable capacity and at least comparable performance characteristics, but had a list price of just $397,000. 

 

The difference was an amazing 77% lower cost with the Nimble solution!  In terms of cost per desktop, at list price, it was $340 per desktop with Pure, but just $79.40 per desktop with Nimble.
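
Working the per-desktop arithmetic out explicitly:

$1,700,000 list price / 5,000 seats = $340.00 per desktop (Pure)
  $397,000 list price / 5,000 seats =  $79.40 per desktop (Nimble/HPE)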

 

So keep doing your homework on the new developments in the end user computing and mobility space, but no need to look any further for the best deal in VDI storage.  If you’re seeing different results, please share them in the comments below.  For more info on our VDI solutions, or to request a demo or POC, visit the Nimble Desktop Virtualization solution page.

Infrastructure-as-code is a paradigm introduced in the mid-2000s to manage infrastructure in a more scalable fashion than error-prone human interaction with API-driven, cloud-era systems. Puppet and Chef are by far the biggest household names in this space, and new tools are emerging to find new ways to abstract complexity from intricate system architectures. One of the most prominent principles is the declarative nature of how tasks should be carried out. Manifests, recipes and playbooks (depending on what tool you're using) are human-readable stanzas that can be source-code controlled and peer reviewed, just like code.

 

Unlike Ansible and Chef, which have generic HTTP modules to manipulate our REST API, Puppet needs a custom module to declare and manipulate NimbleOS resources. We recently published an Open Source Puppet module for Nimble Storage on GitHub. It's also accessible from Puppet Forge.

 

In the current release the following resources are available to manage:

  • Volumes and Snapshots
  • Volume Collections and Protection Templates
  • Initiators and Initiator Groups
  • Access Control Records

 

Documentation on the resource types is available in the module itself and in the "types" section on Puppet Forge. A few in-depth examples are available in the git repo README.md file.

 

In a Puppet environment using a Hiera data backend, a mount point may be declared as such:

---
agent:
  - nimblestorage::init
  - nimblestorage::chap
  - nimblestorage::initiator_group
  - nimblestorage::initiator
  - nimblestorage::protection_template
  - nimblestorage::volume_collection
  - nimblestorage::volume
  - nimblestorage::acr
  - nimblestorage::fs_mount

iscsiadm:
  config:
    ensure: present
    port: 3260
    target: 192.168.59.64
    user: "%{alias('chap.username')}"
    password: "%{alias('chap.password')}"

chap:
  ensure: present
  username: chapuser
  password: password_25-24
  systemIdentifier: example-chap-account

initiator:
  ensure: present
  groupname: "%{::hostname}"
  label: "%{::hostname}:sw-iscsi"
  ip_address: "*"
  access_protocol: "iscsi"
  description: "This is an example initiator"
  subnets:
    - Management

multipath:
  config: true

volumes:
  example-vol:
    ensure: present
    name: example-vol
    size: 1000m
    description: Example Volume
    perfpolicy: default
    force: true
    online: true
    vol_coll: example-vol-coll

access_control:
  example-vol:
    ensure: present
    volume_name: example-vol
    chap_user: "%{alias('chap.username')}"
    initiator_group : "%{::hostname}"
    apply_to: both

mount_points:
  example-vol:
    ensure: present
    target_vol: example-vol
    mount_point: /mnt/example-vol
    fs: xfs
    label: example-vol

On the host used in the example above, we'll end up with the following when calling the puppet agent:

[vagrant@agent ~]$ sudo puppet agent -t 
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for agent.localdomain
Info: Applying configuration version '1496597926'
Fact's are being picked up
Notice: /Stage[main]/Nimblestorage::Host_init/Host_init[prepare_facts]/ensure: created
Creating New CHAP Account -> chapuser
Notice: /Stage[main]/Nimblestorage::Chap/Nimble_chap[chap-account]/ensure: created
Creating New initiatorgroup agent
Notice: /Stage[main]/Nimblestorage::Initiator_group/Nimble_initiatorgroup[initiator-group]/ensure: created
Creating New Initiator iqn.1994-05.com.redhat:a977584934c
Will assign iqn: 'iqn.1994-05.com.redhat:a977584934c' with label agent:sw-iscsi to group agent
Creating New initiator agent
Notice: /Stage[main]/Nimblestorage::Initiator/Nimble_initiator[initiator]/ensure: created
Creating New Volume example-vol
Notice: /Stage[main]/Nimblestorage::Volume/Nimble_volume[example-vol]/ensure: created
Notice: /Stage[main]/Nimblestorage::Acr/Nimble_acr[example-vol]/ensure: created
Notice: /Stage[main]/Nimblestorage::Iscsiinitiator/Nimblestorage::Iscsi[config]/File[/etc/iscsi/iscsid.conf]/ensure: defined content as '{md5}cbd6728303be5281fac35a02fb2149a2'
Info: /Stage[main]/Nimblestorage::Iscsiinitiator/Nimblestorage::Iscsi[config]/File[/etc/iscsi/iscsid.conf]: Scheduling refresh of Service[iscsid]
Notice: /Stage[main]/Nimblestorage::Iscsi::Service/Service[iscsid]: Triggered 'refresh' from 1 events
iscsiadm: No active sessions.
iscsiadm: No active sessions.
/dev/mapper/mpathc on /mnt/example-vol type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
Notice: /Stage[main]/Nimblestorage::Fs_mount/Nimble_fs_mount[example-vol]/ensure: created
Notice: Applied catalog in 6.15 seconds
[vagrant@agent ~]$ df -h /mnt/example-vol
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/mpathc  997M   33M  965M   4% /mnt/example-vol

Now that we have all the stanzas declared, we can quite easily make changes and re-apply them. Let's increase the size of the volume:

<snip>
volumes:
  example-vol:
    ensure: present
    name: example-vol
    size: 2000m
    description: Example Volume
    perfpolicy: default
    force: true
    online: true
<snip>

And re-apply:

[vagrant@agent ~]$ sudo puppet agent -t 
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for agent.localdomain
Info: Applying configuration version '1496598141'
Fact's are being picked up
Notice: /Stage[main]/Nimblestorage::Host_init/Host_init[prepare_facts]/ensure: created
Updating existing initiatorgroup agent . Values to change {"target_subnets"=>[{"id"=>"1a23afebda5c91693c000000000000000000000006", "label"=>"Management"}]}
Notice: /Stage[main]/Nimblestorage::Initiator_group/Nimble_initiatorgroup[initiator-group]/ensure: created
Updating existing Volume example-vol with id = 0623afebda5c91693c00000000000000000000000e. Values to change {"size"=>2000}
Notice: /Stage[main]/Nimblestorage::Volume/Nimble_volume[example-vol]/ensure: created
Re Discovering and refreshing
/dev/mapper/mpathc on /mnt/example-vol type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/mpathc on /mnt/example-vol type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/mpathc on /mnt/example-vol type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
Notice: /Stage[main]/Nimblestorage::Fs_mount/Nimble_fs_mount[example-vol]/ensure: created
Notice: Applied catalog in 4.89 seconds
[vagrant@agent ~]$ df -h /mnt/example-vol
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/mpathc  2.0G   33M  2.0G   2% /mnt/example-vol

A simple volume increase should be just that: simple, declarative and unambiguous. Since the Hiera data backend is made up of plain YAML files, these become extremely easy to manipulate programmatically from external systems, such as end-user provisioning portals.
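
As a tiny illustration of that last point, here is a minimal sketch (the Hiera data file path is hypothetical) of how an external system could bump the volume size programmatically using nothing but the Ruby standard library:

require 'yaml'

# Hypothetical path to the Hiera data file shown above
path = '/etc/puppetlabs/code/environments/production/data/agent.localdomain.yaml'

data = YAML.load_file(path)
data['volumes']['example-vol']['size'] = '2000m'   # the same change as the manual edit above
File.write(path, data.to_yaml)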

 

Are you using Puppet or any other automation tool (homegrown or mainstream) to manage your Nimble Storage environment? Please let us know!

Nimble OS 4 has now been released as a GAC (General Availability Candidate), and with this new OS come new features for our customers. This blog concentrates on the new integration we provide between Nimble OS 4 and Microsoft Hyper-V and is the latest in the Nimble OS 4 - Detailed Exploration Series.

 

Nimble has for many years provided Windows Host Utilities which give our users Microsoft Volume Shadow Copy Service (VSS) integration for Microsoft SQL Server and Exchange Server. We did already provide integration with Hyper-V in the form of a compatible Nimble VSS provider which could be leveraged by third-party backup applications; however, with the release of Nimble OS 4 we provide a VSS Requestor for Microsoft Hyper-V, allowing our customers to schedule snapshot backups, manage snapshot retention, and manage replication of Hyper-V workloads directly from the array UI.

 

The first step to use this functionality is to ensure you have compatible array firmware (4.x), which you can request from Nimble Support (once GA, it will be available to all Nimble Storage customers via InfoSight). Next you need to install the Nimble Windows Toolkit (NWT) on your Hyper-V hosts; it's this toolkit that includes our VSS requestor and provider for Hyper-V environments.

 

Once you have these in place, configuring Hyper-V backups is very simple. Below is a screenshot of the workflow with Nimble OS 4. Notice that the UI has changed as we've adopted HTML5 in this release, but the workflow to protect volume collections is identical to prior versions. A deeper dive into the new GUI in Nimble OS 4 is available here.

 

[Screenshot: configuring Hyper-V application synchronization for a volume collection in the Nimble OS 4 UI]

 

Now when you protect a volume, you can select the “Application” type from the drop-down, and along with Microsoft SQL and Exchange there is “MS HyperV”. When this option is selected, at the scheduled snapshot backup time we will communicate with the Nimble VSS Requestor on the Hyper-V host(s), which in turn will speak to the Hyper-V and CSV VSS writers to quiesce the virtual machines prior to taking the Nimble snapshot. The virtual machines will require Hyper-V Integration Services installed inside them, as Hyper-V will use this to take a guest snapshot in coordination with the Hyper-V and CSV VSS writers.

 

The “Application Server” field should contain the cluster name resource or standalone host, as an FQDN or IP address, so we know which host(s) we need to communicate with. As normal, you’ll need to configure the schedule for snapshots, the retention for different schedules, and whether you would like to replicate to another Nimble Storage array.

 

A few points worth noting on the integration:

 

  • We support standalone volumes or Cluster Shared Volumes (CSV) – we don’t, however, support traditional clustered disks. I would expect most Hyper-V users to deploy on CSVs. A mix of standalone volumes and CSVs in the same volume collection isn’t allowed, so if you have both, split them out into separate volume collections.
  • We support NTFS or ReFS
  • Minimum Hyper-V OS version supported is Microsoft Windows 2012R2 (with Oct 2014 rollup update). Windows 2016 is of course supported.
  • If Hyper-V fails to take a guest snapshot for a particular virtual machine, via Integration Services, the backup will continue i.e. the Nimble snapshot will still be taken. The Windows event log will contain a report on which VMs were backed up successfully and which failed.
  • Whilst most users will leverage the Nimble UI to configure backups, you can still configure this via the CLI or REST API. For both of these you need to specify the “app_id” as “hyperv” (see the sketch after this list).
  • The snapshot will fail if the VM is in the root of the mount point, so place virtual machines under a folder on the disk.
  • When recovering from a VSS snapshot or importing from a cloned volume the VM will have a backup checkpoint. Apply this checkpoint prior to starting the virtual machine.
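
Regarding the CLI/REST API point above, the REST route looks roughly like the sketch below. Apart from "app_id": "hyperv", treat the endpoint and field names as assumptions to be verified against the REST API reference on InfoSight:

# Hypothetical sketch: create a volume collection with Hyper-V application synchronization
# (assumes you already hold a session token in $token)
curl -k -X POST https://<ARRAY IP>:5392/v1/volume_collections \
  -H "X-Auth-Token: $token" -H "Content-Type: application/json" \
  -d '{"data":{"name":"hyperv-volcoll","app_sync":"vss","app_server":"hyperv-cluster.example.com","app_id":"hyperv"}}'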

 

I hope you found the blog useful; if you have comments or questions, post them below.

[Image: Docker Certified Program]

Photo Credit: Docker Blog

Gartner says 70% of IT organizations planning a private PaaS will deploy a container service instead. 1 IDC predicts that achieving portability and hardware efficiency for traditional applications will be the most popular production use for containers in the next four years. 2 Both agree that DevOps adoption is accelerating for enterprises. 3,4

Whether you think Docker containers are for IT, DevOps, or both, the benefits are impossible to ignore. Containers have long been popular with developers. Yet enterprise IT teams have had many questions about how to best adopt containers for mainstream use. Here are some top questions in the minds of customers and prospects:

  1. What tools can help me secure, manage, and govern my data, applications, and environment? Where do I find them?
  2. How much time and effort will it take to evaluate each of these tools and technologies, and integrate them into a working system?
  3. How much of the support burden will fall on my team if these technologies don’t work as expected?
  4. How do I choose a foundation that not only meets today’s requirements, but will also benefit from lasting future community innovation and support?
  5. When containerizing mainstream applications, how will I store, move, and manage their production data?

 

Find the answers to these questions on the Nimble Storage blog.

Taking some inspiration from Ansible, I've put together a Ruby script that wraps the Bimbly REST client and digests YAML files to generate calls to the Nimble device. You don't need to understand Ruby to use the script, just YAML, and to know that the first two levels of the YAML config are array structures delimited by dashes.

 

How to Use (Ruby >= 2.0):

 

gem install bimbly

 

./nimble_playbook config.yml playbook.yml

 

And the rest is done by the script and library. The config.yml file is where the information about how to connect to a Nimble device is stored, and the playbook.yml file has all the calls and details that will go to the device.

 

Sample yaml Playbook

 

[Image: sample YAML playbook]

 

The script can be found here along with a README and some sample playbook/config files:

 

bimbly/bin at master · zumiez/bimbly · GitHub

 

I have also updated the Bimbly library to make it possible to generate Nimble playbooks. When using the library in irb, you can call 'save' rather than 'call' which will add the REST call to an array instead of sending it off to the device. You can see the REST calls as they are queued up with the method 'review'. Once you decide that you have all of the necessary calls queued up, you can then output them to a yaml file with 'create_playbook(filename)' and the library will write the yaml to disk to be used by the nimble_playbook script at any time.

 

I hope to add functionality to the library in the future that will help with generating templates, and to have some sample templates ready as well.

 

Special thanks to Nimble SE alawrence for suggesting yaml config files for use with the REST client and allowing me to test out my script.

SAP HANA is the foundation for the future of SAP business software. SAP announced in 2015 that it will end mainstream support for its software running on traditional RDBMSs in 2025. While that date seems a long way off, time flies, and SAP customers are already moving quickly to SAP HANA based systems.

 

As a result, hardware vendors have found it necessary to have their products certified for SAP HANA in order to stay relevant and meet the needs of the thousands of SAP customers.

 

The SAP HANA certification process is well documented and supported by SAP. While it can be time-consuming, the benefits to both customers and vendors are important. For customers it provides a level of assurance that the hardware they purchase will meet the stringent performance needs of this new technology. For vendors, it eliminates the frequently asked questions related to “will it work with SAP?”.

 

Appliance vs. Tailored Data Center Integration (TDI)

SAP HANA is delivered using two models: appliance and tailored data center integration (TDI). The appliance model combines servers, network, and storage in a single bundle that is certified using SAP tools and sold as a complete unit. TDI allows separate server, network, and storage vendors to certify their components individually, allowing customers to choose their preferred vendor for each of the pieces to create an SAP HANA solution.


Scale-up vs. Scale-out

When SAP HANA was first introduced, the maximum supported memory in a single server was 512GB. This was related to the processor-to-memory ratio SAP recommended as well as processor technology. This meant that for systems of even 1TB, customers would have to create a scale-out environment, combining multiple servers and distributing the database across them. Scale-out environments required a shared storage architecture which allowed a standby server to attach a failed server’s storage and continue processing.

 

Smaller systems could run in a scale-up environment, with the persistent storage installed directly in the server. Many vendors found it necessary to use spinning disk for the data area and flash storage for log storage to meet the latency for log writes required by SAP.

 

As processors have advanced, with more cores and sockets, the amount of memory supported in a single server has increased with some server vendors offering SAP HANA solutions with up to 32TB of memory.

 

As a result, the need for large scale-out systems to support even modest SAP HANA systems has greatly reduced. Storage systems that need to support 8, 16, or more SAP HANA servers have also become less necessary, except in the case of extremely large customers.


Shared storage for SAP HANA

With memory for scale-up systems increasing, the question becomes “Why do I need shared storage for my SAP HANA systems, if I can fit everything on a single server?”

 

The answer has several components. First, why did we move to shared storage originally? Much of the motivation was related to the resource silos that were created by servers with direct attached storage. This was an inflexible solution that did not allow for resource sharing. If I had too much capacity in one server, I couldn’t easily allocate it to another server that needed the capacity. Shared storage allows customers to grow and scale their environments seamlessly, allocating capacity for individual servers that need it and achieving better capacity utilization.

 

Second, upgrading a server meant a migration that required a significant amount of downtime to migrate data between the environments. This downtime could prove costly in SAP systems which run all the core business processes for customers. Adding capacity to a single server could also require downtime and, at the very least, reconfiguration. Shared storage reduces the downtime significantly. In many cases it is just a matter of turning one server off, attaching the storage to the new server, and turning the new server on.

 

Finally, modern storage systems offer features and functionality that a server with internal storage simply cannot match. Data reduction technology like compression improves space utilization and even performance. Snapshot backups can be created in seconds, allowing multiple recovery points throughout the day to reduce risk. Those snapshots also provide the foundation for zero-copy clones, which allow customers to create systems for development, testing, and troubleshooting in a matter of minutes instead of hours or even days. Modern storage systems also offer data protection capabilities like replication and integration with third-party vendor tools.

 

We at Nimble Storage have invested the time and energy to achieve SAP HANA certification for both our Adaptive Flash and All Flash storage arrays. As part of our certification, we have also created deployment guides which describe the configuration required to implement SAP HANA on Nimble Storage arrays. We have guides for both All Flash and Adaptive Flash arrays available on InfoSight. We also have a sizing questionnaire to help customers determine the requirements for their SAP HANA implementations.

 

We believe our systems offer great performance and functionality for SAP HANA.

Postman and Nimble REST API

Posted by mblumberg, Mar 29, 2017

I've written a few blogs on Nimble Connect in the last few months talking about PowerShell and PowerCLI and how to leverage the Nimble Storage REST API (see details at the end).

I often get questions such as "Where/how do I start?" and "Where should I look for guides?"

 

Julian Cates, one of our marketing engineers, once wrote a great blog about PowerShell, Linux, curl and our API integration; it's a great place to start.

 

In this blog I will review the REST API integration using the Postman application.

 

What is Postman? The official description is "A powerful GUI platform to make your API development faster & easier, from building API requests through testing, documentation and sharing." The description is very tempting, and the free version is a great entry-level tool that can be used to test the API calls.

 

Postman provides a great feature called "collections". A collection is basically a folder that contains pre-configured API calls and can be imported as a complete set; an example can be found on the VMware Code site for vCenter 6.5.

Here is a capture of their integration:

 

[Screenshot: the VMware vCenter 6.5 Postman collection]

 

Nimble Storage does not offer such a collection (yet).

I've created a very limited collection for you to use, and with the help of this guide you should be able to use the REST API on a Nimble array through the Postman integration.

 

The collection:

[Screenshot: the Nimble Storage Postman collection]

The following collection allows you to:

  • Get a token from the Nimble array.
  • List volumes (Name and ID).
  • List all volumes details.
  • List single volume details.
  • Create a new volume.
  • Create a new snapshot of a volume.
  • List all snapshot of a volume.
  • Create a new clone from a snapshot.
  • List ACL.
  • List ACL details.
  • Create new ACL to a volume / clone.

 

To import the collection, drag the collection attached to this blog into Postman:

[Screenshot: importing the collection into Postman]

 

Quick note: I've noticed that with Postman you sometimes need to hit the Send button twice; if you get something like "Could not get any response", try hitting Send again.

 

 

Here are the details you MUST change:

In each of the requests you will see a placeholder in the URI, something like this: https://<ARRAY IP>:5392/v1/tokens. You must change it to either the IP or the FQDN of your array; the end result should look like this, for example:

https://10.66.32.44:5392/v1/tokens

Postman-uri.jpg

 

The Token request requires a user name and a password; these can be changed in the Body tab:

Postman-body.jpg

In all the other requests you must change the $token in the Headers tab to the token received from the first Token call.

Here is the Token result:

Postman-token.jpg
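If you want to sanity-check the same call outside Postman, here is a minimal curl sketch (an illustration only; substitute your own array IP and credentials, and note that -k skips verification of the array's self-signed certificate):

$ curl -k -X POST \
    -H "Content-Type: application/json" \
    -d '{"data":{"username":"admin","password":"admin"}}' \
    https://10.66.32.44:5392/v1/tokens
# The "session_token" value under "data" in the response is the token
# used as $token in all the following requests.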

 

Taking the "session_token" and using as authorization for all other calls will look like this example in the "list all volumes" call with the results:

 

Postman-header.jpg
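For reference, the same listing is easy to reproduce with curl; this sketch assumes the session_token from the previous call has been stored in a shell variable:

$ TOKEN='<session_token from the /v1/tokens response>'
$ curl -k -H "X-Auth-Token: $TOKEN" https://10.66.32.44:5392/v1/volumes
# Names and ids only; append /detail to get the full volume attributes:
$ curl -k -H "X-Auth-Token: $TOKEN" https://10.66.32.44:5392/v1/volumes/detail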

 

Now that we understand how to modify the collection with your own details, here is a workflow you can easily follow to check the integration:

 

  1. Get a token.
  2. List all volumes.
  3. Create a new snapshot of a selected volume.
  4. Create a new clone volume from the snapshot.
  5. List ACLs.
  6. Create new ACLs for the clone.

 

Get token:

Postman-token.jpg

List all volumes (we will use the id from this output):

Make sure you change the token in the header!!

Postman-vol.jpg

 

Create a new snapshot of a selected volume (based on the ID):

Make sure you change the token in the header!!

This is the Body (JSON):

     Notice that the vol_id is that of the volume from the previous capture.

Postman-snapshot.jpg

After hitting Send, we can see the result:

 

Postman-snap-details.jpg
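The snapshot call can also be sketched with curl (the token is still in $TOKEN from the earlier sketch). The snapshot name is just an example, and the /v1/snapshots endpoint and name field are assumptions based on the NimbleOS REST API reference, so compare with the body of the collection's request:

$ curl -k -X POST -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"data":{"name":"postman-demo-snap","vol_id":"<id from the volume listing>"}}' \
    https://10.66.32.44:5392/v1/snapshots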

Create a new clone from this snapshot:

Make sure you change the token in the header!!

The Body (JSON) contains the ID of the snapshot created in the previous step; hitting Send will create a new clone from the snapshot.

Postman-clone.jpg
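A hedged curl sketch of the clone step: it assumes, per the NimbleOS REST API reference, that a clone is created through the volumes endpoint with a clone flag and the snapshot id passed as base_snap_id; verify the exact field names against the collection's request body:

$ curl -k -X POST -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"data":{"name":"postman-demo-clone","clone":true,"base_snap_id":"<id of the snapshot created above>"}}' \
    https://10.66.32.44:5392/v1/volumes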

List ACLs:

Make sure you change the token in the header!!

Postman-acl.jpg

Create a new ACL for the clone:

Make sure you change the token in the header!!

The Body (JSON) contains the ID of the initiator group listed in the previous step; hitting Send will create a new ACL for the clone we've created:

Note: the vol_id is that of the clone we've created, reported back at creation time. The initiator group ID is reported in the previous step.

 

Postman-acl-add.jpg
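And the ACL step as a curl sketch; the /v1/access_control_records endpoint and field names below are assumptions based on the NimbleOS REST API reference, so double-check them against the collection's request body:

$ curl -k -X POST -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"data":{"vol_id":"<id of the clone>","initiator_group_id":"<initiator group id from the ACL listing>"}}' \
    https://10.66.32.44:5392/v1/access_control_records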

 

Here is what the result looks like in the Nimble array UI:

 

postman-volume.jpg

 

If you're wondering about the Windows2012R2cluster ACLs, those are inherited from the snapshot and carried over to the clone; they can be removed as needed.

 

And there you have it, another integration with the Nimble Storage REST API.

 

Feel free to comment with any questions or feedback.

 

Thanks,

@Moshe_Blumberg

 

Links:

Nimble OS 2.3 – REST API

Nimble Storage InfoSight - REST API reference from Infosight.

https://blogs.vmware.com/code/2017/02/02/getting-started-vsphere-automation-sdk-rest/

https://www.getpostman.com/

In recent years Microsoft has gone through a transformation to attract more developers to its ecosystem: native Linux binaries in an Ubuntu sandbox, open sourcing PowerShell, and a free cross-platform source code editor, Visual Studio Code. A year ago the concept of Windows Containers was quite foreign, but here they are, in all their glory, with the full unambiguous Docker experience. Nimble continues to invest in our Microsoft relationship, and we're the first Microsoft and Docker ecosystem partner to provide a Docker Volume plugin for Windows Containers.

 

By example: Minio

Getting into the driver's seat as a developer to understand how to build, ship and run a Windows application inside a Windows Container, I wanted to explore each step. Minio is an AWS S3-compatible object storage server written in Go, and there's already an official Go image built from microsoft/nanoserver and microsoft/windowsservercore. It doesn't have git installed, so I need to add git to be able to pull down the Minio source code and build it.

 

Please find Dockerfile and Docker Compose file attached at the bottom!

 

Dockerfile

FROM golang:nanoserver
MAINTAINER michael.mattsson@nimblestorage.com
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

ENV MINIO_BRANCH release
ENV GIT_VERSION 2.12.0
ENV GIT_RELEASE v${GIT_VERSION}.windows.1
ENV GIT_ZIP MinGit-${GIT_VERSION}-64-bit.zip
ENV GIT_URL https://github.com/git-for-windows/git/releases/download/${GIT_RELEASE}/${GIT_ZIP}

RUN New-Item -Path C:\Minio -Type Directory; \
    Invoke-WebRequest $env:GIT_URL -OutFile git.zip; \
    Expand-Archive git.zip -Destinationpath C:\git; \
    Remove-Item git.zip; \
    $env:PATH = 'C:\Minio;C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH; \
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH; \
    New-Item C:\gopath\src\github.com\minio -Type Directory; \
    cd C:\gopath\src\github.com\minio; \
    git clone -b $env:MINIO_BRANCH https://github.com/minio/minio; \
    cd minio; \
    git submodule update --init --recursive; \
    go build; \
    Copy-Item minio.exe /Minio; \
    cd \; \
    Remove-Item C:\gopath\src -Recurse -Force

EXPOSE 9000
ENTRYPOINT ["minio.exe"]

 

 

So far, not much has changed. All the standard verbs are there, the slashes go the other way, and I can admit I learned some PowerShell in the process. Building and running your application is much simpler with Docker Compose. I simply copied one of the example Docker Compose files I had for running Minio on Linux; with very little modification I could bring up the application and serve it from a Nimble volume.

 

docker-compose.yml

version: '3'
services:
  minio:
    build: .\minio
    ports:
      - "9000:9000"
    volumes:
      - export:C:\Export
    environment:
      MINIO_ACCESS_KEY: nimble
      MINIO_SECRET_KEY: nimblestorage
    command: server C:\\Export\\Buckets

volumes:
  export:
    driver: nimble
    driver_opts:
      sizeInGiB: 1000
      perfPolicy: "Windows File Server"
      description: "Rolling my own AWS S3"

networks:
  default:
    external:
      name: nat

 

We’re now able to bring up the project:

 

docker-compose.exe up

PS C:\Users\mmattsson> docker-compose.exe up

Creating volume "dev_export" with nimble driver

Building minio

Step 1/11 : FROM golang:nanoserver

---> 2c8f2f11bd5a

Step 2/11 : MAINTAINER michael.mattsson@nimblestorage.com

---> Using cache

---> 3fe516998a09

Step 3/11 : SHELL powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';

---> Using cache

---> bf8a51e230ea

Step 4/11 : ENV MINIO_BRANCH release

---> Using cache

---> d20c97a62641

Step 5/11 : ENV GIT_VERSION 2.12.0

---> Using cache

---> c23f8f4b6020

Step 6/11 : ENV GIT_RELEASE v${GIT_VERSION}.windows.1

---> Using cache

---> 099fedfe35be

Step 7/11 : ENV GIT_ZIP MinGit-${GIT_VERSION}-64-bit.zip

---> Using cache

---> cd4074ad15e9

Step 8/11 : ENV GIT_URL https://github.com/git-for-windows/git/releases/download/${GIT_RELEASE}/${GIT_ZIP}

---> Using cache

---> e5411a4e0774

Step 9/11 : RUN New-Item -Path C:\Minio -Type Directory;    Invoke-WebRequest $env:GIT_URL -OutFile git.zip;    Expand-Archive git.zip -Destinationpath C:\git;    Remove-Item git.zip;    $env:PATH = 'C:\Minio;C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH;    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH;    New-Item C:\gopath\src\github.com\minio -Type Directory;    cd C:\gopath\src\github.com\minio;    git clone -b $env:MINIO_BRANCH https://github.com/minio/minio;    cd minio;    git submodule update --init --recursive;    go build;    Copy-Item minio.exe /Minio;    cd \;    Remove-Item C:\gopath\src -Recurse -Force

---> Using cache

---> e95705a70bf1

Step 10/11 : EXPOSE 9000

---> Using cache

---> 4656141a2a56

Step 11/11 : ENTRYPOINT minio.exe

---> Using cache

---> b3e5f4a21ad4

Successfully built b3e5f4a21ad4

Image for service minio was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.

Creating dev_minio_1

Attaching to dev_minio_1

minio_1  | Created minio configuration file successfully at C:\Users\ContainerAdministrator\.minio

minio_1  |

minio_1  | Endpoint:  http://172.31.182.170:9000  http://127.0.0.1:9000

minio_1  | AccessKey: nimble

minio_1  | SecretKey: nimblestorage

minio_1  | Region:    us-east-1

minio_1  | SQS ARNs:  <none>

minio_1  |

minio_1  | Browser Access:

minio_1  |    http://172.31.182.170:9000  http://127.0.0.1:9000

minio_1  |

minio_1  | Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide

minio_1  |    $ mc.exe config host add myminio http://172.31.182.170:9000 nimble nimblestorage

minio_1  |

minio_1  | Object API (Amazon S3 compatible):

minio_1  |    Go:        https://docs.minio.io/docs/golang-client-quickstart-guide

minio_1  |    Java:      https://docs.minio.io/docs/java-client-quickstart-guide

minio_1  |    Python:    https://docs.minio.io/docs/python-client-quickstart-guide

minio_1  |    JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

minio_1  |

minio_1  | Drive Capacity: 1000 GiB Free, 1000 GiB Total

 

This looks simple enough, right? Getting to where Microsoft and Docker are today is no small feat: the full suite of Docker tools is now on Windows for developers to indulge in. Nimble announced the Windows Container plugin at Microsoft Ignite last year, and all parties have worked tirelessly on getting this technology out the door.

 

These are the capabilities that are expected at release:

 

Nimble Storage Windows Container plugin

PS C:\Users\mmattsson> docker volume create -d nimble -o help

Error response from daemon: create 6e9565253704c51483726f03609c47519e3844fe3a5d4424b530450ebe88aa3e:

Nimble Storage Docker Volume Driver: Create Help

Create or Clone a Nimble Storage backed Docker Volume or Import an existing Nimble Volume or Clone of a Snapshot into Docker.

Create options:

  -o sizeInGiB=X                  X is the size of volume specified in GiB

  -o size=X                        X is the size of volume specified in GiB (short form of sizeInGiB)

  -o description=X                X is the text to be added to volume description (optional)

  -o perfPolicy=X                  X is the name of the performance policy (optional)

                                  Performance Policies: Exchange 2003 data store, Exchange 2007 data store, Exchange log, SQL Server, SharePoint, Exchange 2010 data store, SQL Server Logs, SQL Server 2012, Oracle OLTP, Windows File Server, Other Workloads, DockerDefault

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

  -o encryption                    indicates that the volume should be encrypted (optional, dedupe and encryption are mutually exclusive)

  -o thick                        indicates that the volume should be thick provisioned (optional, dedupe and thick are mutually exclusive)

  -o dedupe                        indicates that the volume should be deduplicated (optional, requires perfPolicy option to be set)

Docker Volume Clone options:

  -o cloneOf=X                    X is the name of Docker Volume to create a clone of

Import Nimble Volume options:

  -o importVol=X                  X is the name of the Nimble Volume to import

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

  -o forceImport                  forces the import of the volume.  Note that overwrites application metadata (optional)

Import Nimble Volume as Clone options:

  -o importVolAsClone=X            X is the name of the Nimble Volume to clone

  -o snapshot=X                    X is the name of the snapshot to clone from & is applicable when importVolAsClone option is specified. (optional, latest snapshot of volume is used if not specified)

  -o pool=X                        X is the name of pool in which to place the volume (optional)

  -o folder=X                      X is the name of folder in which to place the volume (optional)

 

Yes, the capabilities of cloning and importing Nimble volumes are all there for the Windows Container DevOps engineer to reap the benefits of.
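As a quick illustration of the create and clone options listed above, here is a sketch of what that looks like from PowerShell; the volume names, size and description are made up, everything else comes straight from the help output:

PS C:\Users\mmattsson> docker volume create -d nimble -o sizeInGiB=100 -o perfPolicy="Windows File Server" -o description="Demo volume" demovol
PS C:\Users\mmattsson> docker volume create -d nimble -o cloneOf=demovol demovol-clone
PS C:\Users\mmattsson> docker volume ls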

Please come see us at DockerCon ’17 in Austin, TX in a few weeks. We’ll be demoing Windows Containers amongst a slew of other cool things Nimble enables you to do in your container environments.

 

Useful links on Windows Containers

Build and run your first Docker Windows Server container: https://blog.docker.com/2016/09/build-your-first-docker-windows-server-container

Docker Captain Stefan Scherer’s blog: https://stefanscherer.github.io/ (endless useful information on Windows Containers)

Windows Container on Windows Server: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server

 

--

Here and there I get an inquiry about using Chef with Nimble arrays. My standard response has been that we have a comprehensive REST API, please use that. It still remains my standard answer, but now I have a blog post to point the inquiries to.

 

Chef has a built-in http_request resource that lends itself to manipulating remote REST APIs. However, it falls short when it comes to retrieving and processing the returned content. Chef has a class you can call for this purpose, Chef::HTTP, which lends itself perfectly to easy retrieval of JSON content from a REST API.

 

Another caveat is that Nimble arrays serve the REST API over SSL connections with self-signed certificates, which can be a bit of a pain to deal with in unfamiliar libraries and programming languages (note that the author is a complete Ruby illiterate).

 

I’ve put together a small cookbook with just enough meat in it to be used with chef-client. The latest stable version of Chef DK (Chef Downloads) is used in the examples below.

 

Sample Cookbook

Assume accounting has asked for a CSV list of volumes from your array for their number-crunching activities. I’m going to create such a list for them to pick up at their convenience. Fire and forget. Here's the zip file.

 

First I need to figure out what attributes are relevant for my cookbook and how I can easily override them if we expand with more Nimble groups. This is my attribute file, cookbooks/nimble/attributes/default.rb:

# Where to reach the API endpoint
default['nimble']['array'] = '192.168.59.64:5392'
default['nimble']['scheme'] = 'https'
default['nimble']['api_version'] = 'v1'

# This is the base dir for Chef DK
default['nimble']['certs'] = '/opt/chefdk/embedded/ssl/certs'

# Credentials
default['nimble']['auth'] = {
  'data': {
    'username': 'admin',
    'password': 'admin'
  }
}

 

With my attribute layout in place, I can now craft my first recipe, cookbooks/nimble/recipes/certs.rb:

# Download certificate
bash 'download_cert' do
  code <<-EOH
    openssl s_client -connect #{node['nimble']['array']} \
      -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM \
       > #{node['nimble']['certs']}/#{node['nimble']['array']}.pem
    cat #{node['nimble']['certs']}/#{node['nimble']['array']}.pem >> #{node['nimble']['certs']}/cacert.pem
    EOH
end

 

This downloads the Nimble array certificate so that subsequent Chef connections to the array will succeed.

 

To download the certificate:

# chef-client --local-mode -r 'nimble::certs'

 

I can now move on to the more interesting pieces. Assume this recipe, cookbooks/nimble/recipes/volumes::list.rb:

# Construct URL from attributes
nimble_array = node['nimble']['scheme'] + '://' + node['nimble']['array'] +
  '/' + node['nimble']['api_version']

# My query
nimble_query = '/volumes/detail?fields=name,size,vol_usage_compressed_bytes'

# Get a token
nimble_token = JSON.parse(Chef::HTTP.new(nimble_array).post('/tokens',
  node['nimble']['auth'].to_json))

# Simply list my volumes
nimble_volumes = JSON.parse(Chef::HTTP.new(nimble_array).get(nimble_query,
  {'X-Auth-Token' => nimble_token['data']['session_token']}))

# Write JSON response to a list with an erb template
template '/tmp/nimble_volumes_list' do
  source 'nimble_volumes_list.erb'
  owner 'root'
  group 'root'
  mode '0644'
  variables :my_volumes => nimble_volumes['data']
end

 

A very easy-to-follow recipe. For clarity, this is what my cookbooks/nimble/templates/nimble_volumes_list.erb looks like:

name,size,used
<% @my_volumes.each do |object| %>
<%= "#{object['name']},#{object['size']},#{object['vol_usage_compressed_bytes']/1024/1024}" %>
<% end %>

 

Now, for the entrée:

$ chef-client --local-mode -r 'nimble::volumes::list'

 

This will create /tmp/nimble_volumes_list; in my case this is what it contains:

name,size,used
dlvm-docker-tme-lnx4-xenial,102400,1406
dlvm-docker-tme-lnx2-xenial,102400,2546
dlvm-docker-tme-lnx3-xenial,102400,3016
dlvm-docker-tme-lnx1-xenial,102400,2451
dlvm-docker-tme-lnx5-xenial,102400,1408
dlvm-docker-tme-lnx9-xenial,102400,1406
dlvm-docker-tme-lnx8-xenial,102400,1415
dlvm-docker-tme-lnx6-xenial,102400,1406
dlvm-docker-tme-lnx7-xenial,102400,1556
dlvm-docker-tme-lnx1-centos,102400,3203
dlvm-docker-tme-lnx2-centos,102400,2859
dlvm-docker-tme-lnx6-centos,102400,2150
dlvm-docker-tme-lnx9-centos,102400,2318
dlvm-docker-tme-lnx8-centos,102400,2383
dlvm-docker-tme-lnx4-centos,102400,2499
dlvm-docker-tme-lnx3-centos,102400,2842
dlvm-docker-tme-lnx5-centos,102400,2669
dlvm-docker-tme-lnx7-centos,102400,2508

 

Now accounting can crunch their numbers all they want!

 

To run this recipe against an array other than my default hardcoded one, a custom JSON attribute file may be used to override the defaults in the recipe. Something like this (same logical structure as the attribute file):

{
  "nimble": {
    "array": "sjc-array875.lab.nimblestorage.com:5392",
    "auth": {
      "data": {
        "username": "admin",
        "password": "admin"
      }
    }
  }
}

 

So, for dessert:

# chef-client --local-mode -r 'nimble::volumes::list' -j nimble.json

 

That will however overwrite the contents of /tmp/nimble_volumes_list.

 

Chef has been around for a very long time and there are countless resources available online to learn from. I’m a Chef apprentice and these examples may not represent best-practice patterns for creating cookbooks, but they demonstrate the principle that we have a REST API and you may manipulate it from anything, including Chef.

 

Dive into the NimbleOS REST API: Nimble Storage InfoSight

Learn more about Chef: Deploy new code faster and more frequently. Automate infrastructure and applications | Chef