ViPR SRM Solution Packs for vCenter and XtremIO

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

In my last two posts I touched on what ViPR SRM can do, and the quick installation.

With the ViPR SRM installation out of the way, it’s time to start adding Solution Packs. Solution Packs are used to connect to various systems, such as VMware vCenter, so ViPR SRM can collect information about virtual machines, ESXi hosts, datastores, and HBAs. Additionally, you connect ViPR SRM to your switches and storage for, quite literally, an end-to-end view of your environment.

  • First, log into http://<vipr srm IP or DNS>:58080/APG and click on Administration (upper right corner)
  • Once you are in the Administration interface, click on Centralized Management in the left navigation menu; a new window or tab will open
  • In the new window, click on Solution Pack Center (back in the upper right corner)


  • In the search box in the upper right corner, type vCenter to filter the results, and click on VMware vCenter
  • When the vCenter box opens, click on the install button.


  • Follow the wizard and review the options; it’s a basic next-next wizard. If you are using PowerPath, for example, check Enable the Host PowerPath alerts. Continue clicking next through the remaining screens and finally click install. ViPR SRM will go through and install the selected components.


  • Click OK. Repeat the above steps for the other packs your environment needs; at the very least, the Storage Compliance pack is useful. Here is the EMC XtremIO Solution Pack, which I will be installing and using for the examples that follow.


  • With the Solution Packs installed, we need to provide each one some connection information. Expand Discovery Center in the left navigation menu, expand Devices Management, and click on VMware vCenter
  • Click on the Add new device… button and fill in the information to connect to vCenter. I suggest using dedicated accounts for external services; for example, here is my app_viprsrm user account, which has admin privileges in vCenter. Click the Test button to confirm the account has access, and then click OK. Repeat for additional vCenters or for any storage in your environment you added a pack for.


Don’t forget to click the Save button!


Depending on your environment, you may also want to add your FC switches. Switch monitoring is done by adding a Solution Pack for your switch and connecting to it via SNMP. While logged in as admin, go to http://<viprsrm IP or DNS>:58080/device-discovery, click Collectors, click New Collector, and click Save. This adds an SNMP collector to the local VM. Once the collector is added, click Devices, then New Device, and fill in the appropriate information.
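Before adding switches as devices, it can save troubleshooting time to confirm SNMP connectivity from the collector VM first. A minimal sanity check from a shell on the collector, assuming SNMPv2c; the community string and switch address below are placeholders for your own values:

```shell
# Walk the system subtree of the switch via SNMPv2c.
# "public" and "switch01.example.com" are placeholders for your environment.
snmpwalk -v2c -c public switch01.example.com system
```

If this returns the switch’s system information, the SNMP collector should be able to poll it with the same settings.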


With all switches added, click the check box next to each, and click the magnifying glass icon under Actions; this will discover the switch.

ViPR SRM will now start collecting data. To expedite the process, click on Scheduled Tasks (left navigation menu), check the box for the “import-properties-default” task, and click the Run Now button. If you return to the User Interface (back on the Administration page, click User Interface) and go to Explore >> Hosts, you should see your vCenter hosts as well as virtual machines.


If you navigate to Explore >> Storage you should also see the storage devices you added.


With the configuration out of the way, I can now start to explore my environment with the various reports available, which I will do in the next post!

Installing ViPR SRM

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

In my last ViPR SRM post, I introduced you to some of its features if you were not already aware of them. In this post, I will look at installing ViPR SRM 6.5.2. I downloaded ViPR SRM from the EMC support site; while I am an EMC employee, I logged in with my personal email account to download the files. Once logged in, search for ViPR SRM and click on the downloads menu; as I mentioned, I will be going with the vApp option versus a binary installation.


Once downloaded, extract the contents of the zip file – you’ll have two OVFs. One is the 4-VM vApp I mentioned in my last post; the other is a 1-VM vApp useful for lab and evaluation purposes. Given that I have limited resources in my home lab, I will be deploying the 1-VM vApp.


An important note here: you will need to deploy the OVF to vCenter, not a stand-alone ESXi host, as some of the OVF properties will not be exposed properly, causing the deployment to fail.

Follow the OVF deployment wizard; when prompted, select the All-In-One configuration:


By default, the VM deploys with 4 vCPUs – adjust according to your lab. I have set mine to 2 vCPUs and 16GB RAM and removed the reservation (performance would obviously not be ideal, but this is for lab purposes only). Once the OVF has been deployed, you should be able to log into http://<viprsrm DNS or IP>:58080/APG. Log in as admin/changeme to access ViPR SRM.
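If you would rather script the deployment than click through the wizard, VMware’s ovftool can deploy the vApp from the command line. This is only a sketch; the OVF file name, deployment option label, VM name, datastore, and inventory path are all placeholders to adjust for your environment:

```shell
# Deploy the 1-VM vApp to vCenter (remember: vCenter, not a standalone host).
# All names and paths below are examples, not the actual file names.
ovftool --acceptAllEulas \
  --name=viprsrm01 \
  --datastore=datastore1 \
  --deploymentOption=ALL_IN_ONE \
  vipr-srm-1vm.ovf \
  'vi://administrator@vcenter.example.com/Datacenter/host/Cluster1'
```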



By default, you are in the “User” interface; if you click on “Administration” in the upper right corner, you will go to the administration screen. Go ahead and click on Administration >> Centralized Management (on the left nav menu) >> License Management (also on the left nav menu). As you can see, you have a 30-day trial license to test out ViPR SRM.


Close the license window/tab. Notice that where the “Administration” menu was, you now see a “User Interface” menu; this will (like the Administration link did) take you to the User interface (where you initially landed when you logged in).

In the next post, I will look at connecting ViPR SRM to vCenter and, in my case, XtremIO.

Security needs automation, but automation does not mean you are secure

A couple of weeks ago, a question was posted on the Ansible LinkedIn group stemming from an Ansible role for securing CentOS. The question: is automation the only way to ensure security? My brief, social-media-shortened response was

Completely agree. If you aren’t automating then you can’t really claim to be secure

This caused some fuss on the post, with most disagreeing with me. I stand by my answer – you cannot be secure if you are not automating – but to expand on it: you are not necessarily secure just because you are automating. Security is not something you simply turn on; said another way, it’s not binary. Security consists of many layers, not the least of which is truly understanding your company’s business, goals, requirements, processes, and people. With that understanding, you can then apply any specific security measures you may need to abide by. For example, if you accept credit cards, then appropriate safeguards need to be taken to ensure data is encrypted and certain elements, such as the validation number, are not stored.

Now, if you are not adhering to those requirements, there is no automation process in the world that can secure you. However, even with the most specific of run books, and with security teams, engineers, and auditors ensuring you have done everything technically possible, you cannot truly say you are secure without a means to automate the installation and configuration as you have defined them.

Another argument in the group discussion was that automation can also lead to widespread vulnerabilities by opening security holes. While this is true, my previous statements still hold – you need to have the proper security processes and details in place before you automate them. Now say, for example, something like Heartbleed comes along again – how long would it take you to patch even 10 systems by hand? What about 100, or 1,000? Much longer than it would take to leverage something to patch the systems automatically.
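To put a rough number on that with Ansible (the tool that started this discussion), patching OpenSSL across an entire inventory is a couple of ad-hoc commands rather than hundreds of SSH sessions. A sketch, assuming a yum-based fleet and a service named httpd that links against OpenSSL – both assumptions, not specifics from the original discussion:

```shell
# Update the openssl package on every host in the inventory...
ansible all -m yum -a "name=openssl state=latest" --become
# ...then restart the service that loads the vulnerable library.
ansible all -m service -a "name=httpd state=restarted" --become
```

Whether the fleet is 10 hosts or 1,000, the operator effort is the same.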

Automation, configuration management, DevOps – none of these things is a panacea. However, security teams should be relying on automation, not manual efforts, to configure and secure systems.

Getting to know ViPR SRM

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

One of the upcoming tools I will be working with is ViPR SRM, a storage management tool that allows for monitoring the end-to-end health of your environment. I know what you’re thinking – “C’mon now Frapp, that sounds awfully marketingy” – and you’re right, it does, BUT let me give you an example of why some of the tools in ViPR SRM interest me.

Have you ever gone over to a friend’s cube to chat and they say the app, it ain’t no good? The reports are slow, the app keeps crashing, and the chicken tastes like wood. Okay, but seriously, how many times has someone walked over and said “my application is slow/down/broken” with no further detail, leaving it up to you to isolate what is going on? It has happened to me often. Worse is when you are the person responsible for storage and someone else responsible for networking does the Jedi hand wave and says the network is fine, it must be storage.


That is where ViPR SRM comes in: it can show you the relationship from virtual machine, through the hypervisor, datastore, and data path, to the storage array hosting the virtual machine. Further, for heterogeneous environments, it supports multiple types of applications, operating systems, hypervisors, and storage arrays. Of course it supports more EMC products, since it is an EMC product, but you don’t necessarily have to run an EMC array to leverage ViPR SRM.

Below are some of the systems supported by ViPR SRM; an updated list is always available from EMC.


While getting ready for the installation, know that you can either deploy a pre-packaged vApp or install the application on 64-bit versions of Red Hat, CentOS, SUSE, or Windows; in these posts I will be deploying the vApp version, which includes 4 virtual machines. The 4 virtual machines each have a unique role, as in a typical multi-tier application – there is a web front end for the UI and reporting, a database backend for storing data, and a collector for, well, collecting data. In large environments with multiple arrays, you may deploy multiple collectors.


In my next few blog posts I’ll review the installation of ViPR SRM, some of the dashboards, and how they might help you in the day-to-day monitoring and troubleshooting of your environment. If you’d like to learn along with me, check out the free ViPR SRM e-Learning on ECN.

Stop the Twitter spam #sumall #fllwrs

There are many tools out there that can help you manage your social media presence; unfortunately, some of these tools resort to the equivalent of spamming your followers with auto-posted tweets. Probably the most egregious of them is SumAll. You know the “How I did on Twitter this week” tweets:


There are others as well – not to pick solely on SumAll – such as the VMware Advocacy tweets, to name one. While they are a nice guideline for you as an individual, they really offer no value to your followers. There are a couple of ways to stop this. The first is to unauthorize the application from your Twitter settings:

  • Log into Twitter
  • Click on your profile picture in the upper right corner and select Settings


  • Click on Apps to see a list of all the applications you have given permission to use your Twitter account
  • Click the Revoke access button next to the app


Now keep in mind, this will block the application from using your Twitter account entirely. As I mentioned before, some of these tools can be handy to you personally. The other option is to disable auto-tweets directly from the application itself. For SumAll, for example, the steps are:

  • Log into SumAll
  • Go to Account Settings
  • Click on Twitter preferences
  • Disable any notifications

Now, you can continue to use those tools while not spamming your followers. If there are other tools you are using, please share the directions in the comments about how to turn off the auto tweets.

Devs Rool, IT Roolz

Boy, social media is proving tough for having some discussions lately – here is the first of a few blog posts to set my position straight. Yesterday a conversation got started on Twitter based on a tweet shared by John Troyer, “Devs Rool, IT Droolz” – in fact, here is another point of view on the conversation from Rynardt Spies. Now, as an “infrastructure” person, you may think my take here is about saving my job, or staying relevant, or some such thing, but it is nothing at all about that – it’s about working together. In fact, those who know me well know that I am trying to push “infrastructure people” toward a more application focus – not necessarily development, but to stop being infrastructure focused and work on being able to deliver applications and value to the business. I’ve never had a CFO walk into my office and say a virtual machine was down or a VLAN was misconfigured, but they know when their application is down.

I was fortunate enough in my first IT job to work for a manager who “got it,” as well as several other thought leaders in the technology management space such as Gary Beach. I came out of that job thinking that all IT shops must run like this – I mean, after all, it only makes sense that IT’s role is to enable technology for the business and the people in it. My job as an infrastructure person is to understand the business needs, translate those into functional technology requirements, and make sure they are working. In some cases it might be working with sales to ensure an SFA/CRM fits the business model, or ensuring that developers have access to the tools and systems to do their jobs, as well as working with them to ensure access and security requirements are met.

Where I got into trouble in both cases recently (and another blog post is to follow) is with “general” statements that some people are taking as “blanket” statements. When I say devs shouldn’t work on infrastructure, I’m not saying they can’t, or don’t have the skills; I’m saying that, as a business, I don’t want them to. Now yes, advances in technology allow some level of orchestration at the network layer, which lets people instantiate networks on the fly – things like NSX or Neutron (I’m still pissed at you, Neutron, by the way) – but those technologies still need a strong hardware foundation to be built upon to function and perform properly in support of the business requirements, not just software requirements. A well-built network allows orchestration at the software layer to enable the needs of the business.

As I have said many times, there is no silver bullet or magical piece of technology that supports all businesses and all workloads. “Generally,” there will need to be several things working together to provide everything a business needs. The same holds true for people: if IT is not talking to the business – Finance, Sales, Executives, Legal, Marketing, and engineering teams (development, QA, security), etc. – then IT droolz. But the same holds true for developers; if devs are not talking to the business, not talking with IT, not talking with security, then those bad devs drool just as much as the bad IT people do.

I have worked at enough software companies to see firsthand what can be accomplished when people work together – it’s not about devs vs. IT, it’s about devs AND IT working together in support of the business. Can developers set up a switch, VLAN, or ACL on a firewall? Sure, but it’s “generally” not their primary skill set, just as my primary skill set is not coding or automation. However, working together, each person can learn something from the other. Our skills, our experience, and our roles provide each of us with a unique view that no other person can have. So when you work together, you may find that the way you interpreted a requirement needs to be adjusted, either because of technology or business; for example, a business security requirement could be met by coding the application a certain way, but there may be other requirements on the business that call for additional layers of security. Again, it’s all about working together. If you are a CIO who isn’t talking to the business and the CTO every day, then shame on you. If you are a CTO who isn’t talking to the business and the CIO every day, then shame on you as well.

Some of the smartest people I have ever had the privilege to work with and for were developers (Chris, Michael, Igor, Sebby, Cayla, Sarah). Those people have helped me grow professionally in ways I am so grateful for – they probably don’t even realize the impact they had on me; still don’t think I’d want them setting up my switches, servers, and domain controllers though :)

VMware vCenter on Windows 2012 Failover Cluster

Some time around the release of vSphere 5.5 (Update 2, maybe?), VMware officially(?) began supporting vCenter on a Windows Failover Cluster. I say officially(?) because there still seems to be very limited documentation and few KBs on how to do this. The VMware vCenter Server Availability Guide documents the available options, such as using HA for vCenter availability, but also how to install vCenter on a Windows Failover Cluster and configure the services appropriately, since the application itself is not otherwise cluster-aware – unlike, for example, installing SQL on a failover cluster.

If you have done a failover cluster on Windows before, know that this process is a bit different, so don’t just dive in as I did. Here is what my environment looks like:

  • SSO has been already deployed and working
  • A management vCenter is running; you will need this or some other means to clone the first virtual machine after installation

So wait – why are you clustering vCenter if there is already a vCenter, you ask? Many reasons, but primarily because availability is less of a concern for our management vCenter. The clustered vCenter is being deployed to support vRealize Automation, so end users will rely on this vCenter to be able to request catalog items. Availability was more of a concern for this purpose than for strictly management.

  • Start with only a single Windows 2012 R2 64-bit virtual machine (not two), as you will later clone this virtual machine to act as the 2nd node
  • I placed the original, and clone on two separate physical hosts
  • Each virtual machine has a single 60GB (C) drive for the OS
  • 2 additional volumes will be added, which, in my case, are XtremIO volumes presented as physical RDMs. This should also work using in-guest iSCSI, for example
  • 1 of the 2 additional volumes is a 60GB (D) drive, which vCenter will be installed on; the other is a quorum disk for the failover cluster
  • Each virtual machine has two NICs – one for production/client access the other for cluster communication
  • The Windows Failover Cluster will have an IP address, as will the vCenter service role you will create; in total this is six IP addresses
  • An AD account was created for the vCenter services, added to the local administrators group and given permission on the SQL server as required

A few notes before I review the process:

  • If you are using RDMs, make sure you read this KB on marking the RDMs as perennially reserved; otherwise storage rescans and boot times will be drastically affected (hosts were taking roughly an hour to boot)
  • The directions have you install the vCenter Web Client, Inventory Service, and vCenter services to the D drive. There is a known bug that causes the web client to not function properly when installed to a non-default location (though it seems more that it doesn’t work when not installed to the C drive). You’ll need this KB article, which walks you through creating a symbolic link; after implementing this, the web client operated as expected. Also, once installation is complete and working on the primary node, you’ll need to fail over to the secondary node to create the symlink (well, at least I did – would it let you create a symlink to a drive that didn’t exist? hmmm)
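For reference, the perennially-reserved setting from that first KB is applied per ESXi host with esxcli; the naa identifier below is a placeholder for your RDM LUN’s ID:

```shell
# Mark the RDM LUN as perennially reserved on this host.
esxcli storage core device setconfig -d naa.60060160xxxxxxxx --perennially-reserved=true

# Confirm the flag is set ("Is Perennially Reserved: true").
esxcli storage core device list -d naa.60060160xxxxxxxx | grep -i perennially
```

Repeat on each host that can see the LUN.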

So, with that out of the way, there are a few things to define before you bring up your first virtual machine – specifically the names and IP addresses of both virtual machines, the Windows cluster, and the vCenter cluster. For example:

  • vCenter Cluster: VC2 – 192.168.1.100
  • Windows Cluster: VC2Win – 192.168.1.99
  • Primary vCenter Node: VC2-1 – 192.168.1.101
  • Secondary vCenter Node: VC2-2 – 192.168.1.102


This is important, and I misinterpreted this step the first time I did this: when you create the first virtual machine, give it the name and IP address of what will ultimately be the vCenter cluster – using the example above, you will name the computer VC2, give it an IP address of 192.168.1.100, and join it to your domain. After the initial install this will be changed.

Create the virtual machine with 2 NICs and the RDMs. Mount one of the RDMs as D and one as whatever letter makes you happy – for my OCD, that would be Q for quorum. Create your system DSN as you normally would, log in as your vCenter service account, and perform a custom installation (not simple), installing each of the components to the D drive. During the installation process, note that the name being added to SSO is the name that will ultimately belong to the vCenter cluster.

Before removing the RDMs, make sure to note their original file name, volume ID, and SCSI controller; they need to be added back in the same order.

These steps are pretty straightforward in the guide: change all of the vCenter services to manual, shut down the virtual machine, remove the RDMs, and make a clone of the virtual machine. One item that was not clear was when to re-add the RDMs; I chose to play it safe and kept them out of the virtual machines for now. Once the clone is complete, power on the cloned virtual machine and rename and re-IP it with the secondary vCenter node hostname and IP address. Power on the original virtual machine, unjoin it from the domain, rename and re-IP it with the primary vCenter node hostname and IP address, and rejoin the domain. Now you can power off the virtual machines and re-add the RDMs to the primary node, then the secondary, as you typically would, making sure the SCSI controller is set to physical sharing.

Power on the virtual machines and install the Failover Clustering feature on each. Once complete, create a new cluster on the primary node – during creation you will be asked for a cluster name and IP address; use the Windows cluster name (VC2Win) from the example above – this is NOT the vCenter cluster name and IP address, which you used on the initial virtual machine during installation. Unlike in the SQL post I wrote, you can add all available cluster storage, as both additional drives are used by the cluster (D – App, Q – Quorum). Now that the cluster has been created, you should have an AD object called VC2Win. Using option #2 from this MSDN blog post, create your vCenter cluster AD object. Failing to do this will cause the cluster role to fail when you attempt to start it.

The rest of the steps for creating the vCenter cluster role are well documented, with one caveat, so rather than copy-paste them here, finish reading the VMware vCenter Server Availability Guide. The caveat: because your vCenter services were set to manual, and thus not started after the reboots, when you create the initial vCenter role service it will come up as failed – which made me go ZOMG, not again! This message is actually just the status of the clustered service, which is stopped, and thus failed from a Windows Failover Cluster perspective – it is okay to proceed with creating the remaining services and setting the dependencies.

At this point, you should be able to start the cluster and have all services come up.

vCenter services on Windows Failover Cluster


Once it is up, access the web client and set permissions as required. For example, as you can see in this screenshot, both vCenters appear in the web client since my account was given the appropriate permissions to both.

vCenter on a Windows Failover Cluster


The last item I have to tackle is automating the backup, copy, and restore of the ADAM database. There are a lot of words in the doc which basically say: xcopy the backup to the correct location. The document talks about stopping/starting services before placing the file, but if the services aren’t running on VC2-2, I should just be able to drop it in. Then, when the services start, there is an up-to-date file which will get loaded.

So, quick and dirty:

del d:\backup\*.* /Q
%windir%\system32\dsdbutil.exe "ac i VMwareVCMSDS" ifm "create full D:\backup" q q
xcopy /osy d:\backup\adamntds.dit "\\VC2-2\C$\ProgramData\VMware\VMware VirtualCenter\VMwareVCMSDS"
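To run this on a schedule, the three lines above can be saved as a batch file and registered with Task Scheduler on the primary node. A sketch – the script path, task name, schedule, and service account below are examples only:

```shell
rem Register the ADAM backup/copy script to run nightly at 2 AM.
rem Path, task name, and account are placeholders for your environment.
schtasks /create /tn "ADAM-Backup" /tr "D:\scripts\adam-backup.cmd" /sc daily /st 02:00 /ru DOMAIN\svc_vcenter /rp *
```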

Thinking of trying Ansible Tower? Save 20% on the Starter Kit Annual Subscription

Thought it was too good not to share: if you are thinking of trying Ansible Tower, you can save 20% on the annual subscription with the Ansible Tower Starter Kit – that’s almost $200 off the $999 annual subscription. If you were thinking of trying it out and paying the $99 monthly term ($1,188/year), that’s almost a $400 savings.

The Ansible Tower Starter Kit is good for up to 100 hosts – VMs or physical boxes. Ansible is agentless; you need only be able to SSH to your Linux hosts or connect via PowerShell remoting for Windows.

You can check out my Ansible posts and the #vBrownBag DevOps sessions to learn more. Use this link to get the code for 20% off during checkout.

Configure Windows 2012 Failover Cluster for SQL 2014

I am working on building out a lab that is going to be used to demonstrate setting up a vCenter environment. We were fortunate enough to be given some time to set it up “right” – meaning set up a SQL cluster for vCenter, and SSO in HA behind a load balancer with valid certificates. I drew the SQL straw, and it’s the first time I have set up SQL clustering. I had to pull from a few different resources, none of which were completely what I was trying to do, but thank you to Derek Seaman’s blog and the MSDN blogs for answering questions when they came up. You can find more information on Windows Failover Clustering on vSphere 5.5 here (nope, not on 6 yet). An overview of our setup:

  • Two Windows 2012 R2 virtual machines on separate hosts; SQL1 and SQL2
  • Each virtual machine has two NICs; one for production/client access, the other for cluster communication.
  • Each virtual machine has 2 drives; 60GB “C” for OS and 20GB “D” for SQL installation
  • 3 XtremIO drives presented via VPLEX
  • AD accounts for SQL and SQL Agent were created in AD
  • IP addresses for each of the SQL virtual machines, the Windows cluster, and the SQL cluster; for this setup that is 4 total.

Windows was installed, patched, and joined to the domain. On each virtual machine, I ensured that Ethernet0 was first in the binding order and used for “production.” NIC1 would be used for cluster communication. Ensure RSS is not enabled on the NICs.
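Checking and disabling RSS can be done from an elevated command prompt on each node; for example:

```shell
rem Show the current global TCP settings, including the RSS state...
netsh int tcp show global
rem ...then disable receive-side scaling.
netsh int tcp set global rss=disabled
```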


Synology with an Awesome DSM Update – 5.2 Beta with Docker Support

Synology has released DSM update 5.2 Beta with, of all things, support for Docker! Yet another reason this makes for an awesome storage solution.

The official press release:

=== Press Release 3.12.15 ===

Synology® Releases DiskStation Manager 5.2 Beta

Achieve more with your own cloud

Bellevue, WA—March 10, 2015—Synology America Corp., today announced the beta release of DiskStation Manager (DSM) 5.2, the latest version of its award-winning NAS operating system. Coming with both exciting new functionalities and under-the-hood changes, DSM 5.2 is built to make managing a private cloud even more effective, secure, and intuitive.

“Several state-of-the-art technologies have emerged in recent years. With DSM 5.2, we want to extend their benefits to all our users,” said Derren Lu, CEO of Synology Inc. “More than perfecting the operating system with better performance and reliability, we also want to bring new ways of thinking to some of its core aspects, such as data management and application serving.”

New features in DSM 5.2 include:

– Streamlined application deployment and higher productivity: The integration of Docker allows developers to ship, and users to run, a vast number of applications on Synology NAS with minimal time and resources needed. SSO Server lets users gain access to all services in the same domain with only one single log-in. File Station now can be connected with public clouds so you can browse and manage files in them without taking up bandwidth.

– Faster, more customizable cloud syncing with encryption: The smart polling technology dramatically increases Cloud Station’s performance while at the same time reducing network traffic and server overhead. The number of file versions is also customizable for each shared folder. When syncing with public clouds, besides compatible with more Amazon S3 and WebDAV storage, you can now set up one-way sync – and even encrypt the data before uploading it to public clouds – immediately turning them to offsite backup destinations without sacrificing data privacy.

– File-based backup restoration and smart version control: The new file browsing feature allows you restore a single file, instead of an entire shared folder, from past backups. You can also automatically rotate old backup data or iSCSI LUN snapshots when designated quota is reached or even benefit from the Smart Recycle policy. The latter helps you minimize storage consumption while still maintaining enough flexibility for point-in-time recovery.

– Portable task manager and refined multimedia experience: With the all-new Synology Web Clipper and the to-do-list-turned-task-manager, Note Station and DS note let you capture the best of the web into your pocket notebook, and then organize action points according to your schedule. Download Station’s new “preview” feature gives you a sneak peek of what’s being downloaded, saving you time and bandwidth from misleading torrents. Cue file support in Audio Station means you can switch between high-quality CD tracks at will. Want to display photos on big-screen TV? Now besides Apple TV, the support of Chromecast and DLNA TVs in DS photo also makes this possible.

– AppArmor and SMB 3 encryption to strengthen security further: AppArmor extends its reach to profiles for packages to effectively restrict malicious software from accessing unauthorized system resources. The support of SMB 3 encryption enables Synology NAS to secure file transfers to Windows 8 and Windows Server 2012, reducing the possibility of tampering and eavesdropping when data moves across a company’s network.

Synology DSM 5.2 Beta Program

Synology will hand out a DS214se to each of the three beta testers who provide the most valuable assistance and feedback. Please visit for more details.


Visit the live demo site at to try out new features.