Comparing Windows Nodes with ScriptRock

Recently ScriptRock announced that their service is completely free for up to 10 nodes, and when I say completely, I mean there are no features you have to pay for, unlike other freemium offerings. ScriptRock allows you to monitor the configuration of your infrastructure, including Windows and Linux servers, AWS instances, and other cloud services. A really interesting new feature, if you have been curious about something like Ansible but not sure how to get started, is the ability to create an Ansible role/playbook from a monitored host. There is quite a bit you can do with ScriptRock, so let’s start by taking a look at how to compare two nodes.

After signing up, you will be asked to add some nodes to monitor. In this case let’s take a look at a Windows node, specifically a Domain Controller (imagine being able to instantly see whether all of the DCs in your organization are configured exactly the same way!). ScriptRock will use either an agent or SSH to monitor it; in the case of Windows I’ll opt for the agent. Click on Windows.


Then download the installer to a location accessible by the Windows node you wish to monitor (or directly to it). In the download window you will also be given an API key; document this, then start the install wizard (a pretty basic install), entering the API key when prompted. When the installer finishes, click Continue on the ScriptRock website and it will commence the scan of the node you installed the agent on. Once complete, click View Scan; you can now see all the information about the node you just scanned.

You can drill down into each of the categories; for example, here are all of the Windows features I have installed


or all of my environment variables, such as the paths that are set. Think of an app that requires a specific path to a certain version of JAVA_HOME; it would be nice to know that the application isn’t functioning because it doesn’t have the correct path defined!


While having a dashboard to see the configuration is nice, what is really cool is being able to compare two nodes. I installed the ScriptRock agent on a second Windows node (deliberately not a Domain Controller) by clicking the Add Node button and running through the same process as before. With only 2 nodes it is easy to see all of them under “All Nodes”; however, there is also a concept of node groups, for example all of your Domain Controllers or SQL Servers. When I drill down into a node group I can “diff” the nodes in that group by clicking on Diff This Group. Within just a few seconds ScriptRock scanned through 713 configuration items and found 245 differences! I compared two different servers on purpose to have a better example, but imagine if you found 245 configuration differences across your application servers or Domain Controllers!


Not bad for 5 minutes’ worth of effort, eh? In addition to monitoring differences between nodes, you can also use this to monitor changes on a specific node by re-scanning it or waiting for the next scheduled scan to run. After making some changes and running a scan, click on the node and select a previous scan to “compare to”


Here you can see I installed IsoBuster and a few files changed. This is just a few minutes into using the tool and already it can provide value from a documentation, security, and auditing perspective. In my next post I will look at using policies and templates – stay tuned!

#vExpert Gift List at #VMworld 2015

Compiling a list of #vExpert gifts at VMworld 2015; who am I missing?

Vendor #Booth (Alphabetical order)

CatalogicSW #517

Cohesity #428 (No link, check your email)

Coho #1713

DataGravity #1928 (Bring your #vExpert shirt)

Datrium #1747 (No link needed)

SimpliVity #1029

SolidFire #929 (No link needed)

Tegile #1037

Tintri #921 (Bring your #vExpert shirt and tweet a selfie from the Tintri booth)

#vBacon #ClosestHospital


Going Agile isn’t about following a text book

Recently the team I am on has been looking to adopt a more agile-like approach to our projects, compared to previous years where it was more or less a waterfall methodology. In a textbook agile environment you would start a sprint with a planning meeting on a Friday. In that planning meeting you would have, or create, a list of tasks in your backlog, ensure they are well defined (to your definition of “done”), and assign them to each of the team members. Each day during the sprint you would then hold a standup to check in on work in progress, identify any pitfalls or problems, and attempt to rectify those. You wrap up the sprint with a review meeting where work is demoed and a retrospective where team members give feedback on the sprint, which leads into another planning meeting for the next sprint. While in many cases this can be followed quite closely, you also need to consider your actual team and business requirements when shaping your approach.

As we experimented with an agile approach we started with a textbook version of a sprint: holding our planning meeting to kick things off, daily stand-ups, and wrapping up with review and retrospective meetings. After our 2nd or 3rd sprint we ditched the retrospective meeting, as we found that with our team dynamic we were “retrospective” on the fly, that is, identifying what worked, what didn’t, and possible new opportunities for improvement. On a larger or more diverse team spread across multiple departments this might be something you choose to keep. Were we wrong to drop it? Some people I have talked to said yes, but I always go back to business requirements. Our job is to be more efficient, not to follow a book that says we need to follow a specific process. In addition to dropping the retrospective, we also switched to 3 stand-ups per week: we dropped one in favor of a call with a larger group that generally proves informational, and made the Friday stand-up “as needed” (quite honestly, by Friday I’m spent). If someone is having a problem or we are getting close to a deadline we might have that Friday stand-up, but if things are progressing as planned we generally skip it. Again, we are not following the textbook examples of agile, but rather adapting the methodology as it applies to our work, team, and requirements.

As the team has grown, we are now more aware of the work we are each doing, able to avoid duplication of effort, more consistent across our work, and have a clear goal as to what we are working on each day, rather than just knowing we have an end date 2 or 3 months down the road. For those that have adopted an agile approach, what do you think? Do you need to have a meeting for the sake of a meeting, force-fitting your workflow into the textbook definition of the process, or do you adapt the methodology to your team?

Installing Meat locally in VMware Workstation

As I mentioned in my last post, Meat comes packaged as an OVA to easily deploy into VMware environments. The quickest path for testing, for me, was to install it in VMware Workstation; simply click File >> Open and select the OVA. By default it connects in bridged mode (i.e., on your network); however, I run my lab behind the Workstation NAT so I made that change. Connect appropriately for your environment. Before you begin, add another virtual drive with at least 10GB of space.

Once the VM is powered on you should be prompted to connect to the IP address it received during boot via DHCP, or you have the option to configure networking via the console; both options are shown here. Again, this will depend on your environment.


When you navigate to the IP and port (8080 in the case of Meat), you’ll be taken to the first step in the setup wizard.


Start the installation, and accept the agreement. If you forgot to add additional space you will now be prompted to do so. Upload your license and the installation will begin.


Once done, you will be prompted to configure your workspace. A workspace can contain a collection of different projects, users, permissions, and groups, and gives you an activity feed. For example, here is a workspace I created.



As I would in GitHub, I can see the URLs to access the repository in my Git client via either HTTPS or SSH.
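
Cloning from Meat works the same way it would with any other Git remote; the hostname and repository path below are just placeholders for whatever URLs your Meat instance displays.

# Clone using the HTTPS URL shown in Meat (placeholder host and path)
git clone https://meat.example.local/my-workspace/my-project.git

# Or clone using the SSH URL
git clone git@meat.example.local:my-workspace/my-project.git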


I can also import from existing popular repositories such as Git, GitHub, or Bitbucket, which is useful for testing Meat out since you can bring in projects that already exist.


For example, here is my Ansible-Test-Playbooks repository from GitHub, imported into Meat. All I had to do was authorize Meat and select the repository I wanted to import. It also supports readme files, as you would be accustomed to in GitHub.


Not only did it import the repository, but a history of my commits as well!


As I mentioned in my previous post, I can edit files directly in Meat, and as I had hoped it prompts me to commit those changes rather than just bypassing the typical commit/push process those who use GitHub are familiar with. To boot, Meat automatically made notes of my changes, versus having to enter them in vim and save them before doing a git push.
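
For comparison, this is roughly the command-line flow the web editor stands in for; the file name and commit message here are just illustrative.

vim site.yml                          # edit the file locally
git add site.yml                      # stage the change
git commit -m "Update site.yml"       # record the change with a message
git push origin master                # push it back up to the repository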


You may also notice the reply box; this is a social aspect of Meat I hadn’t mentioned. You can see a list of comments, adds, deletes, and changes in the activity stream as well. Here is my commit done via the Meat UI, with a comment in the stream.


That’s it for today; next I will be looking into Releases, which is the ability to deploy code to servers directly from Meat.

Meet Meat – The free Git collaboration tool

Man, I thought I came up with that headline before anyone else; turns out Meat is already using it. What is Meat, you ask? In their own words: “Meat is a free collaboration platform with built-in deployments.”

Git at its core is a version control system, and since Meat is based on Git it will do that. If you want to learn more about version control with Git, check out these great tutorials from Git-Tower. But why use Meat when GitHub is already free for certain types of teams/repositories? One of the cool things about Meat is that you can run it within your own data center, so organizations with strict security requirements, or those that just aren’t quite ready to take the plunge, can benefit from Git version control privately…oh, and did I mention it is (currently) free?

Okay, but then why not just run Git if I don’t want to use GitHub? Another great question. The team over at Meat has built some very nice tools into their solution. While some of these tools may not be trendy CLI things, they do have a use case for junior developers or other collaborators who may not have a comfort level with CLI tools.

First up, they have built a web-based editor so you can browse and view files (I need to see how it treats these edits in the lab; I would hope it’s not meant to bypass the very thing the tool is for, version control).


Beyond just a code repository, there is also a release management tool that allows you to deploy your code directly to your servers, and it even appears to support automated deployments. You could potentially replace two separate tools, such as Git/GitHub and Jenkins, with Meat.



You can also stage actions to happen in a specific order, for example uploading files to S3 or sending notifications. I have a question out on Twitter to find out if these actions are idempotent; that is to say, will they always run, or only if they need to run? For example, in Ansible if I say to create a file, but the file has already been created, it won’t redo work that has already been done. This is in contrast to scripts which, without proper checking and handling, will just do the thing they are asked to do even if it’s not needed.
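
As a quick illustration of what I mean by idempotent, here is a minimal Ansible task (the path is just an example): the first run creates the directory and reports “changed”, while every run after that sees it already exists and reports “ok” without doing any work.

- name: Ensure the application config directory exists
  file: path=/opt/myapp/conf state=directory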

There are many more features available; my next step will be to install the local version into my lab. Meat is delivered as an OVA, so folks familiar with VMware shouldn’t have a problem installing it in their labs. If you are interested in trying it out, head over and sign up! Hopefully more to come here soon.

Installing the EMC Virtual VNX (vVNX) Community Edition

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

The virtual VNX was announced at EMC World in May of 2015, and is a free-to-use, software-only version of the VNXe. More details on the vVNX can be found at the ECN.

A few requirements for the vVNX: you’ll need to provide the virtual machine with 2 vCPU and 12GB of RAM (I’ll test that as well), and you can present up to 4TB of storage. This is a great lab solution, as well as a way to test upgrades or changes to your production VNX arrays (I know I never had budget to buy additional arrays for testing). Before you get started, download the vVNX bits, which are provided as an OVA.

1.  Once the OVA has been downloaded, log into vCenter and start the Deploy OVF Template wizard. In the image below, you can see I am deploying to an existing vApp but you could do this to an individual ESXi host or cluster as well.


2.  Once you select the OVA package, you’ll be prompted to accept the extra configuration options; click the check box, click Next, and accept the EULA.


3.  In section 2 of the wizard, you will name your virtual VNX, select the appropriate resources such as storage, and configure networks.

4.  The network setup (2c) includes 3 interfaces: 1 for management and 2 for data. Select the appropriate port groups and click Next. As you can see below, I selected two different port groups for this.

5.  In step 2d you will set the system name, as well as expand the IPv4 section to provide the management IP information as pictured below; use values that are valid in your environment. If using DHCP, leave these values blank.


6.  Click Next, check the box to power on after deployment, and click Finish. After the vVNX deploys, it will boot and start its initial configuration, which can take some time. You can watch the setup process via the console to know when it has completed; for me, I started the OVA deployment at 1:25p and it was completed within about 20 minutes (I also wasn’t paying very close attention).

7.  With the Virtual VNX powered on, add more virtual disks to the VM; this would mimic having physical drives in your array. The number of drives you add will depend on your use case. I’ve added 4x 250GB drives. Do not remove the 22, 30, or 32GB drives that were already created from the VM deployment.

8.  Navigate to the IP address you set during installation and log in as admin with a default password of Password123# to start the Unisphere Configuration Wizard (note it may take several minutes after the initial setup for the login page to be available).


9.  The first 2 steps are pretty straightforward; accept the EULA and enter a new password (keep the service password box checked).

10.  When you get to the license screen, you will need to switch gears for a minute, navigate to the EMC licensing page, and enter the System UUID shown in the configuration wizard.


11. Download the license key file that is generated, and upload it in Unisphere using the Install License File button

12.  You should see a message that the License file installed successfully


13.  Next, select the appropriate DNS and NTP configuration for your network

14.  Create your initial storage pool; you can create others later as well. I have opted to create one called CloudStorage, added the 4x 250GB drives I created previously, and assigned a tier.

15.  Next, configure your iSCSI interfaces, for example ETH0


16.  The remaining steps are optional. If you would like to use NFS, you can do so in step 9 and replication in step 10. Click Finish to complete the wizard.


At this point, if you have not done so already, you will need to configure a VMkernel port on your ESXi hosts on the same network as the storage interface(s). Once that is done, you should be able to go to Storage >> LUN and create a new LUN. For example, here you can see the ESXi host I added to the vVNX while creating a LUN.
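
If you prefer the command line to the vSphere client for that VMkernel port, something along these lines works against a standard vSwitch; the port group name and addressing are just examples from my lab network.

# Create a VMkernel interface on an existing iSCSI port group (example names and addresses)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static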


In my next post, I will be seeing if I can add the vVNX to a ViPR Virtual Array…stay tuned.

**Update: It appears the vVNX ships with a version of SMI-S that is not currently supported by ViPR. While it will initially discover the vVNX, it fails on subsequent discoveries for both file and block.**


Quick Ansible playbook for installing Sysdig

I’ve been thinking about Sysdig and how it can be used for troubleshooting. One thought I had was to capture events during an Ansible playbook run in case there were any problems. Now I’m not sure how practical that is just yet, but the first task was getting Sysdig installed. Of course, that meant writing an Ansible playbook to do so (it probably should have been a role, but baby steps).

You can find the sysdig.yml file for Ubuntu/Debian in my test playbooks repository on GitHub.

The playbook is based on Sysdig’s install directions and tested on Ubuntu 14.04. As always, I am still learning here, so feel free to update it as you see fit and take it all with a grain of salt.

- hosts: parent
  remote_user: [ENTERUSER]
  sudo: yes
  vars:
    package: sysdig
    sysdig_key: DRAIOS-GPG-KEY.public
    # Key and repo locations per the Sysdig (Draios) apt install directions at the time;
    # adjust these if the download location has changed.
    sysdig_key_url: https://s3.amazonaws.com/download.draios.com
    sysdig_repo: http://download.draios.com/stable/deb stable-$(ARCH)/
    dl_dir: /downloads
    linux_headers: linux-headers-{{ ansible_kernel }}

  tasks:
    - name: Validating download directory
      file: path={{ dl_dir }} state=directory

    - name: Download Sysdig public key
      get_url: url={{ sysdig_key_url }}/{{ sysdig_key }} dest={{ dl_dir }} validate_certs=no

    - name: Installing Sysdig public key
      apt_key: file={{ dl_dir }}/{{ sysdig_key }} state=present

    - name: Adding Sysdig apt repository
      apt_repository: repo='deb {{ sysdig_repo }}' state=present

    - name: Update apt repositories
      apt: update_cache=yes

    - name: Install Linux Headers
      apt: name={{ linux_headers }} state=present

    - name: Install Sysdig
      apt: name={{ package }} state=present
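
Assuming you have an inventory that defines the parent group, running it is the usual ansible-playbook invocation; the inventory file name here is just an example.

ansible-playbook -i hosts sysdig.yml --ask-sudo-pass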

Modules used in this playbook:


Using ansible-galaxy init to create roles

Ansible held a free online 2-hour introduction session, and while I’m not an expert I do feel I have a good handle on some of the items such as inventory files and playbook formats. However, there is always something to learn! One thing I took away from today’s training was the ansible-galaxy command.

This command can save a lot of manual effort up front when creating roles. It will create the basic folder structure and files necessary for an Ansible role, something up until now I’ve been doing by hand. Using it is simple: just type the command followed by the name of the role you are creating. For example, if you were creating a role for PostgreSQL you would simply type:

ansible-galaxy init postgres

And this would create folders such as handlers and tasks, each with a main.yml file where appropriate, as shown below. It was new to me, so thought I’d share!
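
For reference, here is the sort of skeleton ansible-galaxy init laid out for the postgres role; the exact set of directories and files can vary a bit between Ansible versions.

postgres/
    defaults/main.yml
    files/
    handlers/main.yml
    meta/main.yml
    tasks/main.yml
    templates/
    vars/main.yml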

vRealize Application Services Home Lab Upgrade

As I did in the previous post with vRealize Automation, it is now time to upgrade vRealize Application Services; again, based on KB2109760, this would be the second item to upgrade before upgrading vCenter with embedded SSO. Not that it is horribly difficult, but there is no management interface as we had with the vRealize Automation appliance, so we will have to download the files, copy them to the appliance, and start the upgrade.

Before you begin, ensure you can log into the console of the Application Services virtual appliance as root and SSH in as darwin_user. If you are unable to SSH as darwin_user, follow the directions here to enable the darwin_user account. Now, download the VMware vRealize Automation Application Services 6.2.0 upgrade installer. Once the file has been downloaded, copy the .tgz file to the appliance; for example, if you are using Windows you might use WinSCP to copy the file. Once the file is on the system, SSH to the appliance and navigate to the directory you placed the file in. For example, here you can see the tgz file in the 62-upg folder I created.


Next, untar the file by running tar xvfz against the ApplicationServices tgz file (use the appropriate build number based on your download). Once all the files have been extracted, you should have an install script ready to run (no need to chmod it to be executable). Run the install script as root with sudo (or with sudo -su and then running the script directly, as the VMware docs state).
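
Roughly, the whole sequence looks like the following; the file and script names are placeholders, since the exact names depend on the build you downloaded.

scp ApplicationServices-<build>.tgz darwin_user@<appliance>:/62-upg/   # copy the upgrade bundle to the appliance
cd /62-upg                                                             # on the appliance, move to the upload folder
tar xvfz ApplicationServices-<build>.tgz                               # extract the installer
sudo ./<install-script>                                                # run the install as root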


Type Y to start the upgrade and the rest is scripted for you. Once the installer finishes, restart the vRealize Automation appliance and the Application Services appliance. When the appliance reboots, you should be able to log in at https://{appsservicesURL}:8443/darwin/org/{vratenant} – for example, https://vxprt-apps01.vxprt.local:8443/darwin/org/vsphere.local


You are now on Application Services 6.2 (as seen in the lower right corner in the above screenshot).

vRealize Automation Home Lab Upgrade

With new versions of vRealize Automation and vSphere dropping, and seemingly being stable, it is time to upgrade the home lab. Since this is a home lab, and somewhat basic, there are just a few steps from KB2109760 that need to be followed:

  1. Upgrade vRA (Appliance >> IaaS)
  2. Upgrade Application Services
  3. Upgrade vCenter
  4. Upgrade ESXi

In this post, I will cover the first step in the process, upgrading vRealize Automation to the latest 6.2 release. First, I shut down services on my IaaS server. Now log into the VMware vCAC Appliance management interface on port 5480 – in my case https://vxprt-vcac01.vxprt.local:5480 – and click on the Update tab. Now, click on Check Updates. As you can see here, I have an update available.


Now, as you might expect, click on Install Updates >> OK. The upgrade process will begin.


After a few minutes, you should be presented with a message that a reboot is required.


Click on the System tab, click the Reboot button, and click the Reboot button again; the system will reboot. Once the reboot completes, you should be able to log in and verify the version by clicking on the system tab. Notice anything different?


The updated product name, vRealize Automation, is now displayed instead of vCAC Appliance, along with the new version number. Once all the services have started, you should also be able to log into the vRealize Automation console and see the tenant information from the previous configuration.


The next step is to upgrade the IaaS components. Again, this should be straightforward in a lab because all of the components are on a single server and not distributed. Log into the IaaS server as the service account used to run the IaaS components; if you followed along in my vDM 30-in-30 challenge you would have named it something along the lines of svc_vra_iaas. Open a web browser and grab the vRA 6.2 PreReq script Brian Graf has built over on GitHub. Save it, open a PowerShell console as administrator, and run the script.


Follow the prompts in the prereq script; typically I have selected option 2 – I have internet access and want to download and install it automatically.



Select option 2 two more times. When prompted, provide the service account for the IaaS components, in my case vxprt\svc_vra_iaas, and the script should complete.


Now, navigate to the vRA appliance page. Click on the vRealize Automation IaaS installation page link, then download and extract the zip file containing the database upgrade scripts. From a command prompt, run the following command:

dbupgrade.exe -S {servername\instancename} -d {dbname} -E

On my server I am using the default SQL Express instance, so the instance name is not needed, and my DB name is vCAC so my command looks like this:

dbupgrade -S localhost -d vCAC -E


If you are receiving any errors, make sure that Named Pipes is enabled.


Now that the DB is upgraded, download the IaaS Installer file (do not rename the file) and run it. The upgrade is of the next, next, next variety.

  1. Click Next
  2. Accept the terms and click Next
  3. Enter the root password for the vRA appliance, accept the certificate, and click Next
  4. Upgrade should be the only option available; click Next
  5. Enter the service account password, the server name, and the database name, and click Next
  6. Click the Upgrade button

If the computer gods are on your side, the installation should complete.


Click Next and Finish. If you flip back over to your vRA console, you should see all of the available tabs based on the user’s permissions – in this case, my iaasadmin user.


Up next is upgrading Application Services.