Installing Meat locally in VMware Workstation

As I mentioned in my last post, Meat comes packaged as an OVA for easy deployment into VMware environments. The quickest path for testing was VMware Workstation: simply click File >> Open and select the OVA. By default it connects in bridged mode (i.e., directly on your network), but I run my lab behind the Workstation NAT, so I made that change; connect however is appropriate for your environment. Before you begin, add another virtual drive with at least 10GB of space.

Once the VM is powered on, you should be prompted to connect to the IP address it received via DHCP during boot, or you have the option to configure networking via the console; both options are shown here. Again, this will depend on your environment.


When you navigate to the IP address and port (8080 in Meat's case), you'll be taken to the first step of the setup wizard.


Start the installation and accept the agreement. If you forgot to add the additional disk earlier, you will be prompted to do so now. Upload your license and the installation will begin.


Once done, you will be prompted to configure your workspace. A workspace holds a collection of different projects along with users, permissions, groups, and activity feeds. For example, here is a workspace I created.



As I would in GitHub, I can see the URLs used to access the repository from my Git client via either HTTPS or SSH.


I can also import existing repositories from plain Git remotes, GitHub, or Bitbucket, which is useful for testing Meat out since you can bring in projects that already exist.


For example, here is my Ansible-Test-Playbooks repository from GitHub, imported into Meat. All I had to do was authorize Meat and select the repository I wanted to import. It also renders readme files, as you would be accustomed to on GitHub.


Not only did it import the repository, but a history of my commits as well!


As I mentioned in my previous post, I can edit files directly in Meat, and as I had hoped, it prompts me to commit those changes rather than bypassing the typical commit/push process GitHub users are familiar with. To boot, Meat automatically made note of my changes, versus my having to enter them in vim and save before doing a git push.


You may also notice the reply box; this is a social aspect of Meat I hadn't mentioned. You can see a list of comments, adds, deletes, and changes in the activity stream as well. Here is my commit, done via the Meat UI, with a comment in the stream.


That's it for today; next I will be looking into Releases, which is the ability to deploy code to servers directly from Meat.

Meet Meat – The free Git collaboration tool

Man, I thought I came up with that headline before anyone else; turns out Meat is already using it. What is Meat, you ask? In their own words: "Meat is a free collaboration platform with built-in deployments"

Git at its core is a version control system, and since Meat is based on Git, it does that too. If you want to learn more about version control with Git, check out the great tutorials from Git-Tower. But why use Meat when GitHub is already free for certain types of teams and repositories? One of the cool things about Meat is that you can run it within your own data center, so organizations with strict security requirements, or that just aren't quite ready to take the plunge, can benefit from Git version control privately... oh, and did I mention it is (currently) free?
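Since Meat layers collaboration on top of plain Git, it may help to see the core loop Git itself provides. This is a minimal sketch in a throwaway directory; the file name and commit message are arbitrary:

```shell
# Create a scratch repo and make one commit: the basic version-control loop.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "hello" > README.md
git add README.md
# identity passed inline so this works on an unconfigured machine
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first commit"
git log --oneline   # the history a tool like Meat imports and displays
```

Everything Meat shows in its UI (commits, diffs, history) is ultimately built from this same object history.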

Okay, but then why not just run plain Git if I don't want to use GitHub? Another great question. The team over at Meat has built some very nice tools into their solution. While these tools may not be trendy CLI utilities, they have a real use case for junior developers and other collaborators who may not be comfortable with CLI tools.

First up, they have built a web-based editor so you can browse and view files (I need to see how it treats these edits in the lab; I would hope it's not a means to bypass the very thing the product is meant to provide: version control).


Beyond the code repository, there is also a release management tool that lets you deploy your code directly to your servers, and it even appears to support automated deployments. You could potentially replace two separate tools, such as Git/GitHub plus Jenkins, with Meat.



You can also stage actions to happen in a specific order, for example uploading files to S3 or sending notifications. I have a question out on Twitter to find out whether these actions are idempotent; that is to say, will they always run, or only when they need to run? For example, in Ansible, if I say to create a file and the file has already been created, it won't redo work that has already been done. This is in contrast to scripts which, without proper checking and handling, will just do what they are asked even if it's not needed.
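To make that distinction concrete, here is a small shell sketch (the file path is illustrative): the idempotent function checks state before acting and reports "changed" or "ok" the way Ansible does, instead of blindly redoing the work:

```shell
#!/bin/sh
target=$(mktemp -u)   # a path that does not exist yet

# Idempotent action: create the file only if it is missing.
ensure_file() {
    if [ -f "$target" ]; then
        echo "ok"             # desired state already present; no work done
    else
        echo "configured" > "$target"
        echo "changed"        # work was actually performed
    fi
}

ensure_file   # first run prints "changed"
ensure_file   # second run prints "ok" and touches nothing
```

A naive script would just run `echo "configured" > "$target"` every time, which is exactly the behavior the idempotency question is about.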

There are many more features available; my next step will be to install the local version into my lab. Meat is delivered as an OVA, so folks familiar with VMware shouldn't have a problem installing it in their labs. If you are interested in trying it out, head over to their site and sign up! Hopefully more to come here soon.

Installing the EMC Virtual VNX (vVNX) Community Edition

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

The virtual VNX was announced at EMC World in May 2015, and is a free-to-use, software-only version of the VNXe. More details on the vVNX can be found at the ECN.

A few requirements for the vVNX: you'll need to provide the virtual machine with 2 vCPUs and 12GB of RAM (I'll test that as well), and you can present up to 4TB of storage. This is a great lab solution, as well as a way to test upgrades or changes to your production VNX arrays (I know I never had the budget to buy additional arrays for testing). Before you get started, download the vVNX bits, which are provided as an OVA.

1.  Once the OVA has been downloaded, log into vCenter and start the Deploy OVF Template wizard. In the image below, you can see I am deploying to an existing vApp, but you could do this to an individual ESXi host or cluster as well.


2.  Once you select the OVA package, you'll be prompted to accept the extra configuration options; check the box, click Next, and accept the EULA.


3.  In section 2 of the wizard, you will name your virtual VNX, select the appropriate resources such as storage, and configure networks.

4.  The network setup (2c) includes 3 interfaces: 1 for management and 2 for data. Select the appropriate port groups and click Next. As you can see below, I selected two different port groups for this.

5.  In step 2d you will set the system name, as well as expand the IPv4 section to provide management IP information that is valid in your environment, as pictured below. If using DHCP, leave these values blank.


6. Click Next, check the box to power on after deployment, and click Finish. After the vVNX deploys, it will boot and start its initial configuration, which can take some time. You can watch the setup process via the console to know when it has completed; I started the OVA deployment at 1:25p and it was done within about 20 minutes (I also wasn't paying very close attention).

7.  With the Virtual VNX powered on, add more virtual disks to the VM; this mimics having physical drives in your array. The number of drives you add will depend on your use case; I've added 4x 250GB drives. Do not remove the 22, 30, or 32GB drives that were already created during the VM deployment.

8.  Navigate to the IP address you set during installation and log in as admin with the default password Password123# to start the Unisphere Configuration Wizard (note it may take several minutes after the initial setup for the login page to become available).


9.  The first 2 steps are pretty straightforward: accept the EULA and enter a new password (keep the service password box checked).

10.  When you get to the license screen, you will need to switch gears for a minute and use the System UUID shown in the configuration wizard to generate a license on the licensing site.


11. Download the license key file that is generated, and upload it in Unisphere using the Install License File button

12.  You should see a message that the License file installed successfully


13.  Next, select the appropriate DNS and NTP configuration for your network

14.  Create your initial storage pool; you can create others later as well. I opted to create one called CloudStorage, added the 4x 250GB drives I added previously, and assigned a tier.

15.  Next, configure your iSCSI interfaces, for example ETH0


16.  The remaining steps are optional: if you would like to use NFS, you can configure it in step 9, and replication in step 10. Click Finish to complete the wizard.


At this point, if you have not done so already, you will need to configure a VMkernel port on your ESXi hosts on the same network as the storage interface(s). Once that is done, you should be able to go to Storage >> LUN and create a new LUN. For example, here you can see the ESXi host I added to the vVNX while creating a LUN.


In my next post, I will be seeing if I can add the vVNX to a ViPR Virtual Array…stay tuned.

*Update: It appears the vVNX ships with a version of SMI-S that is not currently supported by ViPR. While ViPR will initially discover the vVNX, it fails on subsequent discoveries for both file and block.*


Quick Ansible playbook for installing Sysdig

I've been thinking about Sysdig and how it can be used for troubleshooting. One thought I had was to capture events during an Ansible playbook run in case there were any problems. Now, I'm not sure how practical that is just yet, but the first task was getting Sysdig installed. Of course, that meant writing an Ansible playbook to do so (it really should probably have been a role, but baby steps).

You can find the sysdig.yml file for Ubuntu/Debian in my test playbooks repository on GitHub.

The playbook is based on Sysdig's installation directions and tested on Ubuntu 14.04. As always, I am still learning here, so feel free to update as you see fit and take it all with a grain of salt.

- hosts: parent
  remote_user: [ENTERUSER]
  sudo: yes
  vars:
    package: sysdig
    # key and repo locations taken from Sysdig's install directions at the time of writing
    sysdig_key_url: https://s3.amazonaws.com/download.draios.com
    sysdig_key: DRAIOS-GPG-KEY.public
    dl_dir: /downloads
    sysdig_repo: 'deb http://download.draios.com/stable/deb stable-$(ARCH)/'
    linux_headers: linux-headers-{{ ansible_kernel }}

  tasks:
    - name: Validating download directory
      file: path={{ dl_dir }} state=directory

    - name: Download Sysdig public key
      get_url: url={{ sysdig_key_url }}/{{ sysdig_key }} dest={{ dl_dir }} validate_certs=no

    - name: Installing Sysdig public key
      apt_key: file={{ dl_dir }}/{{ sysdig_key }} state=present

    - name: Adding Sysdig apt repository
      apt_repository: repo='{{ sysdig_repo }}' state=present

    - name: Update apt repositories
      apt: update_cache=yes

    - name: Install Linux Headers
      apt: name={{ linux_headers }} state=present

    - name: Install Sysdig
      apt: name={{ package }} state=present
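For reference, a minimal inventory matching `hosts: parent` might look like the following; the host name and address are placeholders, and the commented line shows how the run could be kicked off:

```shell
# Write an example inventory for the "parent" group (values are hypothetical).
cat > /tmp/inventory.example <<'EOF'
[parent]
ubuntu-test ansible_host=192.168.1.50
EOF

# The playbook could then be run with something like:
# ansible-playbook -i /tmp/inventory.example sysdig.yml
cat /tmp/inventory.example
```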

Modules used in this playbook: file, get_url, apt_key, apt_repository, and apt.


Using ansible-galaxy init to create roles

Ansible held a free online two-hour introduction session, and while I'm not an expert, I do feel I have a good handle on some of the items, such as inventory files and playbook formats. However, there is always something to learn! One thing I took away from today's training was the ansible-galaxy command.

This command can save a lot of manual effort up front when creating roles. It creates the basic folder structure and files necessary for an Ansible role, something up until now I'd been doing by hand. Using it is simple: just type the command followed by the name of the role you are creating. For example, if you were creating a role for PostgreSQL you would simply type:

ansible-galaxy init postgres

This creates folders such as handlers and tasks, with a main.yml file where appropriate. It was new to me, so I thought I'd share!
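If you want a picture of that skeleton without running Ansible, it can be approximated by hand. The layout below matches the conventional role structure, but the exact files vary by Ansible version, so treat it as a sketch:

```shell
#!/bin/sh
# Approximate the directory skeleton ansible-galaxy init creates.
role=/tmp/postgres-skeleton
for d in defaults handlers meta tasks vars; do
    mkdir -p "$role/$d"
    : > "$role/$d/main.yml"     # each of these directories gets a main.yml
done
mkdir -p "$role/files" "$role/templates"   # these start empty
ls "$role"
```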

vRealize Application Services Home Lab Upgrade

As I did in the previous post with vRealize Automation, it is now time to upgrade vRealize Application Services; again, based on KB2109760, this is the second item to upgrade before upgrading vCenter with embedded SSO. It is not horribly difficult, but there is no management interface as there was with the vRealize Automation appliance, so we will have to download the files, copy them to the appliance, and start the upgrade.

Before you begin, ensure you can log into the console of the Application Services virtual appliance as root and SSH in as darwin_user. If you are unable to SSH as darwin_user, follow the directions here to enable the darwin_user account. Now, download the VMware vRealize Automation Application Services 6.2.0 upgrade installer. Once the file has been downloaded, copy the .tgz file to the appliance; for example, if you are using Windows you might use WinSCP to copy the file. Once the file is on the system, SSH to the appliance and navigate to the directory you placed the file in. For example, here you can see the tgz file in the 62-upg folder I created.


Next, untar the file by running tar xvfz ApplicationServices- (or the appropriate build number based on your download). Once all the files have been extracted, you should have an installer file ready to run (no need to chmod it to be executable). Run the install as root with sudo (or sudo -su, then run the installer, as the VMware docs state).


Type Y to start the upgrade; the rest is scripted for you. Once the installer finishes, restart the vRealize Automation appliance and the Application Services appliance. When the appliance reboots, you should be able to log in at https://{appsservicesURL}:8443/darwin/org/{vratenant}, for example https://vxprt-apps01.vxprt.local:8443/darwin/org/vsphere.local


You are now on Application Services 6.2 (as seen in the lower right corner in the above screenshot).

vRealize Automation Home Lab Upgrade

With new versions of vRealize Automation and vSphere dropping, and seemingly being stable, it is time to upgrade the home lab. Since this is a home lab, and somewhat basic, there are just a few steps from KB2109760 that need to be followed:

  1. Upgrade vRA (Appliance >> IaaS)
  2. Upgrade Application Services
  3. Upgrade vCenter
  4. Upgrade ESXi

In this post, I will cover the first step in the process: upgrading vRealize Automation to 6.2.latest. First, I shut down services on my IaaS server. Next, log into the VMware vCAC Appliance management interface on port 5480 (in my case, https://vxprt-vcac01.vxprt.local:5480) and click on the Update tab. Now click Check Updates; as you can see here, an update is available.


Now, as you might expect, click on Install Updates >> OK. The upgrade process will begin.


After a few minutes, you should be presented with a message that a reboot is required.


Click on the System tab, click the Reboot button, and click Reboot again to confirm; the system will reboot. Once the reboot completes, you should be able to log in and verify the version on the System tab. Notice anything different?


The updated product name, vRealize Automation, is now displayed instead of vCAC Appliance, and the version number reflects the upgrade. Once all the services have started, you should also be able to log into the vRealize Automation console and see the tenant information from the previous configuration.


The next step is to upgrade the IaaS components. Again, this should be straightforward in a lab because all of the components are on a single server, not distributed. Log into the IaaS server as the service account used to run the IaaS components; if you followed along in my vDM 30-in-30 challenge, you would have named it something along the lines of svc_vra_iaas. Open a web browser and grab the vRA 6.2 PreReq script Brian Graf has built over on GitHub. Save it, open a PowerShell console as administrator, and run the script.


Follow the prompts in the prereq script; typically I have selected option 2 – I have internet access and want to download and install it automatically.



Select option 2 two more times. When prompted, provide the service account for the IaaS components, in my case vxprt\svc_vra_iaas, and the script should complete.


Now, navigate to the vRA appliance page. Click on the vRealize Automation IaaS installation page link, then download and extract the zip file containing the database upgrade scripts. From a command prompt, run the following command:

dbupgrade.exe -S {servername\instancename} -d {dbname} -E

On my server I am using the default SQL Express instance, so the instance name is not needed, and my DB name is vCAC so my command looks like this:

dbupgrade -S localhost -d vCAC -E


If you are receiving any errors, make sure that Named Pipes is enabled.


Now that the DB is upgraded, download the IaaS Installer file (do not rename it) and run it. The upgrade is of the next, next, next variety.

  1. Click Next
  2. Accept the terms and click Next
  3. Enter the root password for the vRA appliance, accept the certificate, and click Next
  4. Upgrade should be the only option available; click Next
  5. Enter the service account password, the server name, and the database name, then click Next
  6. Click the Upgrade button

If the computer gods are on your side, the installation should complete.


Click Next and Finish. If you flip back over to your vRA console, you should see all of the available tabs based on the user's permissions – in this case, my iaasadmin user.


Up next is upgrading Application Services.

Home Lab – $1250 8-Core / 32GB / 750GB Flash / 2TB HDD 2015 Edition

It was a bit over a year ago that I wrote about my 8-core home lab. I was asked whether there were any updates to the build, and I was curious to see how it stood up a year later. Happily for me, and anyone who has invested in this build, the same basic platform is still a solid option for your home lab. I have made a few tweaks below based on some new hardware being available. As I did last year, the focus was on keeping cost down while having enough power to run a fully nested home lab.

With 32GB I have been able to run Windows 8.1 and VMware Workstation with 3x nested ESXi 5.5 hosts, each with 8GB of RAM. One of those hosts runs the vCAC/vRA appliance, one runs the Application Services appliance, and the third is used when provisioning virtual machines. In addition to the 3x nested hosts, I run a 5.5 VCSA at 4GB RAM, a Windows 2012 R2 DC, a Windows 2012 R2 vCAC/vRA IaaS server with SQL Express on the same virtual machine, and CentOS 5.5 running Ansible in Workstation. With everything powered on I run at about 85% memory utilization and only push the CPUs during provisioning processes.

The hardware…

CPU:  AMD FX8320 – This is the exact same processor as last year. It is an 8-core AMD processor that fully supports nested ESXi and 64-bit virtual machines running on the nested ESXi hosts.

Motherboard: ASRock 990FX Extreme6 – This is a new motherboard for 2015, versus the Asus I used last year (though that board is still available). The reason for the change: the ASRock Extreme6 supports up to 64GB of memory, whereas the Asus only supported 32GB. That said, this build still uses 32GB because 16GB memory modules are $190 each, compared to 4x 8GB (32GB total) modules at $210 TOTAL. This board has onboard RAID and five 6Gbps SATA ports.

Memory: G.SKILL Ripjaws X Series – Similar memory to what was used last year, just not in a full kit, so pick up 4 of these.

Flash: Crucial MX200 – These were used instead of the Corsair Neutron drives I used last year, for no other reason than saving a few dollars to spend in other areas. The Neutron drives have been great for the last year; no problems to report so far. At $1250 you can pick up 3 of these if you like, or buy fewer and drop the price of your home lab.

HDD: Seagate Hybrid 1TB – I again opted for the hybrid drives for bulk/lower-tier storage. I run most of my lab off these drives, configured in RAID0. I opted for 2 of these.

NIC: Intel Dual-port 82575 – Because HCL, and I wanted the option to install clean on bare metal. If you go the VMware Workstation route, you could potentially skip this card unless you would like more ports to get fancy with. You could again lower your cost here by going with a used card, as I ended up doing last year with the HP7170 from Amazon.

Video: MSI ...whatever – This is here because the motherboard doesn't have onboard video. Buy a card based on your needs; I went cheap here because I don't use the box for any sort of gaming. If you'll have other uses, obviously look at your requirements.

Case: Corsair Air 540 – The case again gets into personal-preference territory. The Graphite 230T I used last year is perfectly capable. The Air 540 has 4 internal 2.5″ drive bays and 2 hot-swap drive bays to support the 3x SSD and 2x HDD drives.

Power Supply: Rosewill RD600-M – This is the new version of the power supply used last year, which has been stable for me even through a faulty UPS.

Preparing Ubuntu template virtual machines

Bob Plankers has a great post on preparing CentOS-based virtual machines for use as templates. As I've started working with Ubuntu more, I decided to take that list and Ubuntu-ize it (mostly from prodding by Sarah Zelechoski, one of the smartest people I've ever had the privilege to work with... so many thank-yous). Anyway, here is that guide... Ubuntu-ized.

Stop logging services (auditd and rsyslog):

service auditd stop
service rsyslog stop

Check for, and remove old kernels

Check your current kernel by running

uname -r

Then run

dpkg -l | grep linux-image-

If additional images are listed, remove them by running

apt-get autoremove linux-image-#.##.#

You can remove multiple images all on the same line just by listing them one after another.

Clean out apt-get

apt-get clean

Force the logs to rotate & remove old logs we don’t need

logrotate -f /etc/logrotate.conf
find /var/log -name "*.gz" -type f -delete

Truncate the audit logs (and other logs we want to keep placeholders for)

cat /dev/null > /var/log/audit/audit.log
cat /dev/null > /var/log/wtmp
cat /dev/null > /var/log/lastlog
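Those truncation steps can be wrapped in a small guard so the prep is safe to re-run even if a log is missing; demonstrated here on a scratch file rather than the real logs:

```shell
#!/bin/sh
# Empty a log file if it exists, keeping it as a placeholder;
# a missing file is skipped rather than created.
truncate_log() {
    [ ! -f "$1" ] || : > "$1"
}

scratch=$(mktemp)                    # stand-in for /var/log/wtmp and friends
echo "old entries" > "$scratch"
truncate_log "$scratch"
wc -c < "$scratch"                   # prints 0: contents gone, file kept
truncate_log /tmp/no-such-log-file   # no error, and nothing gets created
```

On the real template you would call truncate_log with /var/log/audit/audit.log, /var/log/wtmp, and /var/log/lastlog.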

Remove the udev persistent device rules

Well, we save a step here – there are rules which exclude creating files that match MAC addresses for VMware, Hyper-V, KVM, Xen, and VirtualBox (see /lib/udev/rules.d/75-persistent-net-generator.rules). As long as your MAC matches these, there is nothing to clean up. Otherwise:

rm -f /etc/udev/rules.d/70-persistent-net.rules

It will be recreated on the next boot, so any time you power on this VM (for updates, maybe?) you'll need to delete the file again so it is not saved in the template.

Remove the traces of the template MAC address and UUIDs.

Here is another step you shouldn't need to do; however, you may want to check /etc/network/interfaces to verify.

Clean /tmp out

rm -rf /tmp/*
rm -rf /var/tmp/*

Remove the SSH host keys

rm -rf /etc/ssh/*key*
rm -rf ~/.ssh/authorized_keys

Update network config

If you have set a static configuration in /etc/network/interfaces, make sure to reset it for cloning purposes. For example, as I wrote this the VM had a static IP address, which I changed to DHCP before shutting down and converting to a template.
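As a sketch, a reset-to-DHCP configuration might look like this; it is written to a scratch path here (the real target is /etc/network/interfaces), and the interface name eth0 is an assumption about your VM:

```shell
# Example DHCP-only interfaces file for a template.
cat > /tmp/interfaces.example <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF
cat /tmp/interfaces.example
```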

Remove hostname

If you have named your virtual machine anything other than localhost and want the template to spin up with a generic name (versus, say, "ubuntu-template"), remove the entry from /etc/hostname:

cat /dev/null > /etc/hostname

Remove the user’s shell history

If you have switched to root at any point, do this as root as well as for individual user accounts:

history -c
history -w

That should about do it. Depending on where this template is going, make sure any ISOs attached to the CD-ROM or networks for the NICs are adjusted properly. While many of the steps were the same as for CentOS, there were a few differences to be aware of. Anything else you like to clean out? Comment below, please!

Hands on with Microsoft Visual Studio Code @code

As a Windows user I have been looking for a good markdown tool to write in; however, most of the freely available tools have been mediocre at best. Enter Visual Studio Code, a (currently) free download from Microsoft that supports Windows, OS X, and Linux (OS X/open-source gearheads take notice: write software cross-platform!). You can download Code without any login.

Once downloaded, it is a pretty atypical install: no next, next, next, it just works! The UI takes a bit of poking around to get comfortable with, but after just a few minutes everything seemed to be working as expected.

Below you can see an example of some markdown syntax in Code.


The toolbar at the top of the image


allows you to change between split screen and single screen and, as I have done above, show a preview of what you are writing. This is just a quick hands-on; you can see how simple it is to get started. Now that I have found a tool that seems to work properly in Windows, my next step is to find a tool for markdown presentations that is also easy to use (in Windows, of course :) )