Quick and dirty GitHub for beginners – your first commit

I started my Ansible series really with the intention of getting to know Git/GitHub a bit better, but Ansible was so awesome I couldn’t put it down.  Now that I have built a couple of example playbooks, it’s time to “commit” those into GitHub.  For starters, we need Git installed on our system.  In my case I am doing everything from my Ansible server, though you may want to do this on another system.  Thanks to yum, the install is pretty easy:

yum install git

I had already run this on my Ansible server so I could “pull” the Ansible code.  Now I want to set up my username:

git config --global user.name "yourusername"

Next, register the email address for your GitHub account

git config --global user.email "youremail"

Now on to authentication.  It’s high time I stop using passwords for everything and set up SSH keys, so let’s do that.  To create the key pair, enter:

ssh-keygen -t rsa -C "[email protected]"
  • While logged in as root, the key will be saved to /root/.ssh/id_rsa
  • Next, enter a secure passphrase
  • With the key now created, log into GitHub and click on the gear icon (Settings) in the upper right corner
  • Click on SSH keys
  • Click the Add SSH key button
  • Provide a title such as “ansible VM” or whatever identifies the computer
  • Back in your terminal window type

less /root/.ssh/id_rsa.pub

  • Copy the contents of the file and paste it into the Key text box
  • Click the Add key button
  • Back in your terminal window type

ssh -T git@github.com

  • Accept the key from github.com
  • Enter the passphrase for your key
  • You should now be logged into GitHub with your keys!
Login for GitHub via SSH key


  • Now, switch to the folder where you are saving your Ansible playbooks
  • Type git init
  • Type git add .
  • Type git commit -m 'playbooks'
  • Find the URL for your Git repository (create one if you haven’t); make sure to click on SSH, not HTTPS, and copy that

  • Type git remote add origin git@github.com:user/repository.git
  • Type git remote -v
  • Type git push origin master, then enter the passphrase for your SSH key
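Pulled together, the steps above are just a handful of commands; this is a sketch, run from your playbooks folder, with the remote URL as a placeholder for your own repository’s SSH URL:

```shell
git init                                              # turn the current folder into a Git repository
git add .                                             # stage everything, including the .yml playbooks
git commit -m 'playbooks'                             # record the first commit
git remote add origin git@github.com:user/repository.git
git remote -v                                         # verify the remote is registered
git push origin master                                # push; enter your SSH key passphrase when prompted
```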

Now if I go to my repository, I can see my .yml files checked in!

GitHub files checked into repository


From now on, as we create or update our playbooks, we can check them into GitHub for safekeeping and sharing!

Hands on with VMTurbo Operations Manager

*Disclaimer – VMTurbo is a sponsor of this blog.  I was not asked to write this post, nor was it reviewed prior to being published.  The post simply represents my opinions as they relate to first-time use*

First I would like to thank the folks at VMTurbo for setting up NFR access and inviting me to their community site.  I have never used VMTurbo before; however, I have used vCenter Operations Manager and Veeam ONE.  VMTurbo comes as an appliance; I deployed mine in VMware Workstation, powered it on, and was able to connect right to the web UI.  Before I log in and get started, though, I want to give it the appropriate DNS server IP instead of what DHCP is providing.  Log into the console as ipsetup/ipsetup

  • Select Static Address Setup by using the tab key then pressing the space bar to select it
  • Tab to the IP address field and enter the appropriate information; repeat for all fields
  • Press the tab key until you get to OK then press the space bar again
  • Now, create a DNS entry in your forward and reverse lookup zone

You should now be able to navigate to the VMTurbo UI with the new IP address or FQDN.  The initial login is administrator/administrator (though as the UI suggests you probably want to change that).  Once logged in for the first time, you will run through an initial setup wizard.

  • First, provide your license key, which arrives as an XML file.  In my case, since I had the license key, I selected I have a license key and pasted the XML I received into the text box
  • Once the license key is accepted, click next
  • On the Target Configuration screen click Add
  • Here you choose the type of system you want to monitor; I have selected vCenter and entered my vCenter server information.  Once everything is entered, click the Save button
  • Add any additional systems you wish to monitor and click Next
  • Enter your email credentials if you have them and click Finish

Right away VMTurbo is able to look at resource utilization in the environment, as you can see from the charts below.

VMTurbo Dashboard


You can click through the various tabs in the UI to see different information; for example, on the Supply Chain tab I can see a map of my infrastructure.  Using the navigator I can click on the components in my environment and instantly get information about that resource.  For example, here I clicked on Storage and can see my vxprt-esxi01-gold-local datastore is at about 60% utilization.

VMTurbo Navigator


Beyond just monitoring, VMTurbo can also make recommendations about how to improve the environment.  For example, if you had an over-utilized host, it could generate recommendations on how to resolve the problem and take action on it.  I can also use VMTurbo to perform deployments; one thing I found interesting was that in addition to my existing vSphere templates, there were several already defined, such as Microsoft_IIS-small, which can help you determine the best location for a virtual machine and then use the template to deploy it.

I am really excited to watch VMTurbo in my lab over the next few days.  It was by far the simplest deployment of this type of system that I have ever done – within minutes it was monitoring my environment.  A lab may not be the best litmus test for VMTurbo, but given the ease of install and the fact it does not need any agents you may want to go ahead and see what it finds in a larger test or production environment.  I’ll keep you posted on what other cool things I find as I explore the many options in VMTurbo Operations Manager.

Smashed Honey Sweet Potatoes – Thanksgiving Side Dish Recipe

My cousin is hosting Thanksgiving this year, so I don’t have to cook turkey but was asked to bring a side dish – Smashed Honey Sweet Potatoes.  What you’ll need (to feed roughly 6-8 people or so):

  • 5 medium+ sweet potatoes
  • 4 bananas
  • Honey
  • Brown sugar
  • Cranberry granola mix
  • Olive oil

Pre-heat the oven to 400 degrees; while the oven is pre-heating, microwave two of the sweet potatoes until they are very soft (repeat until all 5 are mushy).  When the oven gets to 400 degrees, place the 4 bananas in the oven, still in the peel, and let them cook for about 10-12 minutes.

While the bananas are cooking, scoop the sweet potato out of the skin and place it into a large oven-safe bowl you can put back in the oven.  I started with a masher to get the sweet potatoes to the consistency I wanted, but because they are so thick I ended up using a knife to whip the sweet potatoes in the bowl – the whisk also kept getting “clogged.”  With the sweet potatoes whipped, remove the bananas from the oven; they should be very soft, almost melted.  Add them to the sweet potatoes and whip again until you cannot see the bananas.

Add 1/2 to 1 1/2 cups of brown sugar and 1/2 to 2 cups of honey, depending on how sweet you like it.  I started with honey, and if the honey taste was too strong, I added a little more brown sugar.  Mix well.

Place 2-3 cups (or more if you really like the mix) in a separate bowl and lightly coat with olive oil.  Now sprinkle in brown sugar and mix together by hand.  Crumble the mix over the sweet potatoes and bananas and bake for an additional 10-15 minutes or so.  The result..

Sweet potatoes even I'll eat


Filed under A page for aspiring VMware admins who can’t balance a checkbook, hate grocery shopping, don’t know whether to buy a condo or know how to use Twitter

 

Setting up a minimal CentOS7 VM with VMXNET3 NIC

Now that CentOS7 is out, time to make sure I can set up my virtual machines with the VMXNET3 vmnic.  As I documented in my previous post, CentOS 6.x using the VMXNET3 driver requires VMware Tools, VMware Tools needs Perl, and Perl is not included in the minimal ISO – so I need network access to get Perl to install VMware Tools to get network access.  That order of operations doesn’t work very well.

Also, as of CentOS7, VMware now recommends Open Virtual Machine Tools, so you would not be installing VMware Tools as I pointed out in my CentOS 6.x post on VMXNET3.  Good news, though: the VMXNET3 driver in CentOS7 does not need VMware Tools, but there are seemingly some new steps to get networking working.  So let’s get started; now obviously we don’t want to install the OS every single time you need it, so my assumption here is that the use case is for your initial template build.  With that assumption out of the way, I am going to create a new virtual machine in the vSphere Web Client with the following settings:

  • VM hardware version 10 (since now we can edit them in the C# client)
  • Guest OS Family – Linux
  • Guest OS Version – CentOS 4/5/6/7 (64-bit)
  • Virtual Hardware
    • 1 vCPU
    • 1GB memory
    • New hard disk – thin provisioned
    • 1x vmnic – VMXNET3 connected

Once the new machine is created, power it on, connect to the console, and install the OS as you normally would – notice when you get to the Installation Summary screen the network says Not Connected; click on it and you’ll see that it seems to recognize the VMXNET3 controller.  I am not going to set this adapter to “ON” right now; I am going to leave it “OFF” to show you how to bring it up on the command line.  Finish the install and reboot once completed.  Log in and run ifconfig…egads, command not found?  What, is Linux going all Microsoft on us and changing things for the sake of changing them?!  Well, if you tried to do a yum or ping anything right now you’d not have network access, as you might expect.  So where to go from here?

Well, it appears there is no /etc/udev/rules.d/70-persistent-net.rules file any more, so let’s have a look at /etc/sysconfig/network-scripts.  Hmm, where is my ifcfg-eth0 file?

centos7-ifcfg-file

That has been replaced now; notice the ifcfg-eno16777984 file – that is what we want (though I’m not sure where the numbering comes from) – open it in vi so we can have a look.  Yup, it looks just like the old ifcfg-eth0 file, so let’s get to work.  Change BOOTPROTO from dhcp to none and add the following entries with valid information for your network: IPADDR, NETMASK, GATEWAY, DNS1.  Here is what my file looks like now:

CentOS7 ifconfig file

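In case the screenshot is hard to read, the edited file ends up looking roughly like this; the interface name is from my VM, the addresses are example values you would replace with your own, and other generated lines (UUID and so on) are left out here:

```
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eno16777984
ONBOOT=no
IPADDR=192.168.6.50
NETMASK=255.255.255.0
GATEWAY=192.168.6.1
DNS1=192.168.6.2
```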

Now that you are all set, [esc] :wq [enter] to save it, and service network restart – now ping 8.8.8.8…wait, what – I STILL can’t ping?  What is wrong?  Apparently in CentOS7 restarting the network service is not enough; we need to bring the actual interface up.  If you do an ls in /etc/sysconfig/network-scripts you’ll notice the ifup command – it has always been there, I had just never used it before, but this is what you’ll use to bring up your interface, something like

ifup ifcfg-eno16777984

Now, here you can see our network is up

centos7-ping-vmxnet3

But… this will not be persistent over network service or virtual machine restarts, so you’ll also need to edit the ifcfg-eno######## file and change ONBOOT to yes; now you can restart networking or your virtual machine and maintain networking.

More information about the changes can be found in the CentOS7 FAQ.


Useful yum options I learned this week

Always trying to learn new things – that’s what it is all about, right?  In working with Application Services and Ansible over the last few weeks there are a few things I learned about yum I did not know before.  First, an easy one that will show how much of a Linux noob I am: to install multiple packages, I knew I could string commands together with an && like

yum install python && yum install python-setuptools

But I knew there was likely a better way… which is to just list each package one after another, with no separators needed.  For example, to install Python and python-setuptools simply run

yum install python python-setuptools

You can see here yum went out and searched for both; I already happened to have them on my system in this case, but if not, the command would have installed them both

yum install python python-setuptools


Another excellent option I learned was the provides option.  For example, something new to me was that in CentOS7 minimal, ifconfig is not included; however, there is no ifconfig package.  In order to find the package that supplies it I ran

yum provides ifconfig

Now that I knew which package I needed, I could simply run

yum install net-tools

That’s it for now.  Do you have any tips for using yum beyond the basics of doing an install?

Adding Stand Alone Hyper-V – vRealize Automation Series Part 16

A question came up on Twitter recently about how to add a stand-alone Hyper-V server as an endpoint.  What I can gather from the documentation is that you need to have an agent deployed for Hyper-V, but the directions were otherwise unclear, so this is an attempt to document the required steps.  First, the assumption is you have a Windows Server with at least the Hyper-V role available (if you are running it as a virtual machine, make sure the OS type is set to Hyper-V).

First, we need to install an agent for Hyper-V.  For my lab I am doing this on my existing IaaS server

  • Log into the IaaS server as your vCAC / vRA service account
  • Download and run the IaaS installer from http://vcacappurl:5480/i as administrator
  • When you get to the Installation Type page; select Proxy Agents
vRealize Automation / vCloud Automation Center add agent for Hyper-V


  • Follow the wizard until you get to the Install Proxy Agent screen; enter the following information
    • Agent type:  HyperV
    • Agent Name:  HyperV
    • Manager / Model Manager Host Name:  Your IaaS server if you followed along in my vRA Home Lab series
    • Enter an administrative user for your HyperV server, then click the Add and Next buttons
vRealize Automation / vCloud Automation Center Hyper-V agent configuration


  • Click the install button and wait for the installation to finish

Once the installation finishes, we now need to add the agent to vRA.  To do this, log into vRA as the iaasadmin user and perform the following:

  • Navigate to Infrastructure >> Endpoints >> Agents
  • Provide the FQDN for the Compute resource – your HyperV server – select the HyperV agent from the pull-down, and click OK
  • Navigate to Infrastructure >> Compute Resources >> Compute Resources; you should see your Hyper-V server listed as “OK”
vRealize Automation / vCloud Automation Center Hyper-V compute resource


If you click on the small magnifying glass icon, you should see information about your server – I can confirm these are the specs for my Hyper-V virtual machine:

vRealize Automation / vCloud Automation Center Hyper-V resources


  • Hover over your Hyper-V server and select New Reservation
  • Fill in the reservation information and click the OK button; you should now see your Hyper-V server listed

This is just adding Hyper-V as a compute resource; I assume we will still need virtual machines in Hyper-V (or templates?) to create blueprints from, and entitlements for that blueprint – I’ll try to get on that ASAP.

 

Ansible Playbooks – A more advanced example

In my last post, I showed you a simple example of an Ansible playbook using yum to update a package.  Still really awesome, especially when you consider how often you might need to do that and how simple it is to handle that type of otherwise manual task.  In this post, I am going to try and put together a slightly more complicated playbook to look at some of the other options available.

**Please note I am not necessarily trying to reflect best practice here, but rather demonstrate some of the different options available in an Ansible playbook.  There is likely an easier way to do this for production purposes**

The simple playbook looks something like this:

---
- hosts: db
  remote_user: root
  tasks:
  - name: update mysql-libs-latest 
    yum: pkg=mysql-libs state=latest

So what else can we do?  Let’s start at the top.  First and foremost, you are probably not going to log into your systems as root; typically you log in with a user account and then use “sudo” to elevate your privileges.  I have created an account on my “db” server called virtxpert and added that user to the sudoers file.  Now I can do something like this in my playbook:

---
- hosts: db
  remote_user: virtxpert
  sudo: yes
  tasks:  
  - name: update mysql-libs-latest 
    yum: pkg=mysql-libs state=latest

Since, as I mentioned in the last post, playbooks take an idempotent stance, I can run that playbook again with the remote user being virtxpert and see no changes, because my single task was already complete.  With sudo now a requirement, I also need to add --ask-sudo-pass to the command.

Ansible playbook with sudo


As you can see here, everything “worked” as expected – there were no changes that needed to be made.  Now, instead of just updating a package, let’s install something generally a bit more complicated – PostgreSQL.  I could just do yum: pkg=postgresql state=latest, and in CentOS 6.4 that would be Postgres 8.4 – but what if I want 9.x?  My playbook would look something like:

---
- hosts: db
  remote_user: virtxpert
  sudo: yes
  tasks:  
  - name: install postgresql93 repo
    yum: name=http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm state=present

And just like that, the playbook runs and I now have the PostgreSQL 9.3 repository available to install from!


Ansible Playbooks – A simple example to get started

With Ansible installed and a basic inventory file created, we can now move beyond ad-hoc tasks (which, by the way, are still a great use case for Ansible) and take advantage of playbooks.  Playbooks are a set of commands organized as required to perform complicated tasks.  Maybe you have provisioned dozens or hundreds of new virtual machines and you now need to make sure they are in the desired state – standardized versions of OpenSSL or MySQL, for example – and then deploy your custom software packages to those servers; that (simplistically) is where playbooks come in.

Ansible playbooks are written in YAML format, generally considered a human-readable syntax compared to, say, HTML or other markup languages.  You can read more about YAML on Ansible’s site.  Let’s take a basic use case here (time to move on from using ping).  I have a server installed with its included version of mysql-libs – in my case CentOS 6.4 with mysql-libs 5.1.66-2.el6_3 – but my requirements state I need to always be on the latest version of this package, currently 5.1.73

Checking mysql-libs version before running Ansible playbook


On a single server, maybe even two or 3, I might just ssh or clusterssh to those servers and run the update command manually; again, I’d probably suggest against that, but as you get into 10s or 100s of systems that is not sustainable.  Enter Ansible playbooks.  Now, I did not see (or missed) where the standard location for playbooks is, so I just created a playbooks folder in my Ansible folder and named my file update-mysql-libs.yml.  First we need to define the hosts and the user account the playbook will run as.  Recall from my last posts I have two groups in my inventory file: web and db.  In my case my playbook might start off like:

---
- hosts: db
  remote_user: root

This will instruct the playbook to run on all hosts in my db group from my inventory file.  remote_user, as you might expect, is the user to run the commands as.  Now we need a task or list of tasks to run; in my case something like this will do:

tasks:
  - name: update mysql-libs-latest
    yum: pkg=mysql-libs state=latest

Neat thing about tasks: they only run if they need to.  If mysql-libs is already the latest version, why update it again?  Ansible will check whatever you are doing to see if it needs to be done, and if it does, will bring it to the desired state.  Pretty simple example so far; here is what it looks like all together:

---
- hosts: db
  remote_user: root
  tasks:
  - name: update mysql-libs-latest
    yum: pkg=mysql-libs state=latest


Now, to run my playbook I use ansible-playbook instead of just ansible, for example:

ansible-playbook /ansible/playbooks/update-mysql-libs.yml

Oops, looks like I forgot to mention something.  Ansible prefers the use of SSH keys, not passwords, to access systems, so running the above command gave me the following error:

Example failed Ansible playbook


Guess it’s finally time to start using keys; I guess that will be another post for all my Windows friends :)  I can, however, run this as is and have it prompt for a password; I just need to add the --ask-pass flag as we did in the previous posts, so something like:

ansible-playbook /ansible/playbooks/update-mysql-libs.yml --ask-pass

As you can see, Ansible prompted me for the SSH password, found the hosts from my [db] group in my inventory file, and shows that it changed 1 item.  Let’s go check the version on our DB server (.137)

Ansible playbook successfully run


 

Yup, mysql-libs has been updated as expected!

Ansible remote host updated via playbook


This was a simple playbook with 1 item in it, but playbooks can be much more complex.  In a future post I hope to look at a more advanced playbook file; for now I need to go figure out the whole SSH key thing.  Much of this post was summarized from Ansible’s official documentation to provide a simpler example of what is possible if you are just getting started with Ansible.  Please check out their official documentation on Playbooks to see everything that is possible.

Ansible Inventory Files

In my last post on Ansible, the installation documentation walked us through a simple example of how to issue a command on a host by putting 127.0.0.1 in the inventory file.  Now, as you know, 127.0.0.1 is the server itself; the real power of an automation tool is working on multiple systems.  You can manage which systems Ansible runs commands or playbooks on (more on playbooks in future posts) by putting them in an inventory file – and what’s really cool, Ansible does this all agentless!

If you look at your inventory file (/etc/ansible/hosts) you can see just the single IP address in there like this:

Ansible simple hosts file

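For reference, the inventory file at this point contains just the single loopback address:

```
127.0.0.1
```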

Handy, but again I want to be able to perform operations on multiple systems.  I set up another Linux system in my lab; in this case it has an IP address of 192.168.6.137, so all I need to do is add that host to my inventory file:

echo "192.168.6.137" >> /etc/ansible/hosts

Now the inventory file should look like this

Ansible hosts file with two hosts / IP

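Again for reference, /etc/ansible/hosts now simply lists both systems, one per line:

```
127.0.0.1
192.168.6.137
```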

Just as a quick test (and, for the sake of sshpass, to add the server’s key to our list of known hosts), I SSH to 192.168.6.137 – yup, it works.  Let’s try the same example from the installation post:

Ansible ping multiple hosts


Just like that, we can now perform tasks on multiple hosts; very cool.  But I suspect many people are managing more than two hosts (though even with just two I’d highly suggest Ansible), or even if there are two you may have a different purpose for each host – say a web and a database server.  You may not want to install PostgreSQL or MySQL on your web server, but with the above scenario you’d be running your Ansible commands on all of the hosts in your inventory file – would you edit your inventory file every time you needed to do something?  That kind of defeats the point of automation.  Have no fear, we can logically group items in the inventory file.  Using the previous example, let’s call the Ansible server our “web” server and .137 our “db” server; your inventory file would look something like this:

Ansible hosts file with groups

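Reconstructed from the description above – the Ansible server itself as “web” and .137 as “db” – the grouped inventory file looks something like this:

```
[web]
127.0.0.1

[db]
192.168.6.137
```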

Now, if we issue the following command what do you think will happen?

ansible db -m ping --ask-pass

If you guessed that it would only run the command on 192.168.6.137 you would be correct!

Ansible ping only DB group


One last thing I will show you here on inventory files; you can also do things like match names or numbers.  For example, say you have a range of IP addresses you use; rather than listing each IP address you could do something like

192.168.6.[1:254]

This would include every IP address in the 192.168.6.x range.  Taking that bit of knowledge, I have this example below:

Ansible Inventory file match pattern

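Pieced together from the command and results that follow, the matching-pattern group in that screenshot amounts to something like:

```
[192-sub]
192.168.6.[136:138]
```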

Going back to our ping command example, what would happen if I run

ansible 192-sub -m ping --ask-pass

Yup, it would attempt to run on each of the 3 IP addresses in my range – 192.168.6.136, .137 and .138.  Since in my case only one of those hosts is alive, I have one success and two failures!

Ansible ping results


There is actually much more you can do with inventory files; check out the Ansible documentation on this, as it’s well written and informative.  While this is getting more awesomer by the moment (yes, I said more awesomer), setting up playbooks continues the awesomeness!

Installing Ansible via Git

I’ll admit, Git is a completely foreign language to me, but it is something I am going to need to learn.  In an attempt to do that, I am going to take something I sort of know how to do manually-ish – install Ansible – but this time install it via git.  Once this is wrapped up, I am going to catch up on Matthew Brender’s Git #vBrownBag, which he did as part of the DevOps series.  Hopefully this post, if nothing else, can help you get Ansible set up and running.  Now, Ansible is fairly well documented, but like most open source projects I find they assume a bit too much in their documentation to get you completely up and running.  For example, you might have trouble getting everything needed to install by following their directions.

To start, you’ll need a Linux box to use; I have a CentOS 6.4 setup already, so I will be using that.  To get started we will need git and Python on our Linux box – pretty easy to do (when prompted, press y to install the various packages needed):

yum install git

Ansible also requires Python, so now to install Python

yum install python python-setuptools

Almost there, just a few more things Ansible needs to get up and running.  Since python-setuptools is installed, you can set up pip using a Python tool called easy_install, then use pip to install the others

sudo easy_install pip

Now that pip is installed (there is no yum package I could find for it), use it to install Ansible’s remaining Python dependencies

sudo pip install paramiko PyYAML Jinja2 httplib2

Now that Python and its modules are installed, we can move on to installing Ansible.  This is where we will start to use git; from / run

git clone git://github.com/ansible/ansible.git --recursive

With Ansible now cloned, it’s time to set up the environment, which Ansible was nice enough to provide a script for; cd to /ansible and run:

source ./hacking/env-setup

By default, Ansible will look for an inventory file in /etc/ansible/hosts, but the clone process and env-setup do not create that for us, so:

mkdir /etc/ansible && touch /etc/ansible/hosts && echo "127.0.0.1" > /etc/ansible/hosts

Now we have an inventory file, which is where we will store all the systems we want to manage with Ansible, and we have added our localhost IP to it.  Next up, you should be able to run a command; the documentation suggests

ansible all -m ping --ask-pass

But as you can see, we are still missing something: a package called sshpass

ansible-almost

We need just a couple more packages to complete this test example, assuming you are on a clean install of CentOS

yum install wget
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/sshpass-1.05-1.el6.x86_64.rpm
rpm -Uvh sshpass-1.05-1.el6.x86_64.rpm

Now…

ssh -l root 127.0.0.1

Accept the host key, enter the password for your root user, and log out

NOW…..we have success

Ansible working test


Now that we have Ansible set up and working, it’s time to go back and review the #vBrownBag Jeff Geerling (@geerlingguy) did on it to take it to the next level.  My next goal is to create a simple inventory file to perform tasks on multiple systems.