VMware vCenter on Windows 2012 Failover Cluster

Some time around the release of vSphere 5.5 (Update 2, maybe?), VMware officially(?) didn't not support vCenter on a Windows Failover Cluster. I say "didn't not support" because there still seems to be very limited documentation and few KBs on how to do this. The VMware vCenter Server Availability Guide documents the available options, such as using vSphere HA for vCenter availability, but also how to install vCenter on a Windows Failover Cluster and configure the services appropriately, since the application itself is not otherwise cluster aware in the way that, say, SQL Server is when installed on a failover cluster.

If you have done a failover cluster on Windows before, be aware that this process is a bit different, so don't just dive in as I did. Here is what my environment looks like:

  • SSO has been already deployed and working
  • A management vCenter is running; you will need this or some other means to clone the first virtual machine after installation

So wait, why are you clustering vCenter if there is already a vCenter, you ask? Many reasons, but primarily because the availability of our management vCenter is less of a concern. The clustered vCenter is being deployed to support vRealize Automation, so end users will rely on this vCenter to be able to request catalog items. Availability was more of a concern for this purpose than for strict management.

  • Start with only a single Windows 2012 R2 64-bit virtual machine (not 2), as you will later clone this virtual machine to act as the 2nd node
  • I placed the original and the clone on two separate physical hosts
  • Each virtual machine has a single 60GB (C) drive for the OS
  • 2 additional volumes will be added, which in my case are XtremIO volumes presented as physical RDMs; this should also work using in-guest iSCSI, for example
  • 1 of the 2 additional volumes is a 60GB (D) drive on which vCenter will be installed; the other is a quorum disk for the failover cluster
  • Each virtual machine has two NICs – one for production/client access, the other for cluster communication
  • The Windows Failover Cluster will have an IP address, as will the vCenter Service role which you will create; in total that is 6 IP addresses
  • An AD account was created for the vCenter services, added to the local administrators group, and given permission on the SQL server as required

A few notes before I review the process:

  • If you are using RDMs, make sure you read this KB to mark the RDMs as perennially reserved, otherwise storage rescans and boot times will be drastically affected (my hosts were taking roughly an hour to boot)
  • The directions have you install the vCenter Web Client, Inventory Service, and vCenter services to the D drive. There is a known bug that causes the Web Client to not function properly when installed to a non-default location (though it seems more that it doesn't work when not installed to the C drive). You'll need this KB article, which walks you through creating a symbolic link; after implementing this, the Web Client operated as expected. Also, once installation is complete and working on the primary node, you'll need to fail over to the secondary node to create the symlink (well, at least I did – would it let you create a symlink to a drive that didn't exist? Hmmm)
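For reference, the perennially-reserved setting from that first KB can be scripted instead of set by hand. A PowerCLI sketch only – the host name and naa ID below are placeholders; substitute the device IDs of your RDM LUNs and repeat for each RDM on each host in the cluster:

```powershell
# Sketch: mark an RDM LUN perennially reserved on one host (placeholder names/IDs)
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2
$cfgArgs = $esxcli.storage.core.device.setconfig.CreateArgs()
$cfgArgs.device = "naa.xxxxxxxxxxxxxxxx"   # placeholder RDM device ID
$cfgArgs.perenniallyreserved = $true
$esxcli.storage.core.device.setconfig.Invoke($cfgArgs)
```

The same setting can be applied directly in the ESXi shell with `esxcli storage core device setconfig`; either way, it is per host, per device.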

So, with that out of the way, there are a few things to define before you bring up your first virtual machine – specifically the names and IP addresses of both virtual machines, the Windows cluster, and the vCenter cluster. For example:

  • vCenter Cluster – VC2 – 192.168.1.100
  • Windows Cluster – VC2Win – 192.168.1.99
  • Primary vCenter Node – VC2-1 – 192.168.1.101
  • Secondary vCenter Node – VC2-2 – 192.168.1.102


This is important, and I misinterpreted this step the first time I did this: when you create the first virtual machine, give it the name and IP address of what will ultimately be the vCenter cluster – using the example above, you would name the computer VC2, give it the IP address 192.168.1.100, and join it to your domain. After the initial install this will be changed.

Create the virtual machine with 2 NICs and the RDMs. Mount one of the RDMs as D and one as whatever letter makes you happy – for my OCD that would be Q, for quorum. Create your system DSN as you normally would, log in as your vCenter service account, and perform a custom installation (not simple), installing each of the components to the D drive. During the installation process, note that the name being registered with SSO is the name that will ultimately be the vCenter cluster.
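If you prefer not to click through the ODBC control panel, the system DSN can also be created with PowerShell. A sketch only – the DSN, server, and database names here are assumptions for this example:

```powershell
# Create a 64-bit system DSN for the vCenter database (placeholder names)
Add-OdbcDsn -Name "vCenterDSN" -DriverName "SQL Server Native Client 11.0" `
    -DsnType "System" -Platform "64-bit" `
    -SetPropertyValue @("Server=sqlcluster.lab.local", "Database=VCDB", "Trusted_Connection=Yes")
```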

Before removing the RDMs, make sure to note their original file name, volume ID, and SCSI controller; they need to be added back in the same order.

These steps are pretty straightforward in the guide: change all of the vCenter services to manual, shut down the virtual machine, remove the RDMs, and make a clone of the virtual machine. One item that was not clear was when to re-add the RDMs; I chose to play it safe and kept them out of the virtual machines for now. Once the clone is complete, power on the cloned virtual machine and give it the secondary vCenter node hostname and IP address. Power on the original virtual machine, unjoin it from the domain, give it the primary vCenter node hostname and IP address, and rejoin the domain. Now you can power off the virtual machines and re-add the RDMs to the primary node, then the secondary, as you typically would, making sure the SCSI controller is set to physical sharing.
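The unjoin/rename/rejoin dance above can also be scripted. A rough PowerShell sketch for the original virtual machine becoming the primary node – the domain name is a placeholder, and each step reboots the machine, so run them one at a time:

```powershell
# Drop off the domain (reboots when done)
Remove-Computer -UnjoinDomainCredential (Get-Credential) -Force -Restart
# After the reboot: rename to the primary node name from the example
Rename-Computer -NewName "VC2-1" -Restart
# After that reboot: rejoin the domain (placeholder domain name)
Add-Computer -DomainName "lab.local" -Credential (Get-Credential) -Restart
```

The IP change itself is not covered here; set it in the NIC properties (or with `New-NetIPAddress`) before rejoining.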

Power on the virtual machines and install the Failover Clustering feature on each. Once complete, create a new cluster on the primary node – during creation you will be asked for a cluster name and IP address; use the Windows cluster name (VC2Win) from the example above – this is NOT the vCenter cluster name and IP address which you used on the initial virtual machine during installation. Unlike in the SQL post I wrote, you can add all available cluster storage, as both additional drives are used by the cluster (D – App, Q – Quorum). Now that the cluster has been created, you should have an AD object called VC2Win. Using option #2 from this MSDN blog post, create your vCenter cluster AD object. Failing to do this will cause the role to fail when you attempt to start it.
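If you prefer PowerShell over Failover Cluster Manager for this step, the feature install and cluster creation look roughly like this, using the names and IP from the example above:

```powershell
# On each node: add the failover clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# From the primary node: create the WINDOWS cluster (not the vCenter cluster name)
New-Cluster -Name "VC2Win" -Node "VC2-1", "VC2-2" -StaticAddress "192.168.1.99"

# Add the D (app) and Q (quorum) disks to cluster storage
Get-ClusterAvailableDisk | Add-ClusterDisk
```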

The rest of the steps for creating the vCenter cluster role are well documented, with one caveat, so rather than copy/paste them here: finish reading the VMware vCenter Server Availability Guide. The caveat: because your vCenter services were set to manual, and thus not started after the reboots, when you create the initial vCenter role service it will come up as failed – which made me go ZOMG, not again! This message is actually just the status of the clustered service, which is stopped, and thus failed from a Windows Failover Cluster perspective – it is okay to proceed with creating the remaining services and setting the dependencies.
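As a hedged sketch of what the guide walks you through: each vCenter service is added as a generic service under a role carrying the vCenter cluster name and IP. The service name shown below (vpxd, the VirtualCenter Server service) and the resource names are assumptions for illustration – check your installed service names and the guide's dependency order before relying on this:

```powershell
# Create the role with the first clustered service (vCenter cluster name/IP from the example)
Add-ClusterGenericServiceRole -ServiceName "vpxd" -Name "VC2" -StaticAddress "192.168.1.100"

# Remaining services are added as Generic Service resources in the same role
Add-ClusterResource -Name "vCenter Inventory Service" -ResourceType "Generic Service" -Group "VC2"

# Dependencies between the services are then set per the guide, e.g.:
Set-ClusterResourceDependency -Resource "vpxd" -Dependency "[vCenter Inventory Service]"
```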

At this point, you should be able to start the cluster and have all services come up.

vCenter services on Windows Failover Cluster

Once it is up, access the Web Client and set permissions as required. For example, as you can see in this screenshot, here are both vCenters in the Web Client, since my account was given the appropriate permissions to both.

vCenter on a Windows Failover Cluster

The last item I had to tackle was automating the backup, copy, and restore of the ADAM database. There are a lot of words in the doc which basically say: xcopy the backup to the correct location. The document talks about stopping/starting services before placing the file, but if the services aren't running on VC2-2, I should just be able to drop it in. Then, when the services start, there is an up-to-date file which will get loaded.

So, quick and dirty, something like…

:: clear the previous backup
del d:\backup\*.* /Q
:: export the ADAM (VMwareVCMSDS) instance to d:\backup
%windir%\system32\dsdbutil.exe "ac i VMwareVCMSDS" ifm "create full D:\backup" q q
:: copy the backup to the passive node
xcopy /osy d:\backup\adamntds.dit "\\VC2-2\C$\ProgramData\VMware\VMware VirtualCenter\VMwareVCMSDS"
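To actually automate it, that quick-and-dirty script can be dropped into a .cmd file and run on a schedule. A sketch, assuming the script lives at C:\scripts\adam-backup.cmd (a path I made up for this example):

```powershell
# Register a daily 2 AM task on the active node to run the ADAM backup script
$action  = New-ScheduledTaskAction -Execute "C:\scripts\adam-backup.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "ADAM Backup" -Action $action -Trigger $trigger -User "SYSTEM"
```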

Thinking of trying Ansible Tower? Save 20% on the Starter Kit Annual Subscription

Thought it was too good not to share: if you are thinking of trying Ansible Tower, you can save 20% off the annual subscription with the Ansible Tower Starter Kit – that's almost $200 off the $999 annual subscription. If you were thinking of trying it out and paying the $99 monthly term ($1,188/year), that's almost a $400 savings.

The Ansible Tower Starter Kit is good for up to 100 hosts – VMs or physical boxes. Ansible is agentless; you need only be able to SSH to your Linux hosts, or connect via PowerShell remoting for Windows.

You can check out my Ansible posts and the #vBrownBag DevOps sessions to learn more. Use this link (http://ansible.refr.cc/9KG5XP3) to get the code for 20% off during checkout.

Configure Windows 2012 Failover Cluster for SQL 2014

Working on building out a lab that is going to be used to demonstrate setting up a vCenter environment. We were fortunate enough to be given some time to set it up "right" – meaning set up a SQL cluster for vCenter, and SSO in HA behind a load balancer with valid certificates. I drew the SQL straw, and it's the first time I have set up SQL clustering. I had to pull from a few different resources – none were completely what I was trying to do, but thank you to Derek Seaman's blog and the MSDN blogs for being able to answer questions when they came up. You can find more information on Windows Failover Clustering on vSphere 5.5 here (nope, not on 6 yet). An overview of our setup:

  • Two Windows 2012 R2 virtual machines on separate hosts; SQL1 and SQL2
  • Each virtual machine has two NICs; one for production/client access, the other for cluster communication.
  • Each virtual machine has 2 drives; 60GB “C” for OS and 20GB “D” for SQL installation
  • 3 XtremIO drives presented via VPLEX
  • AD accounts for SQL and SQL Agent were created in AD
  • IP addresses for each of the SQL virtual machines, the Windows cluster, and the SQL cluster; for this setup that is 4 total.

Windows was installed, patched, and joined to the domain. On each virtual machine I ensured that Ethernet0 was first in the binding order and used for "production." NIC1 would be used for cluster communication. Ensure RSS is not enabled on the NICs.
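Checking and disabling RSS can be done from PowerShell on each node; a sketch, where the adapter name is a placeholder for whatever your NICs are actually called:

```powershell
# Show the current RSS state, then disable it (placeholder adapter name)
Get-NetAdapterRss -Name "Ethernet0"
Disable-NetAdapterRss -Name "Ethernet0"
```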


Synology with an Awesome DSM Update – 5.2 Beta with Docker Support

Synology has released DSM update 5.2 Beta with, of all things, support for Docker! Yet another reason this makes for an awesome storage solution.

The official press release:

=== Press Release 3.12.15 ===

Synology® Releases DiskStation Manager 5.2 Beta

Achieve more with your own cloud

Bellevue, WA—March 10, 2015—Synology America Corp., today announced the beta release of DiskStation Manager (DSM) 5.2, the latest version of its award-winning NAS operating system. Coming with both exciting new functionalities and under-the-hood changes, DSM 5.2 is built to make managing a private cloud even more effective, secure, and intuitive.

“Several state-of-the-art technologies have emerged in recent years. With DSM 5.2, we want to extend their benefits to all our users,” said Derren Lu, CEO of Synology Inc. “More than perfecting the operating system with better performance and reliability, we also want to bring new ways of thinking to some of its core aspects, such as data management and application serving.”

New features in DSM 5.2 include:

– Streamlined application deployment and higher productivity: The integration of Docker allows developers to ship, and users to run, a vast number of applications on Synology NAS with minimal time and resources needed. SSO Server lets users gain access to all services in the same domain with only one single log-in. File Station now can be connected with public clouds so you can browse and manage files in them without taking up bandwidth.

– Faster, more customizable cloud syncing with encryption: The smart polling technology dramatically increases Cloud Station's performance while at the same time reducing network traffic and server overhead. The number of file versions is also customizable for each shared folder. When syncing with public clouds, besides being compatible with more Amazon S3 and WebDAV storage, you can now set up one-way sync – and even encrypt the data before uploading it to public clouds – immediately turning them into offsite backup destinations without sacrificing data privacy.

– File-based backup restoration and smart version control: The new file browsing feature allows you to restore a single file, instead of an entire shared folder, from past backups. You can also automatically rotate old backup data or iSCSI LUN snapshots when a designated quota is reached, or even benefit from the Smart Recycle policy. The latter helps you minimize storage consumption while still maintaining enough flexibility for point-in-time recovery.

– Portable task manager and refined multimedia experience: With the all-new Synology Web Clipper and the to-do-list-turned-task-manager, Note Station and DS note let you capture the best of the web into your pocket notebook, and then organize action points according to your schedule. Download Station's new "preview" feature gives you a sneak peek of what's being downloaded, saving you time and bandwidth from misleading torrents. Cue file support in Audio Station means you can switch between high-quality CD tracks at will. Want to display photos on a big-screen TV? Now, besides Apple TV, the support of Chromecast and DLNA TVs in DS photo also makes this possible.

– AppArmor and SMB 3 encryption to strengthen security further: AppArmor extends its reach to profiles for packages to effectively restrict malicious software from accessing unauthorized system resources. The support of SMB 3 encryption enables Synology NAS to secure file transfers to Windows 8 and Windows Server 2012, reducing the possibility of tampering and eavesdropping when data moves across a company’s network.

Synology DSM 5.2 Beta Program

Synology will hand out a DS214se to each of the three beta testers who provide the most valuable assistance and feedback. Please visit www.synology.com/en-us/dsm/5.2beta/ for more details.


Visit the live demo site at www.synology.com/products/dsm_livedemo to try out new features.

My favorite part of being an EMC customer, they took the time to get to know me

**Disclaimer: After being a customer in my last role, I now work for EMC. This post is purely my opinion and was not requested, read, or approved by my employer**

It was about a year and a half ago, while I was still on the customer side of the world, that I wrote a post about how impressed I was that my EMC sales team didn't say a word about a competitor during a sales meeting when I walked in wearing a shirt that basically said "No SAN." Rather than spending any time talking negatively (or at all) about a competitor, they laughed, said "nice shirt," and proceeded to show me how their solution was positioned to support my workload.

What I didn't mention in that post is that the EMC sales team was able to do this because they spent a good deal of time with us, asking questions about our application, our environment, even our sales and new-customer outlook for the coming year. See, they wanted to find a solution that supported our application and all of its requirements. During a several-week period in which I talked with many vendors, EMC was the one most interested in getting details about our current environment and our thoughts on how it was going to expand. It wasn't until they had collected data from our existing environment, worked through scenarios with us that detailed our expected growth, and analyzed that data that they suggested a solution for us.

Even after that, I was sure that there was some new technology out there that could better fit our needs. I must have talked with 15 different vendors – from hyperconverged, to hybrid arrays, to all-flash arrays. Do you know how many of those vendors ever asked to view details about our existing environment? Or how many of the vendors asked about our expected changes or growth? The answer – zero, zilch, nada. I didn't provide a bit of data about our application or how it was operating. At most I was asked about IOPS, which all of them assured me they could handle. And they very well might have. But there was only one vendor I could truly be certain was suggesting a solution that would fit the demands of our application.

Next time you are meeting with a vendor, and they brush over application requirements, current trends in use, or expected growth, ask yourself if that is really the best way to go about supporting your organization. Sure someone can drop a new widget into your network in 2 weeks, but what if it ends up not supporting your application workloads the way you expected? Personally, I’d rather take a few extra weeks to really analyze what’s going on.

Getting to the next level and have a life

Duncan Epping recently posted an article called "How do I get to the next level," which was an interesting read that I almost didn't do. See, in the beginning of the article he stated:

If you can’t be bothered freeing up time, or have a too busy family schedule don’t even bother reading past this point.

Typically I'd stop there, because for me nothing is more important than family and dedicating time to be with my daughter, wife, extended family, and friends. I'm not interested in reading an opinion that dismisses family schedule as a consideration, because for me it is one. While I am nowhere near Duncan's status or skill, I think there are things everyone can do, regardless of family schedule, to help you grow personally and professionally.

First and foremost, grab a piece of paper and a pencil and scribble down where you would like your career to go in the next few years. For example, maybe you would like to be in an SE role or a Technical Marketing type role. I say write it with pencil and paper because the path you set today is going to take a few side roads and detours along the way. For example, a friend of mine has talked to me recently about wanting to be in a technical marketing role, yet other interesting opportunities may present themselves that will help get to that position. Personally, I knew I wanted to be involved in virtualization back in 2004 when Microsoft released Virtual Server – I could instantly see the benefits of virtualization and made an effort to get hands-on with that, or similar technology, in my future roles. It wasn't until 2008-2009 that I did my first VMware deployment, but I had done plenty of Virtual Server and Virtual Iron deployments prior to that. During that time I weaved through a stint as an IT Director for two organizations. When I realized I wanted to be more hands-on, I pivoted out of the management roles and back to a hands-on one. Even at that, I never thought I'd end up working in an education department, but given my personal values it totally made sense when the opportunity presented itself (by the way, for me it's family, friends, health, education, work – in that order).

Once you think you have an idea, reach out to people in the community who are at companies or in positions you think you'd like to be in. Benefit from their experiences and how they got there. No one else's path will be quite like yours; I talked to people about SE roles at VARs – a position I thought I wanted – and realized it wasn't going to be for me given the travel (I have no desire to be away from my family for any extended period of time on a regular basis).

Where I think Duncan is spot on: contribute to the community. Whether that is via vendor forums such as the VMware Communities site or independent forums such as Spiceworks, you'll make great connections and, if you follow Duncan's advice, learn a few things. Take on the difficult questions, go research problems, read documentation that may take you out of your comfort zone. I also often hear from people that they don't have time to blog. My response to that is: shenanigans. As a technologist, you should be documenting what you are working on anyway – why can't that be a blog post? Obviously you do not want to share confidential information, but even if you are setting up an XtremIO, why does that have to be associated with a particular customer or business? Document the steps you took to set it up, grab screenshots prior to entering confidential information such as domain names or IP ranges, and not only do you now have a blog post, you have documentation for your organization – two birds, one stone. This process has really allowed me to create content for my blog while documenting what I am working on to support my employer.

So, how do you do this while maintaining commitments to friends and family? You need to put some effort into managing and planning your life. Look at all the things you do on a regular basis – are they worth it? For example, I used to play in a basketball league on Saturday mornings; before kids, the hour-long ride to the gym and back was no big deal, but now losing half a day to play basketball isn't an option. The goal here is maintaining your life – since playing basketball was a key way for me to get exercise, it seems a bit counterproductive to drop it. The key is to replace activities with more efficient ones, not simply drop them. I looked around local town programs and found a pickup basketball game on Sunday mornings, which just so happened to be on my way to the grocery store, and I just so happened to do my grocery shopping on Sunday mornings. Now, rather than losing half a day Saturday to play basketball, I do it on Sundays just before something else I needed to do anyway. The point here: eliminate time sinks in your schedule. Don't give up having a life, just evaluate how you spend your time.

Speaking of evaluating how you spend your time, this is a key element at work as well. As I write this blog post I am at work. How do I have time to write a blog post at work? Well, I speak to my manager on a regular basis to understand what our team's priorities are, as well as upcoming work that needs to be done. While I don't always have time to jot down a blog post, these last two days I have, so I took advantage of the time to write this, and one over at www.wickedts.com yesterday. I spend less time monitoring email by following a version of Cody Bunch's post on how to manage email, so I am not sinking unnecessary time into reading emails as they come in. If you don't think you can do it, you absolutely can. For what it's worth, stop checking email at home or while you're on vacation – unless you're on the brink of curing cancer, you don't need to be working all the time. This has freed me up to spend a bit of my free time at home playing with new technologies. In the amount of time it takes for my wife to read my daughter a book at night before bed, I have been able to install Ansible and get to know the vsphere_guest task. Later at night, while my wife and daughter are sleeping, is also when I spend time recording podcasts, such as a #vBrownBag session I did on Ansible + vCenter and Application Services, as well as the Size Matters podcast.

I've been able to do this, as well as advance my career, all while spending time with friends and family. Realize that you do NOT need to be working all the time; you are not selling your entire life to your employer, you are selling them 40 hours of your week. The rest of the time should be spent on you – if your employer does not recognize this, it may be time to move on. For those managers/employers that may be reading this blog post, realize that research shows employee productivity goes down around 30 hours worked, and that freedom and time spent learning and engaging others in the community can actually increase the productivity – not to mention the skills – of your employees. If anyone would like to chat more about their career, how I balance personal and work commitments, or just wants to talk fantasy baseball, as always, reach out.

#vBrownBag Using Ansible with vCenter Examples

I wanted to share some of the example Ansible playbooks used during last Wednesday’s US #vBrownBag. During the show I went over examples of how you can use Ansible to create, clone, and update virtual machines in vCenter without the need for other provisioning tools. Based on my testing (and I’m still learning as well), the items noted in the comments are the bare minimum needed to run the playbook, even though the official documentation may currently state otherwise. If you are already using Ansible for configuration management, this is a handy option to have as you can perform the provisioning tasks without leaving Ansible.

All playbooks have been uploaded to my GitHub Ansible-Test-Playbooks repository (https://github.com/jfrappier/ansible-test-playbooks/).
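To give a flavor of what was shown, here is a minimal clone-from-template sketch using the vsphere_guest module of that era. The vCenter hostname, template, VM, and cluster names are placeholders, and the exact required parameters may differ depending on your Ansible version – see the repository above for the playbooks actually used on the show:

```yaml
# Minimal sketch: clone a VM from a template via vsphere_guest (placeholder names)
- hosts: localhost
  connection: local
  tasks:
    - name: Clone a new VM from a template
      vsphere_guest:
        vcenter_hostname: vc2.lab.local
        username: "{{ vc_user }}"
        password: "{{ vc_pass }}"
        guest: newvm01
        from_template: yes
        template_src: win2012r2-template
        cluster: Lab-Cluster
```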


EMC Elect 2015 – Thank you! #EMCElect

I was pretty excited to hear I have once again been selected as a member of the EMC Elect community for the 2nd year. Since becoming EMC Elect in 2014, I also joined EMC full time in the Education Services department.

For those curious what EMC Elect is all about, head on over to ECN; here is a brief snippet:

EMC Elect is a community-driven recognition and thank you for an individual’s engagement with EMC as a brand. Why is this important – critical, even – for EMC? Here’s where it all began:

Our charter: The team at EMC has designed this program to increase investment and engagement with EMC within our global community – customers, partners, and EMC employees alike.

Our goal: Provide EMC Elect members with unprecedented access into EMC product and service teams; enable them with exclusive privileges; and further their status as community leaders.

Our vision: EMC Elect will be held with high regard within the industry, highlighting and enhancing thought leadership in the fields of data center management, cloud computing, and big data. The program will lead to innovative breakthroughs and a stronger community across the globe.

The  EMC Elect FAQ document details the parameters of the program and should also be reviewed for a full picture of what the EMC Elect program represents.

Ansible-lint for playbooks

During the #vBrownBag DevOps series after-show for my "Using Ansible to provision VMs in vCenter" session, Mike Marseglia asked about options for linting Ansible playbooks. Since I didn't know, I thought it would be worthwhile to look into it. There is an Ansible-Lint repo on GitHub and, reading through the information, it seemed straightforward. Here I am going to have a look at installing and using it against some example playbooks.

Installation should be easy, assuming you've got the correct packages installed (see my previous Ansible posts) – if you got through that install, you should be able to install this with a single line:

pip install ansible-lint

Once installed you should now be able to do something like this:

ansible-lint clone-vm.yml

The clone-vm.yml is from my #vBrownBag series. As you can see in this screenshot, it suggests I have some trailing whitespace.

Once I tidy up the extra whitespace in the playbook, no suggestions are returned.

That is a pretty basic example. Let's say I've missed something, such as a { when using vars_prompt – here you can see I have a missing bracket for vm.


Once again, now that it is fixed, no suggestions are returned. One thing that at least this specific tool does not help with is spacing errors, so your playbook will need to be valid YAML; running ansible-lint here, for example, where my spacing is incorrect, results in a general Ansible error, though it does point out where the error likely is:
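For illustration, the kind of spacing mistake that produces a parse error rather than a lint suggestion looks something like this – the second key in the task is indented one space too deep relative to name, which is invalid YAML, so ansible-lint surfaces a generic Ansible error instead of a rule hit:

```yaml
# Invalid: 'ping' is not aligned with 'name', so the YAML parser rejects it
tasks:
  - name: Check connectivity
     ping:
```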


Going forward, I'll certainly be looking into using this when writing a playbook to ensure general recommended practices are adhered to. I'm still on the lookout for a tool that can help with spacing though!

Enter to win a free home lab from @VMTurbo

*Disclaimer – VMTurbo is a sponsor of this blog*

On February 19th, VMTurbo will be holding a webinar to introduce Operations Manager 5.1. As part of the webinar, they will be giving away a free home lab kit featuring an Intel NUC i5 with 16GB of RAM, a 4-bay Synology DS415+ (diskless), and a few more goodies to round out the home lab; all in, you would be into this setup for about $1,600. Not too bad to attend a webinar about a product that could help you identify problems in your infrastructure, eh?

Head over to the VMTurbo registration page to sign up and find out more.