vRealize Automation Home Lab Upgrade

With new versions of vRealize Automation and vSphere out, and seemingly stable, it is time to upgrade the home lab. Since this is a home lab, and somewhat basic, there are just a few steps from KB2109760 that need to be followed:

  1. Upgrade vRA (Appliance >> IaaS)
  2. Upgrade Application Services
  3. Upgrade vCenter
  4. Upgrade ESXi

In this post, I will cover the first step in the process: upgrading vRealize Automation to the latest 6.2 release. First, I have shut down the services on my IaaS server. Now log into the VMware vCAC Appliance management interface on port 5480 – in my case https://vxprt-vcac01.vxprt.local:5480 – and click on the Update tab. Now, click on Check Updates. As you can see here, I have an available update from 6.1.1.0 to 6.2.1.0.

vcac-upgrade

Now, as you might expect, click on Install Updates >> OK. The upgrade process will begin.

vcac-upgrade-starting

After a few minutes, you should be presented with a message that a reboot is required.

vcac-upgrade-complete

Click on the System tab, click the Reboot button, and click the Reboot button again; the system will reboot. Once the reboot completes, you should be able to log in and verify the version by clicking on the system tab. Notice anything different?

vra-branding

The updated product name: vRealize Automation is now displayed instead of vCAC Appliance, and the version is 6.2.1.0. Once all the services have started, you should also be able to log into the vRealize Automation console and see the tenant information from the previous configuration.
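
If you want to keep an eye on the services from the appliance console rather than refreshing the browser, a quick check over SSH looks something like this – a sketch only, assuming SSH is enabled on the appliance and that the core service on this build is still named vcac-server:

service vcac-server status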

vra-console

The next step is to upgrade the IaaS components. Again, this should be straightforward in a lab because all of the components are on a single server, not distributed. Log into the IaaS server as the service account used to run the IaaS components; if you followed along in my vDM 30-in-30 challenge, you would have named it something along the lines of svc_vra_iaas. Open a web browser and grab the vRA 6.2 PreReq script Brian Graf has built over on GitHub (https://github.com/vtagion/Scripts/blob/master/vRA%206.2%20PreReq%20Automation%20Script.ps1). Save it, open a PowerShell console as administrator, and run the script.
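
If you would rather launch it in one line from an elevated command prompt instead of an interactive PowerShell session, something along these lines should work – just a sketch, assuming the script was saved to the current directory with its default file name and that you are comfortable bypassing the execution policy for this single invocation:

powershell.exe -ExecutionPolicy Bypass -File "vRA 6.2 PreReq Automation Script.ps1"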

vra-iaas-script-upgrade

Follow the prompts in the prereq script; typically I have selected option 2 – I have internet access and want to download and install it automatically.


vra-iaas-net-upgrading

Select option 2 two more times. When prompted, provide the service account for the IaaS components – in my case vxprt\svc_vra_iaas – and the script should complete.

vra-script-complete

Now, navigate to the vRA appliance page. Click on the vRealize Automation IaaS installation page link, then download and extract the zip file containing the database upgrade scripts. From a command prompt, run the following command:

dbupgrade.exe -S {servername\instancename} -d {dbname} -E

On my server I am using the default SQL Express instance, so the instance name is not needed, and my DB name is vCAC so my command looks like this:

dbupgrade -S localhost -d vCAC -E

db-upgrade

If you are receiving any errors, make sure that Named Pipes is enabled.

sql-named-pipes
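
A quick way to rule out basic connectivity or login issues before re-running dbupgrade is a test query with sqlcmd – a rough check, assuming the SQL Server command-line tools are installed on the IaaS server and the database is named vCAC as above:

sqlcmd -S localhost -d vCAC -E -Q "SELECT @@VERSION"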

Now that the DB is upgraded, download the IaaS Installer file, do not rename the file, and run it. The upgrade is of the next, next, next variety.

  1. Click Next
  2. Accept the terms and click Next
  3. Enter the root password for the vRA appliance, accept the certificate, and click Next
  4. Upgrade should be the only option that is available, click Next
  5. Enter the service account password, the server name, database name, and click Next
  6. Click the Upgrade button

If the computer gods are on your side, the installation should complete.

iaas-upgrade-done

Click Next and Finish. If you flip back over to your vRA console, you should see all of the available tabs based on the user's permissions – in this case my iaasadmin user.

vra-portal

Up next is upgrading Application Services.

Home Lab – $1250 8-Core / 32GB / 750GB Flash / 2TB HDD 2015 Edition

11-139-022-TS

It was a bit over a year ago that I wrote about my 8-core home lab. I was asked if there were any updates to the build, and I was curious to see how it stood up a year later. Happily for me, and for anyone who has invested in this build, the same basic platform is still a solid option for your home lab. I have made a few tweaks below based on some new hardware being available. As I did last year, the focus was on keeping cost down while still having enough power to run a fully nested home lab.

With 32GB I have been able to run Windows 8.1 and VMware Workstation with 3x nested ESXi 5.5 hosts, each with 8GB of RAM. One of those hosts runs the vCAC / vRA appliance, one runs the Application Services appliance, and the third is used when provisioning virtual machines. In addition to the 3x nested hosts, I run a 5.5 VCSA at 4GB RAM, a Windows 2012 R2 DC, Windows 2012 R2 vCAC / vRA IaaS with SQL Express on the same virtual machine, and CentOS 5.5 running Ansible in Workstation. With everything powered on I run at about 85% memory utilization and only push the CPUs during provisioning processes.

The hardware…

CPU:  AMD FX8320 – This is the exact same processor as last year. It is an 8-core AMD processor that fully supports nested ESXi and 64-bit virtual machines running on the nested ESXi hosts.

Motherboard: ASRock 990FX Extreme6 – This is a new motherboard for 2015, versus the Asus I used last year (though that board is still available). The reason for the change: the ASRock Extreme6 supports up to 64GB of memory, whereas the Asus only supported 32GB. Having said that, this build still uses 32GB because 16GB memory modules are $190 each, compared to 4x 8GB modules (32GB total) at $210 total. This board has onboard RAID and five 6Gbps SATA ports.

Memory: G.SKILL Ripjaw X Series – Similar memory to what was used last year, just not in a full kit so pick up 4 of these.

Flash: Crucial MX200 – These were used instead of the Corsair Neutron drives I used last year, for no other reason than saving a few dollars to upgrade in other areas. The Neutron drives have been great for the last year, with no problems to report so far. At the $1250 price point you can pick up three of these if you like, or buy fewer and drop the price of your home lab.

HDD: Seagate Hybrid 1TB – I again opted for the hybrid drives for bulk / lower tier storage. I run most of my lab off these drives, configured in a RAID0. I opted for 2 of these.

NIC: Intel Dual-port 82575 – Because HCL, and I wanted the option to install clean on bare metal. If you go the VMware Workstation route, you could potentially skip this card unless you would like more ports to get fancy with. You could again lower your cost here by going with a used card, as I ended up doing last year with the HP7170 from Amazon.

Video: MSI …whatever – This is here because the motherboard doesn't have on-board video. Buy a card based on your needs; I went cheap here because I don't use the box for any sort of gaming. If you'll have other uses, obviously look at your requirements.

Case: Corsair Air 540 – The case again gets into personal preference territory. The Graphite 230T I used last year is perfectly capable. The Air 540 has four internal 2.5″ drive bays and two hot-swap drive bays to support the 3x SSD and 2x HDD drives.

Power Supply: Rosewill RD600-M – This is the new version of the power supply used last year, which has been stable for me even through a faulty UPS.

Preparing Ubuntu template virtual machines

Bob Plankers has a great post over at lonelysysadmin.net on preparing CentOS-based virtual machines to be used as templates. As I've started working with Ubuntu more, I decided to take that list and Ubuntu-ize it (mostly from prodding by Sarah Zelechoski – one of the smartest people I've ever had the privilege to work with…so many thank yous). Anyway, here is that guide… Ubuntu-ized.

Stop logging services (auditd and rsyslog):

service auditd stop
service rsyslog stop

Check for, and remove old kernels

Check your current kernel by running

uname -r

Then run

dpkg -l | grep linux-image-

If additional images are listed, remove them by running

apt-get autoremove linux-image-#.##.#

You can remove multiple images all on the same line just by listing them one after another.
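
For example, to remove two old images in one pass (the version strings below are made up – substitute the ones dpkg actually listed):

apt-get autoremove linux-image-3.13.0-24-generic linux-image-3.13.0-27-generic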

Clean out apt-get

apt-get clean

Force the logs to rotate & remove old logs we don’t need

logrotate -f /etc/logrotate.conf
find /var/log -name "*.gz" -type f -delete

Truncate the audit logs (and other logs we want to keep placeholders for)

cat /dev/null > /var/log/audit/audit.log
cat /dev/null > /var/log/wtmp
cat /dev/null > /var/log/lastlog
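
On Ubuntu you may also want to truncate the main syslog and auth log if you are keeping placeholders for those as well – paths assume the default rsyslog configuration:

cat /dev/null > /var/log/syslog
cat /dev/null > /var/log/auth.log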

Remove the udev persistent device rules

Well, we saved a step here – there are rules which skip creating entries for MAC addresses belonging to VMware, Hyper-V, KVM, Xen, and VirtualBox (see /lib/udev/rules.d/75-persistent-net-generator.rules). So long as your MAC address matches one of these, there is nothing to clean up. Otherwise:

rm -f /etc/udev/rules.d/70-persistent-net.rules

It will be recreated on the next boot, so any time you power on this VM (updates maybe?) you’ll need to delete this file again so it is not saved in the template.

Remove the traces of the template MAC address and UUIDs.

Here is another step you shouldn't need to do; however, you may want to check /etc/network/interfaces to verify.

Clean /tmp out

rm -rf /tmp/*
rm -rf /var/tmp/*

Remove the SSH host keys

rm -rf /etc/ssh/*key*
rm -rf ~/.ssh/authorized_keys
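
Keep in mind that clones will need new host keys before sshd will start cleanly; one common way to regenerate them on the first boot of a clone (assuming the stock openssh-server package) is:

dpkg-reconfigure openssh-server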

Update network config

If you have customized /etc/network/interfaces, make sure to reset it for cloning purposes. For example, as I wrote this it had a static IP address, which I changed to DHCP before shutting down and converting to a template.
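
For reference, a minimal DHCP configuration in /etc/network/interfaces looks something like this – the interface name eth0 is an assumption, so use whatever your template's NIC is actually called:

# /etc/network/interfaces – minimal DHCP example
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp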

Remove hostname

If you have named your virtual machine anything other than localhost, and want clones to spin up with a generic name versus, say, "ubuntu-template", remove the entry from /etc/hostname:

cat /dev/null > /etc/hostname

Remove the user’s shell history

If you have switched to root at any point, do this as root as well as for the individual user accounts:

history -c
history -w

That should about do it. Depending on where this template is going, make sure any ISOs attached to the CD-ROM or networks for the NICs are adjusted properly. While many of the steps were the same as the CentOS list, there were a few differences to be aware of. Anything else you like to clean out? Comment below please!

Hands on with Microsoft Visual Studio Code @code

As a Windows user, I have been looking for a good markdown tool to write in; however, most of the tools freely available have been mediocre at best. Enter Visual Studio Code, a (currently) free download from Microsoft that supports Windows, OS X, and Linux (OS X/open source gearheads take notice – write software cross-platform!). You can download Code without any login from visualstudio.com.

Once downloaded, it is a pretty atypical install – no next, next, next, it just works! The UI takes a bit of poking around to get comfortable with, but after just a few minutes all seemed to be working as expected.

Below you can see an example of some markdown syntax in Code.

code-markdown

The toolbar at the top of the image

toolbar

allows you to change between split screen and single screen and, as I have done above, show a preview of what you are writing. This is just a quick hands-on; you can see how simple it is to get started. Now that I have found a tool that seems to work properly in Windows, my next step is to find a tool for markdown presentations that is also easy to use (in Windows of course :) )

New free software from EMC to build your own SDS solution

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

There were two software-related announcements at EMC World this week which I found very exciting. Building on the free-for-non-production release of RecoverPoint for Virtual Machines from VMworld 2014, EMC announced the same for ScaleIO. ScaleIO allows you to build your own hyperconverged infrastructure (HCI) solution. This is the same software used in the new VxRack from VCE, which was also announced at EMC World.

CoprHD

In addition to ScaleIO, EMC also announced CoprHD, an open source version of EMC ViPR (@coprhd). ViPR (which is also free for non-production use) is a solution that allows you to manage multiple arrays and present those as virtual volumes to hosts. In addition to managing the arrays, it also provides self-service and automation at the storage layer. EMC ViPR also supports ScaleIO; assuming this carries over to CoprHD, you could deploy a fully managed and automated storage solution on commodity hardware for test/dev or QA (I hope they publish more specific guidelines on just what they mean by "non-production").

Last, but not least, there is the community version of the VNXe (vVNX), which you can use to provide full block and file services on commodity hardware. The vVNX will later come in supported ROBO and cloud editions.

My hope is that CoprHD, ScaleIO, and the community edition of the vVNX will lead to more solutions being open sourced and offered in a free-to-use model. CoprHD should be available on GitHub by June and ScaleIO by the end of May, while the vVNX is available for download now.


Yummy! – PowerCLI Cookbook Review by Phillip Sellers (@pbsellers)

The PowerCLI Cookbook by Phillip Sellers is an excellent resource for any skill level, whether you are a beginner or looking for a great reference to have with you.

PowerCLI Cookbook by Phillip Sellers

First and foremost, this book far exceeds what I expect out of a technology cookbook. If you step back and think about a (food) cookbook, you get the recipe for what you are going to make (i.e., what you are going to do in PowerCLI) and the ingredients to make it (i.e., the cmdlets necessary to perform the task). Phillip took that a step further and began the cookbook with how to actually start the oven – or in this case, a simple recipe to connect to vCenter and get started using PowerCLI.

The chapters in the book are laid out very well, starting with basic host-related tasks before moving on to vCenter, virtual machines, and other more complex scenarios – the build-up in this format makes it excellent for those who are new to PowerCLI, or even VMware for that matter. Each recipe also has a "how it works" section where the components used are explained (no one has ever told me how food flavors work together!).

You could quite literally use the book to just about stand up a complete vSphere environment, as all the major topics such as networking, datastores, clusters, and virtual machine management (including using PowerCLI to invoke in-guest scripts) are covered.

**Disclaimer – I have a book published with Packt Publishing and spoke to Phillip before he decided to write the book. This book was provided to me by the author, but the review was not read or approved by Phillip; it is simply my opinion on the book and its contents.**

ViPR SRM Explore Reports and Topology Maps

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

Up until now I have gone through a basic ViPR SRM installation, getting a basic single-VM environment set up. What I want to show in this post is my favorite ViPR SRM feature – topology maps. To understand why these are useful, let's step back and consider some scenarios:

  1. You are the person responsible for supporting the storage within your environment; you may support other things, but ultimately when there is a storage-related problem your name is called. An application owner comes to you and says their application is slow, and that the network team said everything on their end is fine, so it's probably the storage. Great – now what?

  2. You come into a new organization – whether as an internal IT person or a VAR – and you've inherited an environment cabled by 3 monkeys and a cat, with no documentation. Now what?

This is where topology maps can be very useful. The topology map is the end-to-end visualization and monitoring component I mentioned in previous posts. I can see from my virtual machine, or even some applications such as SQL Server, all the way through to the underlying storage, and drill down on each component. Let me show you some examples.

To access the topology maps, click on Explore >> Hosts. Small aside here – a host could be any physical or virtual server in the environment discovered by ViPR SRM, not just ESXi hosts. So this could be an ESXi host, a virtual machine, or a physical host running its own OS.

vipr-srm-explore-reports-hosts

From this report you can see a list of all the hosts in the environment, which for some could be a very extensive list. I should mention that the filter field is not a search field, so you cannot type the end of a machine name. For example, maybe all your VM names end in the OS type or some other identifier; you couldn't just type W2K8 to find a server named myserver-w2k8 – you would have to start with myserver, and would then see a list of all servers starting with that string. You can filter on any column that has the funnel icon, so for example I could filter on just physical hosts or virtual machines by clicking the funnel icon in the host type column:

vipr-srm-filter

Using the example above, let's say an application owner has complained about performance and you need to investigate to see if storage could be the problem. Filter on the host name – in this case I will pick on mhmbd078-W2K8. As you can see below, I start typing that name and can select it from the list, or type it in full and hit Enter to filter on that one host.

vipr-srm-filter-hostname


Now I just see that specific host – in this case a virtual machine, as you can see here, with 16GB of memory and 4 vCPUs:

vipr-srm-single-host-explore

This much information is available in just a few clicks. There are many places you could get this information, but as I continue to drill deeper you will start to see just how much information is at hand. With just what is available so far, you might be able to tell the application owner who raised the complaint that there is not enough memory – for example, maybe you know that this particular application needs 32GB of memory, so disk I/O could be a problem if the application and OS are constantly swapping to disk. But maybe so far everything checks out; if I click on any of the text here, it will take me into the detail of that virtual machine.

Now, this is where it gets interesting. What you see below is the topology map for mhmbd078-W2K8; we can see the virtual machine, the datastore it is on, the ESXi host it is on, the VSANs it is connected to, and the arrays connected to those VSANs. Also notice that to the right we have different reports related to the host. We can see attributes about the host, which is shown by default, and you can also see:

  • Capacity information about the host's local disks – in this case VMDKs – and, since it is a virtual machine, the datastore
  • Path details for the disks attached to the host
  • Related storage performance
  • Events related to the host

vipr-srm-topology-map

You can click on any element in the map to see details specific to that item. For example, if you click on the datastore – DS_Bootcamp_D – you can see reports about the datastore, or on the host – you guessed it, reports about the host. You may have also noticed the + icon next to some of the elements; this is because there are additional components. Using VSAN0040 as an example, we can click on the + sign to see the switches in that VSAN.

vipr-srm-exapanded-element

Now I see two switches, each with their own + icon, and I can keep drilling down to see the ports on each switch as well. I can expand different elements and hover over different components to see how they are connected. For example, I have expanded my host to see my HBAs; I can see that the particular HBA I am interested in is connected to VSAN mptb023, so I have expanded that as well and drilled down to see the switch ports. While I only have limited lab resolution available, you can see here that when I hover over the HBA on the host it highlights the path to the port on the switch – in this case fc1/6 (as shown by the blue highlighted line).

vipr-srm-show-details

This is just one specific report, and I have only skimmed the surface of the data available in it. Imagine being able to show this to an application owner as you troubleshoot each component, and explain how and why any particular piece of the infrastructure supporting the application is, or isn't, doing what it is supposed to. For those folks who work in a siloed type of group, I'd urge you not to use this information to punt the problem back over your wall to someone else, but rather to be the person who starts poking some pinholes in the silo: call up a virtualization, OS, or network person depending on where you think the problem is, work with them, share knowledge, and help the application owner be a happy customer. After all, even if you are "internal" IT, you are still providing a service to the business – they are your customers, treat them like it. Silos will only fall if someone starts poking holes; no reason it can't be you.

If you haven't done so, chat with your EMC rep (they can likely get you in touch with an SE who can help if you have any setup questions) and head over to support.emc.com to sign up for an account and download ViPR SRM, which comes with a 30-day license.

The MOST important infrastructure design consideration – the Applications!

I've spent a fair amount of time over the last two years preparing for my VCDX. While the VCAP-DCA/VCIX-DCA is the last step before I fully dive in, I've been preparing myself not just for an exam but to be the best possible architect I can be. To that end, preparation is an ongoing, ever-evolving process.

During this time, I have seen many blog posts about doing designs, talking about application requirements in terms of CPU, memory, disk (space, I/O, throughput), network, and security. I have seen some posts (even by current VCDXs) from vendors talking about how their solutions solve problems, even if they might not be the best solution. I've long said that most organizations need different products to support their infrastructure, and that no one solution is a fit for all workloads. Now, as this has gotten me in trouble in the past: nothing is 100% valid, just like no product is a 100% fit for all workloads. There very well could be environments that run, fully supported, on one "stack" of vendor hardware.

For example, maybe a solution is preferred for application A, but for application B it might be ranked lower. Some people's take: is it worth deploying different solutions for different applications and adding complexity to your environment? The answer, as I think any good architect would say, is "it depends" – and not just on raw performance statistics like IOPS or CPU utilization. Let's take a look.

If I were starting a design today, I would gather requirements from the business (including things like local, state, regional, or industry regulations), application owners, application performance requirements, team/training considerations, space, cooling, availability – it's a long list. I would then compile an initial list of those requirements, known risks, and constraints before I even begin considering hardware. If you are engaging a vendor or VAR to help you in this process and they are leading with a hardware solution and fitting your business onto that, it could be a red flag that your organization's best interests are not front and center, but rather making a sale of a product is.

Now that I have all that information gathered, I need to once again step back and dive further into the applications the organization relies upon. Let's say, for example, that from both a performance and a business perspective 3x Isilon S210 nodes will provide sufficient near-term performance and capacity, and since adding additional nodes is easy, the organization can expand as needed. Great, right? Well, not if we failed to take a look at the application requirements. In my earlier example I mentioned application A and application B; let's put names with those – call them Microsoft SQL Server 2014 and Microsoft Exchange 2013. Taking a look at the SQL Server requirements, it would appear as though our hardware selection is a fit – there are some considerations for Analysis Services and for clustering, but we can adhere to those requirements with the Isilon S210.

Now, let’s take a look at Exchange; hmmm interesting note here. It seems as though Exchange is NOT supported on NFS storage – even if it is presented and managed via a hypervisor:

A network-attached storage (NAS) unit is a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, and access to files, and the management of these functionalities (for example, file storage).

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

So here is a scenario where the proposed hardware solution, which is NAS/NFS-based, is not supported by one of the applications. In this case, the application vendor has a specific requirement for the application, above and beyond things like CPU, memory, or I/O. There is now an additional design decision – leverage two different solutions, an NFS solution such as Isilon or NetApp for applications that support NFS and a block solution for Exchange? This also dives into another level of "it depends" – if certain applications are expecting NFS storage somewhere, is it worth having two hardware solutions or making changes to the design of how the application works? For example, if you decided to go all block, either the application requiring NFS needs to change, or you still add another layer of complexity and management by possibly virtualizing an NFS server. You may also find a storage solution that supports block, NFS, and/or SMB 3.0 – the latter of which would be supported as storage for Hyper-V. However, you've now introduced multiple hypervisors into the environment; is that really any less complex than two hardware solutions?

As an internal IT person, if the appropriate work was done at the beginning, you hopefully are not spending a lot of time managing storage – that would be a waste. I can tell you from personal experience as a former internal IT person, whose company supported a write-heavy application that offloaded cold data to cheap storage, that I spent almost no time configuring storage because we put the appropriate thought into it beforehand. Instead we focused on our applications, automation, and other projects in support of new application features.

There are many possibilities; it is all about building a resilient, reliable, and supported solution. Make sure all OTS (off-the-shelf) and custom applications are documented, and that considerations such as supported configurations are taken into account before you move on to design and hardware selection. Now, this is just a single example; there are considerations in designing infrastructure for much more than just storage, such as networking – ACLs, communication between applications, and the effects an ACL/firewall might have on that communication. There is backup, disaster recovery, replication, automation – one of the reasons a VCDX design in many cases ends up being hundreds of pages.

Having this level of information helps you properly identify all the actual project requirements. After all, hardware, virtualization, storage, SDN, SDS, etc. are all about application availability and supportability, not about what kind of metal it runs on! I wouldn't want anyone to think this is an anti-NFS or anti-block post – far from it; again, it's about supporting the applications your organization needs. This also presents an opportunity for you to reflect on whether there are better, supported options available. If your VAR/vendor is trying to sell you an unsupported solution, it is probably in your best interest to move along.

**Now that I've said that applications are the MOST important consideration, I'm sure someone will find a use case where it's not, but I'm standing by this one – I've never had a CFO ask me if their virtual machine was having problems – it's all about the apps!**

ViPR SRM Solution Packs for vCenter and XtremIO

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

In my last two posts I touched on what ViPR SRM can do, and the quick installation.

With the ViPR SRM installation out of the way, it's time to start adding Solution Packs. Solution Packs are used to connect to various systems, such as VMware vCenter, so ViPR SRM can collect information about virtual machines, ESXi hosts, datastores, and HBAs. Additionally, you connect ViPR SRM to your switches and storage for, quite literally, an end-to-end view of your environment.

  • First, log into http://<vipr srm IP or DNS>:58080/APG and click on Administration (upper right corner)
  • Once you are in the Administration interface, click on Centralized Management in the left navigation menu; a new window or tab will open
  • In the new window, click on Solution Pack Center (back in the upper right corner)

vipr-srm-solution-packs

  • In the search box in the upper right corner, type vCenter to filter the results, and click on VMware vCenter
  • When the vCenter box opens, click on the Install button.

virp-srm-vcenter-install-pack

  • Follow the wizard and review the options; it's a basic wizard – next, next. If you are using PowerPath, click Enable the Host PowerPath alerts, for example, then click next, next, next, next, and finally Install. ViPR SRM will go through and install the selected components.

vipr-srm-solutions-pack-vcenter-installed

  • Click OK. Repeat the above steps for the packs your environment needs; at the very least, the Storage Compliance pack is useful. Here is the EMC XtremIO Solution Pack, which I will be installing to show examples from.

vipr-srm-solution-pack-xtremio

  • With the Solution Packs installed, we need to provide each one some information. Expand Discovery Center in the left navigation menu, expand Devices Management, and click on VMware vCenter
  • Click on the Add new device… button and fill in the information to connect to vCenter. I suggest using dedicated accounts for external services; for example, here is my app_viprsrm user account, which has admin privileges in vCenter. Click the Test button to confirm the account has access, and then click OK. Repeat for additional vCenters or for the storage in your environment you added a pack for.

info

Don’t forget to click the Save button!

save

vcenter-vipr-srm-creds

Depending on your environment, you may also want to add your FC switches. Switch monitoring is done by adding a Solution Pack for your switch and connecting to it via SNMP. While logged in as admin, go to http://<viprsrm IP or DNS>:58080/device-discovery, click Collectors, click New Collector, and click Save. This will add an SNMP collector to the local VM. Once the collector is added, click on Devices, then New Device, and fill in the appropriate information.

vipr-srm-snp-device-discovery
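
Before adding the device, it can be worth confirming the switch actually answers SNMP from the collector VM – a rough check with snmpwalk, assuming the net-snmp utilities are available there (the community string and switch address below are placeholders):

snmpwalk -v 2c -c public 192.168.1.10 1.3.6.1.2.1.1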

With your switches added, click the check box next to each one and click the magnifying glass icon under Actions; this will discover the switch.

ViPR SRM will now start collecting data. To expedite the process, click on Scheduled Tasks (left navigation menu), check the box for the "import-properties-default" task, and click the Run Now button. If you return to the User Interface (back on the Administration page, click User Interface) and go to Explore >> Hosts, you should see your vCenter hosts as well as virtual machines.

vipr-srm-vcenter-hosts

If you navigate to Explore >> Storage you should also see the storage devices you added.

vipr-srm-storage

With the configuration out of the way, I can now start to explore my environment with the various reports available, which I will do in the next post!

Installing ViPR SRM

**Disclaimer: I am an EMC employee, this post was not sponsored or in any way required by my employer, it is my experience getting to know this particular product.**

In my last ViPR SRM post, I introduced you to some of the features if you were not already aware of them. In this post, I will look at installing ViPR SRM 6.5.2. I downloaded ViPR SRM from support.emc.com; while I am an EMC employee, I logged into the support site with my personal email account to download the files. Once logged in, search for ViPR SRM and click on the Downloads menu; as I mentioned, I will be going with the vApp option versus a binary installation.

vipr-srm-search-support

Once downloaded, extract the contents of the zip file – you'll have two OVFs. One is the 4-VM vApp I mentioned in my last post; the other is a 1-VM vApp useful for lab and evaluation purposes. Given I have limited resources in my home lab, I will be deploying the 1-VM vApp.

info

Important note here: you will need to deploy the OVF to vCenter, not a standalone ESXi host, as some of the OVF properties will not be exposed properly, causing the deployment to fail.

Follow the OVF deployment wizard, when prompted select the All-In-One configuration:

vipr-srm-all-in-one-ovf

By default, the VM deploys with 4 vCPUs – adjust according to your lab. I have set mine to 2 vCPUs, left it at 16GB RAM, and removed the reservation (performance here would not be ideal, obviously, but this is for lab purposes only). Once the OVF has been deployed, you should be able to log into http://<viprsrm DNS or IP>:58080/APG. Log in as admin/changeme to access ViPR SRM.
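
If the page does not load right away, you can confirm the web front end is listening before digging further – a quick check with curl, where the hostname is a placeholder for your own deployment:

curl -I http://vipr-srm.lab.local:58080/APG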

vipr-srm-UI


By default you are in the "User" interface; if you click on "Administration" in the upper right corner, you will go to the administration screen. Go ahead and click on Administration >> Centralized Management (on the left nav menu) >> License Management (also on the left nav menu). As you can see, you have a 30-day trial license to test out ViPR SRM.

vipr-srm-trial-license

Close the license window/tab. Notice that where the "Administration" menu was, you now see a "User Interface" menu; this will (like the Administration link did) take you back to the User interface (where you initially landed when you logged in).

In the next post, I will look at connecting ViPR SRM to vCenter and, in my case, XtremIO.