
Wednesday, June 5, 2013

Neuron: Using the ESXi CLI to Fix a VMK0 Mistake

In VMware ESXi, the host's management traffic is carried by vmk0, a virtual VMkernel interface.  This morning, while troubleshooting another vmk* interface because of a vMotion problem, I accidentally changed the dvsPortGroup (VLAN) on vmk0.  As soon as that took effect, vCenter could no longer see the host.  Thankfully the guest VMs continued to run without any failure.

Now came a chicken-and-egg problem.  I needed to change the dvsPortGroup on vmk0 back, but I couldn't access the host through vCenter until vmk0 was back online.  That sent me to Google for a way to accomplish the same thing using the CLI on the individual host.  This article pointed me in the right direction for the commands.

What I ended up doing was the following:

1. Look up the DVPort number using the esxcfg-vmknic -l command.  The DVPort currently used by each vmk* interface is easy to spot in that output.

2. Look up the DVPort of a free port in the distributed vSwitch (in our case a Nexus 1000v) in the proper port group using vCenter.
3. Delete the existing vmk* NIC with the command:

esxcfg-vmknic -d -s DVSwitch_name -v DVPort_ID

4. Recreate the vmk* NIC with the command below, using the free DVPort found in step 2.

esxcfg-vmknic -a -s DVSwitch_name -v DVPort_ID -i IPAddress -n NetMask
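For example, here is the whole fix end to end with hypothetical values (the dvSwitch name, DVPort IDs and addressing are placeholders; substitute your own):

    # delete the broken vmk0 from its current DVPort (found in step 1)
    esxcfg-vmknic -d -s dvs01 -v 101
    # recreate it on the free DVPort found in step 2
    esxcfg-vmknic -a -s dvs01 -v 113 -i 10.1.101.50 -n 255.255.255.0
    # confirm vmk0 is back with the right IP and DVPort
    esxcfg-vmknic -l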

At this point I had vmk0 back with the proper IP and VLAN, so I was able to reconnect the host to vCenter and all was well.  The moral of the story is to be careful what you're clicking on.

Tuesday, June 5, 2012

Not your Father's CiscoWorks...

It's budget time, and as such I've been playing with demos of several Cisco products, trying to figure out what is worth fighting for and what isn't.  While budgeting for SmartNet renewals I discovered that my Wireless Control System (WCS) software goes EoS next February.  WCS has been one of those applications that "just works", so I hadn't really worried about replacing it before.

Knowing now that WCS needed to be replaced, I went to Cisco to figure out what the replacement was.  Cisco is migrating WCS customers to Cisco Prime Network Control System (Prime is their overall branding for all things network management).  The number one difference with NCS is that it can show both wired and wireless clients on the network in one tool.  For the most part the interface is the same as WCS, but it has been polished a bit with fancy new graphics.

The real surprise for me was that Cisco Prime NCS is bundled with Cisco Prime LAN Management Solution (LMS).  My first thought was that Cisco had so much trouble selling CiscoWorks LMS that they just renamed it.  I have been pleasantly surprised.  Prime LMS is not your father's CiscoWorks.  The web interface is clean and, like NCS and WCS, mostly easy to use.  Every so often you can see that the GUI designers ran back to the mothership, as CiscoWorks-esque screens do still pop up in some areas.

Overall I've been impressed with my demos of both Cisco Prime NCS and Cisco Prime LMS.  My one complaint is the size of the VMware appliances.  I have a relatively small network, so I chose the "small" appliance version of both applications.  Had I thick provisioned both appliances, the pair would have consumed almost 512 GB.  Now I realize that disk is "cheap" on laptops and such, but for enterprise storage on a SAN, that's quite a chunk of change.  Surely for a network under 50 switches and 200 APs, the applications don't need that much space.  Maybe Cisco needs an ultra-small tier too?

Monday, August 29, 2011

Cisco Nexus 1000v Roundup

This post is merely an index for the six-post series I did on the Cisco Nexus 1000v.  I hope that, beyond being a good learning experience for me, it will benefit others.

Cisco Nexus 1000v - Adding Physical Ports (Part 6)

The previous posts have established a fully configured but unused Nexus 1000v.  At this point it's like having a physical switch in the rack, powered up and configured, but with no network cables attached.  In VMware, the "cables" are attached using the vSphere Client.


Attaching Physical Ports to the Nexus 1000v

  1. Connect to vCenter using the vSphere Client
  2. Go to Networking Inventory and select the Nexus distributed virtual switch (dVS).
  3. Right click on the Nexus and choose add host.
  4. Select the host and the vmnic(s) to use.  Change the DVUplink port group to system-uplink (or whatever you named the system uplink port-profile on the Nexus) for the system uplink ports, and to vm-uplink for the VM networking ports.
  5. Click next and choose not to migrate the vmk0 or VMs. (I prefer to verify the Nexus 1000v's operation before migrating anything; see the quick check after this list.)
  6. Click Finish.
  7. Repeat for all hosts in the cluster.
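Before migrating anything further, it's worth a quick sanity check from the VSM console.  These are standard 1000v NX-OS show commands; the exact output varies by release:

    show module             ! each host's VEM should appear as a module alongside the VSM(s)
    show interface brief    ! the Ethernet uplinks (and auto-created port-channels) should be up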
Migrating vmk0 Interfaces to Nexus

Once you have added a few test VMs to the Nexus and are certain that the Nexus 1000v is working properly, it's time to migrate the last physical NIC from the vSwitch to the Nexus 1000v, and with it the vmk0 interface used for vMotion and VMware host management.  Keep in mind that if you don't need this NIC for bandwidth reasons, it is not mandatory to move these services to the Nexus 1000v.
  1. Connect to vCenter using the vSphere Client.
  2. Go to Networking Inventory and select the Nexus dVS.
  3. Right click on the Nexus and choose manage host.
  4. Select the hosts and click next twice.
  5. Click on the destination port group for the vmnic used by vmk0 and choose the Nexus port group.
  6. Click next and then finish without migrating VMs.
You will need to repeat this for each host in the cluster.  Leave the host with the active VSM for last, and make sure to migrate its NICs to the Nexus before disconnecting the vmnic from the vSwitch.
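As a final check on each host, two quick commands (assuming vCLI and host shell access; both are standard troubleshooting tools):

    esxcfg-vmknic -l     # vmk0 should now list the Nexus DVS and DVPort rather than the old port group
    vemcmd show port     # run on the ESXi host itself; the VEM's view of its ports and their state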

Friday, August 26, 2011

Cisco Nexus 1000v Software Installation (Part 5)

In this article I will run through installing the Virtual Ethernet Module (VEM) and creating the initial port groups on the Nexus 1000v.

Installing the Virtual Ethernet Module (VEM)

  1. Open the vSphere client and connect to vCenter.
  2. Right click on the host that you are going to install the VEM on and choose Maintenance Mode. (NOTE: This will vMotion all guests from that host to other hosts if you have vMotion enabled; otherwise those guests will be shut down.)
  3. Copy the VEM bundle from the Nexus 1000v install zip file to the vMA or to the computer that you are running vCLI on.
  4. Use the vCLI to install the VEM with the following command: vihostupdate --server <host IP> --install --bundle <path to VEM Bundle>
As you can see, installing the VEM software is fairly simple.
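If you want to confirm the install took, the same tool can query the host (the --query flag lists installed bulletins; the Cisco VEM entry should appear):

    vihostupdate --server <host IP> --query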

Creating the Port Groups on the Nexus 1000v

The Nexus 1000v uses port-profile configurations to define the settings for each type of interface.  In this part of the install we need to set up profiles for the physical NICs that uplink to the hardware switch infrastructure, both for the system VLANs (such as VMK0 and the Nexus Control traffic) and for the VM uplinks carrying normal guest VLAN traffic.  On the Nexus 1000v, physical NICs are all of type Ethernet and virtual NICs are vEthernet.

  1. Connect to the switch management IP address using SSH.
  2. Type config t and enter to enter configuration mode.
  3. Configure a port profile to use for your system uplink ports (VMK0, Nexus Control, Nexus Packet, Nexus Management).  Below is an example:

    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
    ! In my lab, 255 is MGMT, 256 is Nexus Packet and Control and 101 is for VMK0
      switchport trunk allowed vlan 101, 255-256
      switchport trunk native vlan 255
    ! This command has Nexus create port-channels automatically
      channel-group auto mode on
      no shutdown
    ! System VLANs come up before the VSM is fully initialized
      system vlan 101,255-256
      description SYSTEM-UPLINK
      state enabled
    
  4. Configure a port profile to use for the VM Guest networks.

    port-profile type ethernet vm-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2,102,104-105,259
      switchport trunk native vlan 102
    ! This command has Nexus create port-channels automatically
      channel-group auto mode on
      no shutdown
    ! System VLANs come up before the VSM is fully initialized.
      system vlan 102
      description VM-UPLINK
      state enabled 
  5. Configure port profiles for the guest networking to match the old vSwitch port-groups.

    port-profile type vethernet example-vlan
      vmware port-group example-vlan
      switchport access vlan <VLAN ID>
      switchport mode access
      no shutdown
      state enabled
     
  6. Save the new configuration with copy running-config startup-config
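Before leaving the VSM, it's worth confirming that the profiles took.  Both commands below are standard 1000v show commands (output varies by release):

    show port-profile name system-uplink    ! should show the trunk settings and "state enabled"
    show port-profile usage                 ! once hosts are attached, shows which interfaces inherit each profile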

Now that we have everything configured, the next post will be how to plug the network into the Nexus 1000v.

Cisco Nexus 1000v Software Installation (Part 4)

In the last post, I configured the stock VMware vSwitch so that we could start the Nexus 1000v install.  I forgot to mention one thing that I usually do (it's not required) to help with mapping physical ports.  The VMware vSwitch supports CDP, but it is usually set to listen only.  To change it so that it will also send CDP packets, do the following:
  1. Using the VMware vCLI, either from a desktop or the vMA appliance, run the following command: vicfg-vswitch --server <servername> -B both vSwitch0
  2. Repeat for all of the ESX hosts in the cluster.
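Once CDP is set to both, the upstream physical switch should start listing the ESX hosts as neighbors, which makes mapping vmnics to switch ports trivial (assuming a Cisco upstream switch):

    show cdp neighbors    ! run on the upstream switch; each host vmnic appears with its local port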
The next step in the Nexus 1000v installation is to install the Virtual Supervisor Module (VSM)  VM appliance(s).

Deploying the Virtual Supervisor Module (VSM)


  1. Download and decompress the Nexus 1000v software from Cisco.
  2. Open up the vSphere Client and connect to vCenter.
  3. Go to File and then Deploy OVF Template.
  4. Select the OVF file under the Nexus 1000v VSM install folder.
  5. Accept the details and the EULA.
  6. Give the VSM a name and choose next.
  7. Choose the ESX data store for the VSM and choose next.
  8. Choose thick provisioned and click next. (NOTE: Thin provisioning is not supported by Cisco for the VSM.)
  9. Map the virtual NICs to the appropriate port groups on the vSwitch.
  10. Power up the VSM and connect to its console using vSphere client.
  11. Once booted, go through the text-based administrative setup to configure the following:
    • Administrative user password
    • Role (standalone/primary/standby).  The first VSM will be primary unless it is to be the only VSM, in which case it should be standalone.
    • Domain ID.  This number ties the VSM to its VEMs and to vCenter; each Nexus 1000v instance must have a unique domain ID. (A sketch of the resulting svs-domain configuration appears at the end of this post.)
  12. Continue with the basic system configuration including:
    • SNMP Read-Only community string
    • Naming the switch
    • Assigning the management IP address settings
    • Enabling or disabling services.  NOTE: Make sure http and ssh are enabled as they are used later in the process.
  13. Open up a web browser to http://<Nexus 1000v IP address>/ and click on "Launch Installer Application". (Requires Java Web Start and doesn't currently seem to work with Chrome.)
  14. Give the wizard the credentials for the VSM that you created earlier.
  15. Give the wizard the credentials for accessing vCenter and the vCenter IP Address.
  16. Select the cluster where the VSM is installed.
  17. Assign the proper VLANs to the VSM's virtual NICs by choosing advanced L2 and then choosing the proper port groups.
  18. Configure settings for the VSM.
  19. Review the configuration.
  20. Tell the wizard not to migrate any hosts or VMs and let it finish.  It will reboot the VSM multiple times before reporting that it is done.
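For reference, the domain ID entered in step 11 ends up in the VSM's svs-domain stanza.  Here is a minimal sketch; the domain ID is a hypothetical value, and the control/packet VLAN numbers are the ones from my lab:

    svs-domain
    ! domain id 100 is illustrative; each Nexus 1000v instance needs a unique ID
      domain id 100
      control vlan 256
      packet vlan 256
      svs mode L2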

Friday, August 19, 2011

Cisco Nexus 1000v Software Installation (Part 3)

In this article I will walk through configuring the default VMware vSwitch with the VLANs needed to start the Nexus 1000v install.


Assumptions:
  • ESX is already installed.
  • The vCenter VM is already installed and configured.
  • The vSphere Client is installed on a workstation.
  • VLANs for the Nexus Packet and Control interfaces (these can be the same VLAN) are created on the network.
  • A VLAN for the Nexus Management interface is created on the network.
  • The ESXi hosts have their ports configured as trunks with the Nexus Packet, Control and Management VLANs allowed, as well as the VLAN used for the ESX hosts' IP addresses (VMK0).

The first step toward getting a Nexus 1000v installed is actually to get the basic VMware vSwitch configured and operating.

Configuring ESXi vSwitch

  1. Open vSphere Client and connect to vCenter.
  2. Click on the host to configure.
  3. Click on the configuration tab.
  4. Click on networking. 
  5. Click on properties.
  6. Click on add.
  7. Click on Virtual Machine and then Next.
  8. Give the network a label and a VLAN ID (0 indicates the native VLAN).  NOTE: This label must be consistent on all hosts for vMotion.
Click finish to complete the process, then repeat it on all of the hosts in the cluster, being sure to make the network labels identical (they are case sensitive).  You will need to do this for every VLAN the Nexus install needs, including the Nexus Packet, Nexus Control and Management VLANs.
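If you have more than a couple of hosts, the same port groups can also be created from the vCLI rather than clicking through each host.  A sketch with placeholder names (vicfg-vswitch's -A flag adds a port group; -v sets the VLAN on an existing port group):

    vicfg-vswitch --server <host> -A <portgroup name> vSwitch0
    vicfg-vswitch --server <host> -v <VLAN ID> -p <portgroup name> vSwitch0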

Tuesday, June 21, 2011

Cisco Nexus 1000v Virtual Switch for VMware ESX

Recently the server group came to me and let me know that they had purchased Nexus 1000v licensing for the new ESX cluster.  As a Cisco geek, I was pretty stoked to get to work with the Nexus 1000v as it was virtual and the first Nexus platform I would get to work on.

In this article, I am going to try to lay out some of the basic nomenclature surrounding the Nexus 1000v.  If you are like me and have been living in your network world with little exposure to the guts of VMware, this article will hopefully help bring you up to speed.

The first concept that can get a bit confusing in the VMware world is the way NICs are named.  The physical NICs on the server are referred to as vmnic<x>, starting with vmnic0.  When I was first getting started, I kept wanting to think these were virtual NICs since the name starts with "vm", but that is not the case.  Usually the onboard NICs are the first vmnics, with any expansion cards after that.
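A quick way to see this for yourself is to list the physical NICs from the CLI (a standard ESX command, from the host shell or via vCLI):

    esxcfg-nics -l    # lists each physical vmnic with driver, MAC, link state and speed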


Next there is the matter of the different types of virtual switches that can be configured.  The most basic type is the standard vSwitch.  To the average networking guy, this amounts to configuring the ESX host's interfaces as tagged (trunked) VLAN interfaces.  Each VLAN that you want to support is added as a port group on the standard vSwitch, and once added, that port group can be associated with any VM.  The drawback to standard vSwitches from a VMware perspective is that they need to be identically configured on every ESX host in the cluster to allow vMotion of VMs, since a VM's network has to be present on the destination host for it to move.  From a network guy's point of view the drawbacks are numerous, including:
  • No ACL capabilities.
  • No SNMP monitoring of traffic counters.
  • No NetFlow monitoring.
Realizing the problems this vSwitch design creates for vMotion, VMware came up with the distributed vSwitch, or DVS.  This switch is configured once for the entire cluster and ensures that all members of the cluster have the same network configuration available for VMs.  From a VMware perspective this is the cat's meow, but it still lacks the SNMP, ACL and NetFlow capabilities that we networking nerds crave.

That's where the Nexus 1000v comes in.  It is a Cisco-developed distributed vSwitch that uses the VMware APIs to give you a full NX-OS-controlled DVS.  If you can do it in NX-OS, you can do it with the Nexus 1000v.  In my next post I will discuss how the Nexus 1000v architecture interacts with VMware.