Monday, August 29, 2011

Cisco Nexus 1000v Roundup

This post is merely an index for the six-post series I did on the Cisco Nexus 1000v.  I hope that, beyond being a good learning experience for me, it will benefit others. 

Cisco Nexus 1000v - Adding Physical Ports (Part 6)

The previous posts have established a fully configured but unused Nexus 1000v.  At this point it's like having a physical switch in the rack, powered up and configured, but with no network cables attached.  In VMware, the "cables" are attached using the vSphere Client.


Attaching Physical Ports to the Nexus 1000v



  1. Connect to vCenter using the vSphere Client
  2. Go to Networking Inventory and select the Nexus distributed virtual switch (dVS).
  3. Right click on the Nexus and choose add host.
  4. Select the host and the vmnic(s) to use. Set the DVUplink port group to system-uplink (or whatever you named the system uplink port-profile on the Nexus) for the system uplink ports, and to vm-uplink for the VM networking ports.
  5. Click next and choose not to migrate vmk0 or any VMs. (I prefer to verify the Nexus 1000v's operation before migrating anything; see the verification sketch after this list.)
  6. Click Finish.
  7. Repeat for all hosts in the cluster.
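
Once a host's uplinks are attached, the VEM should register with the VSM as a module.  Here is a minimal verification sketch from the VSM console (module and interface numbers will differ in your environment):

    ! The host's VEM should appear as a module (slots 3 and up are VEMs)
    show module
    ! The attached vmnics should show up as Ethernet interfaces
    ! bound to the uplink port-profiles
    show interface brief
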
Migrating vmk0 Interfaces to Nexus

Once you have added a few test VMs to the Nexus and are certain that the Nexus 1000v is working properly, it's time to migrate the last physical NIC from the vSwitch to the Nexus 1000v, and with it the vmk0 interface used for vMotion and VMware host management.  Keep in mind that if you don't need this NIC for bandwidth reasons, it is not mandatory to move these services to the Nexus 1000v.
  1. Connect to vCenter using the vSphere Client.
  2. Go to Networking Inventory and select the Nexus dVS.
  3. Right click on the Nexus and choose manage host.
  4. Select the hosts and click next twice.
  5. Click on the destination port group for the vmnic used by vmk0 and choose the Nexus port group.
  6. Click next and then finish without migrating VMs.
You will need to repeat this for each host in the cluster.  Leave the host running the active VSM for last, and make sure to migrate its NICs to the Nexus before disconnecting the vmnic from the vSwitch.
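
To confirm that vmk0 actually landed on the Nexus 1000v, you can list the VMkernel NICs with the vCLI.  This is a sketch, assuming vicfg-vmknic is available on your vMA or vCLI workstation; <host IP> is a placeholder:

    # Lists the host's VMkernel NICs along with the port group each is
    # attached to; vmk0 should now reference the Nexus port group
    vicfg-vmknic --server <host IP> --list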

Friday, August 26, 2011

Cisco Nexus 1000v Software Installation (Part 5)

In this article I will run through installing the Virtual Ethernet Module (VEM) and creating the initial port groups on the Nexus 1000v.

Installing the Virtual Ethernet Module (VEM)

  1. Open the vSphere client and connect to vCenter.
  2. Right click on the host that you are going to install the VEM on and choose Enter Maintenance Mode. (NOTE:  This will vMotion all guests from that host to other hosts if you have vMotion enabled; otherwise those guests will be shut down.)
  3. Copy the VEM bundle from the Nexus 1000v install zip file to the vMA or to the computer that you are running vCLI on.
  4. Use the vCLI to install the VEM with the following command: vihostupdate --server <host IP> --install --bundle <path to VEM Bundle>
As you can see, installing the VEM software is fairly simple.
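
To double-check that the VEM bits actually landed on the host, you can query the installed bulletins (a minimal sketch; <host IP> is a placeholder for the ESX host you just updated):

    # Lists the bulletins installed on the host; the Cisco VEM bundle
    # should appear in the output
    vihostupdate --server <host IP> --query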

Creating the Port Groups on the Nexus 1000v

The Nexus 1000v uses port-profiles to define the configuration for each type of interface.  In this part of the install we need to set up two uplink profiles for the physical NICs that connect to the hardware switch infrastructure: one for the system VLANs (vmk0 and the Nexus control, packet and management traffic) and one for normal guest VLAN traffic.  On the Nexus 1000v, physical NICs are of type ethernet and virtual NICs are of type vethernet.

  1. Connect to the switch management IP address using SSH.
  2. Type config t and press Enter to enter configuration mode.
  3. Configure a port profile to use for your system uplink ports (VMK0, Nexus Control, Nexus Packet, Nexus Management).  Below is an example:

    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      ! In my lab, 255 is MGMT, 256 is Nexus Packet and Control and 101 is for VMK0
      switchport trunk allowed vlan 101,255-256
      switchport trunk native vlan 255
      ! This command has Nexus create port-channels automatically
      channel-group auto mode on
      no shutdown
      ! System VLANs come up before the VSM is fully initialized
      system vlan 101,255-256
      description SYSTEM-UPLINK
      state enabled
    
  4. Configure a port profile to use for the VM Guest networks.

    port-profile type ethernet vm-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2,102,104-105,259
      switchport trunk native vlan 102
      ! This command has Nexus create port-channels automatically
      channel-group auto mode on
      no shutdown
      ! System VLANs come up before the VSM is fully initialized.
      system vlan 102
      description VM-UPLINK
      state enabled
  5. Configure port profiles for the guest networking to match the old vSwitch port-groups.

    port-profile type vethernet example-vlan
      vmware port-group example-vlan
      switchport access vlan <vlan-id>
      switchport mode access
      no shutdown
      state enabled
     
  6. Save the new configuration by running copy running-config startup-config. (A quick verification sketch follows this list.)
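
To confirm that the profiles were created and pushed to vCenter as port groups, check them from the VSM.  This is a minimal sketch; the profile names assume the examples above:

    ! Summary of all port-profiles and their state
    show port-profile brief
    ! Detailed view of a single profile, including its system VLANs
    show port-profile name system-uplink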

Now that we have everything configured, the next post will be how to plug the network into the Nexus 1000v.

Cisco Nexus 1000v Software Installation (Part 4)

In the last post, I configured the stock VMware vSwitch so that we could start the Nexus 1000v install.  I forgot to mention one thing that I usually do (it's not required) to help with mapping physical ports.  The VMware vSwitch supports CDP, but it is usually set to listen only.  To change it so that it will also send CDP packets, do the following:
  1. Using the VMware vCLI, either from a desktop or from the vMA appliance, run the following command: vicfg-vswitch --server <servername> -B both vSwitch0
  2. Repeat for all of the ESX hosts in the cluster.
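
With CDP set to both, the ESX hosts should show up as neighbors on the upstream physical switches, which makes mapping vmnics to switch ports much easier.  A minimal sketch of checking this from a Cisco switch (the interface number is a placeholder):

    ! The ESX host and its vmnic should now be listed as a CDP neighbor
    show cdp neighbors
    ! More detail (host name, vmnic, vSwitch) for a specific port
    show cdp neighbors GigabitEthernet1/0/1 detail
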
The next step in the Nexus 1000v installation is to install the Virtual Supervisor Module (VSM)  VM appliance(s).

Deploying the Virtual Supervisor Module (VSM)


  1. Download and decompress the Nexus 1000v software from Cisco.
  2. Open up the vSphere Client and connect to vCenter.
  3. Go to File and then Deploy OVF Template.
  4. Select the OVF file under the Nexus 1000v VSM install folder.
  5. Accept the details and the EULA.
  6. Give the VSM a name and choose next.
  7. Choose the ESX data store for the VSM and choose next.
  8. Choose thick provisioned and click next. (NOTE: Thin provisioning is not supported by Cisco for the VSM.)
  9. Map the virtual NICs to the appropriate port groups on the vSwitch.
  10. Power up the VSM and connect to its console using vSphere client.
  11. Once booted, go through the text-based administrative setup to configure the following:
    • Administrative user password
    • Role (standalone/primary/standby).  The first VSM will be primary unless it is to be the only VSM, in which case it should be standalone.
    • Domain ID.  This number ties the VSMs to their VEMs and to vCenter.  Each Nexus 1000v instance must have a unique domain ID.
  12. Continue with the basic system configuration including:
    • SNMP Read-Only community string
    • Naming the switch
    • Assigning the management IP address settings
    • Enabling or disabling services.  NOTE: Make sure http and ssh are enabled as they are used later in the process.
  13. Open a web browser to http://<nexus1000v IP address>/ and click on "Launch Installer Application". (This requires Java Web Start and doesn't currently seem to work with Chrome.)
  14. Give the wizard the credentials for the VSM that you created earlier.
  15. Give the wizard the credentials for accessing vCenter and the vCenter IP Address.
  16. Select the cluster where the VSM is installed.
  17. Assign the proper VLANs to the VSM's virtual NICs by choosing advanced L2 and then selecting the proper port groups.
  18. Configure settings for the VSM.
  19. Review the configuration.
  20. Tell the wizard not to migrate any hosts or VMs and let it finish.  It will reboot the VSM multiple times before reporting that it is done.  Once it completes, you can sanity-check the VSM as shown below.
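
A couple of commands on the VSM are handy for confirming that the wizard's work stuck (a minimal sketch; run them from the VSM's console or SSH session):

    ! Shows the connection from the VSM to vCenter and its state
    show svs connections
    ! Shows the domain ID and the control/packet VLAN configuration
    show svs domain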

Friday, August 19, 2011

Cisco Nexus 1000v Software Installation (Part 3)

In this article I will examine the process of configuring the default VMware vSwitch with the VLANs needed to start the Nexus 1000v install.


Assumptions: 
    • ESX is already installed.
    • The vCenter VM is already installed and configured.
    • The vSphere Client is installed on a workstation.
    • VLANs for the Nexus Packet and Control interfaces (they can be the same VLAN) are created on the network.
    • A VLAN for the Nexus Management interface is created on the network.
    • The ESX hosts have their ports configured as trunks with the Nexus Packet, Control and Management VLANs allowed, as well as the VLAN used for the ESX hosts' IP addresses (VMK0).  A sample upstream trunk configuration follows this list.
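
For reference, the upstream switch ports facing each ESX host would look roughly like the sketch below (IOS syntax; the interface number is a placeholder and the VLAN IDs are the ones used in my lab later in this series):

    interface GigabitEthernet1/0/10
      description ESX host vmnic
      switchport mode trunk
      ! 101 = VMK0, 255 = Nexus Management, 256 = Nexus Packet/Control
      switchport trunk allowed vlan 101,255,256
      no shutdown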

The first steps to getting a Nexus 1000v installed are actually to get the basic VMware vSwitch configured and operating.



Configuring ESXi vSwitch

  1. Open vSphere Client and connect to vCenter.
  2. Click on the host to configure.
  3. Click on the configuration tab.
  4. Click on networking. 
  5. Click on properties.
  6. Click on add.
  7. Click on Virtual Machine and then Next 
  8. Give the network a label and a VLAN ID (0 indicates the native vlan).  NOTE:  This label must be consistent on all hosts for vMotion.
Click finish to complete the process, and repeat this on all of the hosts in the cluster, being sure to make the network labels identical (they are case sensitive).  You will need to repeat this for every VLAN required for the Nexus install, including the Nexus Packet, Nexus Control and Management VLANs.
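
If you have many hosts and VLANs, the same port groups can also be created from the vCLI instead of clicking through the GUI.  This is a sketch, assuming vicfg-vswitch is available; the port group name and VLAN ID are placeholders:

    # Add a port group named Nexus-Control to vSwitch0 on the host
    vicfg-vswitch --server <host IP> --add-pg Nexus-Control vSwitch0
    # Tag that port group with its VLAN ID
    vicfg-vswitch --server <host IP> --vlan 256 --pg Nexus-Control vSwitch0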

Cisco Nexus 1000v Part 2

As promised in my last post, this post will be an explanation of how the Nexus 1000v's architecture is laid out as well as how it fits into vSphere.  Cisco uses the familiar imagery from the physical world of a chassis (think Nexus 7000 or Catalyst 6500) with line cards.  In the physical world you would have one or two supervisor engines that provide the brains and then several line cards that provide ports.  In the Nexus 1000v paradigm, the chassis is a virtual container in which you place one or two Virtual Supervisor Modules (VSMs), which are actually guest VMs that run NX-OS.  The VSMs can be hosted on the ESX cluster itself, on another ESX host, or on the Nexus 1010 appliance.  The VSMs communicate through the VMware infrastructure with a Virtual Ethernet Module (VEM) that resides on each ESX host.  Think of the connection through the infrastructure as the backplane or switch fabric of the virtual switch.  The VEMs show up in the Nexus 1000v as line cards in the virtual chassis.

The VSMs communicate with vCenter to coordinate the physical and virtual NICs on the servers and how they are connected to the VEMs.  You use vCenter to manage which physical NICs are associated with a VEM.  Physical NICs show up as eth<id> interfaces on the Nexus 1000v, while the virtual NICs of the guests (and the VMkernel interfaces) show up as veth<id> interfaces.  For those not familiar with NX-OS, ethernet interfaces can be 10Mbps, 100Mbps, 1Gbps or 10Gbps; unlike IOS, the interface name doesn't designate the speed.
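
To make the naming concrete, here is roughly how those interfaces appear in the Nexus 1000v configuration, using the port-profile names from the install posts above (a hypothetical sketch; the module and port numbers depend on which slot a host's VEM registers as):

    ! A physical vmnic on the VEM in virtual slot 3
    interface Ethernet3/2
      inherit port-profile system-uplink
    ! A VM's vNIC (or a VMkernel interface) attached to a VEM
    interface Vethernet5
      inherit port-profile example-vlan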


Because of the way that the VEMs communicate with the VSM, it is crucial to maintain the network links between the VSMs and the VEMs, or a VEM will disappear from the Nexus 1000v.  If it is disconnected, the VEM continues to forward traffic using its last known configuration, but it is not configurable.


In my next posts on the Nexus 1000v, I will run through the basics of getting a Nexus 1000v installed into a vSphere 4.1 environment.

Monday, August 8, 2011

Catalyst 6509-E VSS Software Upgrade Gone Bad

My work network has a pair of Cisco Catalyst 6509-E chassis that are configured in a Virtual Switching System (VSS) to serve as the network core.  Last week we had a supervisor engine crash and were having some residual craziness with our CAM table.  TAC suggested a reboot and software upgrade so we scheduled one for Sunday afternoon.  

Usually a software upgrade on the 6509 is relatively painless, but this time it proved to be very painful.  The previous software load on the VSS pair was 12.2(33)SXI, but it was the modular version (keep this in mind; it's important).  The new software load suggested by TAC was 12.2(33)SXJ1, which as of SXJ is only offered as a monolithic image.

Assuming that all was well with these two versions, I started down the path of doing an Enhanced Fast Software Upgrade (eFSU) of my VSS pair using the ISSU commands as listed in the Catalyst 6500 Release 12.2SX Software Configuration Guide - Virtual Switching Systems (VSS) on Cisco's website.  After issuing issu loadversion disk0:s72033-ipservicesk9_wan-mz.122-33.SXJ1.bin on the active console, I waited for the standby chassis to reload.  Unfortunately, it entered a reboot loop because the new software was not ISSU-compatible with the running image.  Here is where it got hairy.  At this point I could neither abort nor complete the upgrade on the active supervisor.  It wouldn't let me change the boot system variable because state stored somewhere said it was still in the middle of an ISSU upgrade, even after power cycling the chassis.  
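
For context, a normal eFSU on a VSS pair walks through a four-step ISSU sequence, roughly as sketched below (the image name is the one from this upgrade; until the final commit, each step can normally be rolled back with issu abortversion):

    ! Load the new image onto the standby chassis and reload it
    issu loadversion disk0:s72033-ipservicesk9_wan-mz.122-33.SXJ1.bin
    ! Force a switchover so the chassis running the new image becomes active
    issu runversion
    ! Accept the new image and stop the automatic rollback timer
    issu acceptversion
    ! Commit and upgrade the now-standby chassis running the old image
    issu commitversion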

After 5 hours on the phone with TAC, we were able to clear this stuck ISSU state and finish the upgrade, but it made for a very long downtime.  The moral of the story... modular and monolithic IOS don't mix well.