These are the random bits and bytes that come out of the brain of a Network Engineer from Springfield, IL. Hopefully they'll be of some use to someone other than myself.
Monday, August 29, 2011
Cisco Nexus 1000v Roundup
This post is merely an index for the six-post series I did on the Cisco Nexus 1000v. I hope that, beyond being a good learning experience for me, it will benefit others.
Cisco Nexus 1000v - Adding Physical Ports (Part 6)
The previous posts have established a fully configured, but unused Nexus 1000v. At this point it's like having a physical switch in the rack, powered up and configured, but with no network cables attached. In VMWare, the "cables" are attached using the vSphere Client.
Attaching Physical Ports to the Nexus 1000v
- Connect to vCenter using the vSphere Client
- Go to Networking Inventory and select the Nexus distributed virtual switch (dVS).
- Right click on the Nexus and choose add host.
- Select the host and the vmnic(s) to use, then change each vmnic's DVUplink port group: system-uplink (or whatever you named the system uplink port profile on the Nexus) for the system uplink ports, and vm-uplink for the VM networking ports.
- Click next and choose not to migrate the vmk0 or VMs. (I prefer to verify the Nexus 1000v's operation before migrating anything.)
- Click Finish.
- Repeat for all hosts in the cluster.
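Once a host has been added, a quick sanity check from the VSM console (my habit, not a required step) is to confirm that the host shows up as a module and that its physical NICs appear as Ethernet interfaces. These are standard NX-OS show commands on the 1000v:
! The new host should be listed as a Virtual Ethernet Module
show module
! The vmnics you just added show up as EthX/Y interfaces; verify they are up
show interface brief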
Migrating vmk0 Interfaces to Nexus
Once you have added a few test VMs to the Nexus and are certain that the Nexus 1000v is working properly, it's time to migrate the last physical NIC from the vSwitch to the Nexus 1000v and with it the vmk0 interface used for vMotion and VMWare host management. Keep in mind that if you don't need this NIC for bandwidth reasons, it is not mandatory to move these services to the Nexus 1000v.
- Connect to vCenter using the vSphere Client.
- Go to Networking Inventory and select the Nexus dVS.
- Right click on the Nexus and choose manage host.
- Select the hosts and click next twice.
- Click on the destination port group for the vmnic used by vmk0 and choose the Nexus port group.
- Click next and then finish without migrating VMs.
You will need to repeat this for each host in the cluster. Leave the host with the active VSM for last, and make sure to migrate its NICs to the Nexus before disconnecting the vmnic from the vSwitch.
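Before calling the migration done, I like to confirm that management traffic really is flowing through the Nexus 1000v. A minimal sketch of the checks I use (assuming the vCLI is available, as in the earlier posts):
! On the VSM, vmk0 should now show up as a vEthernet interface
show interface virtual
# From the vCLI or vMA, confirm vmk0 is attached to the dVS port group
vicfg-vmknic --server <host IP> -l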
Friday, August 26, 2011
Cisco Nexus 1000v Software Installation (Part 5)
In this article I will run through installing the Virtual Ethernet Module (VEM) and creating the initial port groups on the Nexus 1000v.
Installing the Virtual Ethernet Module (VEM)
- Open the vSphere client and connect to vCenter.
- Right click on the host that you are going to install the VEM on and choose Enter Maintenance Mode. (NOTE: This will vMotion all guests from that host to other hosts if you have vMotion enabled; otherwise those guests will be shut down.)
- Copy the VEM bundle from the Nexus 1000v install zip file to the vMA or to the computer that you are running vCLI on.
- Use the vCLI to install the VEM with the following command: vihostupdate --install --bundle <path to VEM Bundle> --server <host IP>
As you can see, installing the VEM software is fairly simple.
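If you want to double-check the install before taking the host out of maintenance mode, here are the two checks I normally run (a sketch; the exact output varies by VEM version):
# From the vCLI/vMA, list the bulletins installed on the host;
# the Cisco VEM bundle should appear in the list
vihostupdate --query --server <host IP>
# From a console or SSH session on the ESX host itself
vem status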
Creating the Port Groups on the Nexus 1000v
The Nexus 1000v uses port-profile configurations to define the configuration for each type of interface. In this part of the install we need to set up profiles for the physical NICs that uplink to the hardware switch infrastructure: one for the system VLANs (such as VMK0 and the Nexus control traffic) and one for the VM uplinks that carry normal guest VLAN traffic. On the Nexus 1000v, physical NICs are all of type Ethernet and virtual NICs are vEthernet.
- Connect to the switch management IP address using SSH.
- Type config t and enter to enter configuration mode.
- Configure a port profile to use for your system uplink ports (VMK0, Nexus Control, Nexus Packet, Nexus Management). Below is an example:
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  ! In my lab, 255 is MGMT, 256 is Nexus Packet and Control and 101 is for VMK0
  switchport trunk allowed vlan 101,255-256
  switchport trunk native vlan 255
  ! This command has the Nexus create port-channels automatically
  channel-group auto mode on
  no shutdown
  ! System VLANs come up before the VSM is fully initialized
  system vlan 101,255-256
  description SYSTEM-UPLINK
  state enabled
- Configure a port profile to use for the VM Guest networks.
port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 2,102,104-105,259
  switchport trunk native vlan 102
  ! This command has the Nexus create port-channels automatically
  channel-group auto mode on
  no shutdown
  ! System VLANs come up before the VSM is fully initialized
  system vlan 102
  description VM-UPLINK
  state enabled
- Configure port profiles for the guest networking to match the old vSwitch port-groups.
port-profile type vethernet example-vlan
  vmware port-group example-vlan
  switchport access vlan <VLAN ID>
  switchport mode access
  no shutdown
  state enabled
- Save the new configuration with copy running-config startup-config.
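Once the profiles are in place, I like to verify them from the VSM before handing anything over to vCenter:
! Confirm the uplink profiles made it into the running configuration
show port-profile name system-uplink
show port-profile name vm-uplink
! After uplinks and VMs are attached, this shows which interfaces use each profile
show port-profile usage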
Now that we have everything configured, the next post will cover how to plug the network into the Nexus 1000v.
Cisco Nexus 1000v Software Installation (Part 4)
In the last post, I configured the VMWare stock vSwitch to allow us to start the Nexus 1000v install. I forgot to mention one thing that I usually do (it's not required) to help with mapping physical ports. The VMWare vSwitch supports CDP, but is usually set to listen only. To change it so that it will send out CDP packets you need to do the following:
- Using the VMWare vCLI, either from a desktop or the vMA appliance, run the following command: vicfg-vswitch --server <servername> -B both vSwitch0
- Repeat for all of the ESX hosts in the cluster.
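With CDP set to both, the upstream physical switch can now see the ESX hosts, which makes mapping vmnics to switch ports much easier. On the Cisco switch the hosts plug into:
! Each ESX host should now be listed as a neighbor
show cdp neighbors
! For the full port-to-vmnic mapping detail
show cdp neighbors detail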
The next step in the Nexus 1000v installation is to install the Virtual Supervisor Module (VSM) VM appliance(s).
Deploying the Virtual Supervisor Module (VSM)
- Download and decompress the Nexus 1000v software from Cisco.
- Open up the vSphere Client and connect to vCenter.
- Go to File and then Deploy OVF Template.
- Select the OVF file under the Nexus 1000v VSM install folder.
- Accept the details and the EULA.
- Give the VSM a name and choose next.
- Choose the ESX data store for the VSM and choose next.
- Choose thick provisioned and click next. (NOTE: Thin provisioning is not supported by Cisco for the VSM.)
- Map the virtual NICs to the appropriate port groups on the vSwitch.
- Power up the VSM and connect to its console using vSphere client.
- Once booted, go through the text based administrative setup to configure the following:
- Administrative user password
- Role (standalone/primary/standby). The first VSM will be primary unless it is to be the only VSM, in which case it should be standalone.
- Domain ID. This number ties the VSMs to their VEMs and to vCenter; each Nexus 1000v instance must have a unique domain ID. (You can verify it later with the commands sketched after these steps.)
- Continue with the basic system configuration including:
- SNMP Read-Only community string
- Naming the switch
- Assigning the management IP address settings
- Enabling or disabling services. NOTE: Make sure http and ssh are enabled as they are used later in the process.
- Open a web browser to http://<Nexus 1000v IP address>/ and click on "Launch Installer Application". (This requires Java Web Start and doesn't currently seem to work with Chrome.)
- Give the wizard the credentials for the VSM that you created earlier.
- Give the wizard the credentials for accessing vCenter and the vCenter IP Address.
- Select the cluster where the VSM is installed.
- Assign the proper VLANs to the VSM's virtual NICs by choosing advanced L2 and then choosing the proper port groups.
- Configure settings for the VSM.
- Review the configuration.
- Tell the wizard not to migrate any hosts or VMs and let it finish. It will reboot the VSM multiple times before reporting that it is done.
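After the wizard finishes, a couple of quick checks from the VSM console confirm that it is talking to vCenter and knows its domain:
! Shows the SVS connection to vCenter and whether it is connected
show svs connections
! Shows the domain ID plus the control and packet VLANs in use
show svs domain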
Friday, August 19, 2011
Cisco Nexus 1000v Software Installation (Part 3)
In this article I will examine the process of configuring the default VMWare vSwitch with the VLANs needed to start the Nexus 1000v Install.
The first steps to getting a Nexus 1000v installed are actually to get the basic VMWare vSwitch configured and operating.
Assumptions:
· ESX is already installed
· vCenter VM is already installed and configured
· vSphere Client is installed on workstation
· VLANs for the Nexus Packet and Control interfaces (they can share a VLAN) are created on the network.
· VLAN for Nexus Management interface is created on the network.
· The ESXi hosts have their ports configured as trunks with the Nexus Packet, Control and Management VLANs allowed, as well as the VLAN used for the ESX hosts’ IP addresses (VMK0). A sketch of an upstream trunk configuration follows this list.
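For reference, here is a rough sketch of what that upstream switchport configuration could look like using the VLAN numbers from my lab (101 for VMK0, 255 for management, 256 for packet and control); interface names and VLAN IDs will obviously differ in your network:
interface GigabitEthernet1/0/1
 description ESX host uplink
 ! Some platforms also require: switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 101,255,256
 switchport trunk native vlan 255
 no shutdown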
Configuring ESXi vSwitch
- Open vSphere Client and connect to vCenter.
- Click on the host to configure.
- Click on the configuration tab.
- Click on networking.
- Click on properties.
- Click on add.
- Click on Virtual Machine and then Next.
- Give the network a label and a VLAN ID (0 indicates the native VLAN). NOTE: This label must be consistent across all hosts for vMotion to work. (A vCLI alternative to these steps is sketched below.)
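The same port group can be created from the vCLI instead of clicking through the client. This is a sketch based on the vicfg-vswitch options I use; the Nexus-Control name and VLAN 256 are just examples:
# Create a port group on vSwitch0
vicfg-vswitch --server <host IP> --add-pg Nexus-Control vSwitch0
# Tag the port group with its VLAN ID
vicfg-vswitch --server <host IP> --vlan 256 --pg Nexus-Control vSwitch0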
Cisco Nexus 1000v Part 2
As promised in my last post, this post will be an explanation of how the Nexus 1000v's architecture is laid out as well as how that fits into vSphere. Cisco uses the familiar imagery from the physical world of a chassis (think Nexus 7000 or Catalyst 6500) with line cards. In the physical world you would have one or two supervisor engines that provide the brains and then several line cards that provide ports. In the Nexus 1000v paradigm, the chassis is a virtual container in which you place one or two Virtual Supervisor Modules (VSMs), which are actually guest VMs running NX-OS. The VSMs can be hosted on the ESX cluster, another ESX host, or the Nexus 1010 appliance. These VSM modules communicate through the VMWare infrastructure with a Virtual Ethernet Module (VEM) that resides on each ESX host. Think of the connection through the infrastructure as the backplane or switch fabric of the virtual switch. The VEMs show up in the Nexus 1000v as line cards in the virtual chassis.
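To make the chassis analogy concrete, here is roughly what show module looks like on a running Nexus 1000v (illustrative output only; module numbers, counts and status text will differ in your environment):
n1000v# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------  -----------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok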
The VSMs communicate with vCenter to coordinate the physical and virtual NICs on the servers and how they connect to the VEMs. You use vCenter to manage which physical NICs are associated with a VEM. Physical NICs show up as eth<id> interfaces on the Nexus 1000v, while the virtual NICs of the guests (and the vmkernel interfaces) show up as veth<id> interfaces. For those not familiar with NX-OS, Ethernet interfaces can be 10Mbps, 100Mbps, 1Gbps or 10Gbps; unlike IOS, the interface name doesn't designate the speed.
Because of the way that the VEMs communicate with the VSM, it is crucial to maintain the networking links between the VSMs and the VEM or the VEM will disappear from the Nexus 1000v. If it is disconnected, the VEM continues to forward traffic in the last known configuration but it is not configurable.
In my next posts on the Nexus 1000v, I will run through the basics of getting a Nexus 1000v installed into a vSphere 4.1 environment.
Monday, August 8, 2011
Catalyst 6509-E VSS Software Upgrade Gone Bad
My work network has a pair of Cisco Catalyst 6509-E chassis that are configured in a Virtual Switching System (VSS) to serve as the network core. Last week we had a supervisor engine crash and were having some residual craziness with our CAM table. TAC suggested a reboot and software upgrade so we scheduled one for Sunday afternoon.
Usually a software upgrade on the 6509 is relatively painless, but this time it proved to be very painful. The previous software load on the VSS pair was 12.2(33)SXI, but it was the modular version (keep this in mind; it's important). The new software load suggested by TAC was 12.2(33)SXJ1, which as of SXJ is only offered as a monolithic image.
Assuming that all was well with these two versions, I started down the path of doing an Enhanced Fast Software Upgrade (eFSU) of my VSS pair using the ISSU commands listed in the Catalyst 6500 Release 12.2SX Software Configuration Guide - Virtual Switching Systems (VSS) on Cisco's website. After issuing issu loadversion disk0:s72033-ipservicesk9_wan-mz.122-33.SXJ1.bin on the active console, I waited for the standby chassis to reload. Unfortunately it entered a reboot loop because the new software was not ISSU-compatible with the running image. Here is where it got hairy: at this point I could neither abort nor complete the upgrade on the active supervisor. It wouldn't let me change the boot system variable because state stored somewhere said it was mid-ISSU, even after power cycling the chassis.
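For context, this is the ISSU sequence a healthy eFSU on a VSS pair is supposed to walk through (reconstructed from memory, and the exact arguments can vary by setup):
! Load the new image on the standby chassis and reload it
issu loadversion disk0:s72033-ipservicesk9_wan-mz.122-33.SXJ1.bin
! Switch over so the chassis running the new image becomes active
issu runversion
! Accept the new image and stop the automatic rollback timer
issu acceptversion
! Commit, which upgrades the remaining chassis
issu commitversion
! And if things go sideways before the commit, this should roll everything back
issu abortversion
In my case the standby never came up cleanly on the new image, so neither the abort nor the commit would go through.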
After 5 hours on the phone with TAC, we were able to clear this persistence and finish the upgrade, but it was a very long downtime. The moral of the story... modular and monolithic IOS don't mix well.