These are the random bits and bytes that come out of the brain of a Network Engineer from Springfield, IL. Hopefully they'll be of some use to someone other than myself.
Monday, August 29, 2011
Cisco Nexus 1000v Roundup
This post is merely an index for the six-post series I did on the Cisco Nexus 1000v. I hope that, beyond being a good learning experience for myself, it will benefit others.
Cisco Nexus 1000v - Adding Physical Ports (Part 6)
The previous posts have established a fully configured, but unused Nexus 1000v. At this point it's like having a physical switch in the rack, powered up and configured, but with no network cables attached. In VMWare, the "cables" are attached using the vSphere Client.
Attaching Physical Ports to the Nexus 1000v
- Connect to vCenter using the vSphere Client
- Go to Networking Inventory and select the Nexus distributed virtual switch (dVS).
- Right click on the Nexus and choose add host.
- Select the host and vmnic(s) to use, then change each DVUplink port group assignment: system-uplink (or whatever you named the system uplink port profile on the Nexus) for the system uplink ports, and vm-uplink for the VM networking ports.
- Click next and choose not to migrate the vmk0 or VMs. (I prefer to verify the Nexus 1000v's operation before migrating anything.)
- Click Finish.
- Repeat for all hosts in the cluster.
Migrating vmk0 Interfaces to Nexus
Once you have added a few test VMs to the Nexus and are certain that the Nexus 1000v is working properly, it's time to migrate the last physical NIC from the vSwitch to the Nexus 1000v and with it the vmk0 interface used for vMotion and VMWare host management. Keep in mind that if you don't need this NIC for bandwidth reasons, it is not mandatory to move these services to the Nexus 1000v.
- Connect to vCenter using the vSphere Client.
- Go to Networking Inventory and select the Nexus dVS.
- Right click on the Nexus and choose manage host.
- Select the hosts and click next twice.
- Click on the destination port group for the vmnic used by vmk0 and choose the Nexus port group.
- Click next and then finish without migrating VMs.
You will need to repeat this for each host in the cluster. Leave the host running the active VSM for last, and make sure to migrate its NICs to the Nexus before disconnecting the vSwitch from the vmnic.
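After the last host is migrated, it's worth confirming from the VSM that the uplinks came over cleanly. A quick sketch of the checks I'd run (NX-OS commands on the 1000v; interface numbers will vary by environment):

```
! Each migrated vmnic should appear as an Ethernet interface in the up state
show interface brief
! Confirm which interfaces picked up the system-uplink and vm-uplink profiles
show port-profile usage
```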
Friday, August 26, 2011
Cisco Nexus 1000v Software Installation (Part 5)
In this article I will run through installing the Virtual Ethernet Module (VEM) and creating the initial port groups on the Nexus 1000v.
Installing the Virtual Ethernet Module (VEM)
- Open the vSphere client and connect to vCenter.
- Right click on the host that you are going to install the VEM on and choose Maintenance Mode. (NOTE: This will vMotion all guests from that host to other hosts if you have vMotion enabled; otherwise those guests will be shut down.)
- Copy the VEM bundle from the Nexus 1000v install zip file to the vMA or to the computer that you are running vCLI on.
- Use the vCLI to install the VEM with the following command: vihostupdate --install --bundle <path to VEM Bundle> --server <host IP>
As you can see, installing the VEM software is fairly simple.
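As a concrete sketch of that command, here is what a hypothetical install might look like; the bundle filename and host IP are placeholders, not values from a real environment:

```
# Install the VEM bundle on one host via the vCLI (filename and IP are examples only)
vihostupdate --install --bundle /tmp/VEM410-201108271.zip --server 192.0.2.10

# After the install, log in to the ESX host itself and confirm the VEM loaded
vem status
```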
Creating the Port Groups on the Nexus 1000v
The Nexus 1000v uses port-profile configurations to define the configuration for each type of interface. In this part of the install we need to set up profiles for the physical NICs that will uplink to the hardware switch infrastructure: one for the system VLANs, such as VMK0 and the Nexus Control traffic, and one for the VM uplinks that carry normal guest VLAN traffic. On the Nexus 1000v, physical NICs are all of type Ethernet and virtual NICs are vEthernet.
- Connect to the switch management IP address using SSH.
- Type config t and press Enter to enter configuration mode.
- Configure a port profile to use for your system uplink ports (VMK0, Nexus Control, Nexus Packet, Nexus Management). Below is an example:
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  ! In my lab, 255 is MGMT, 256 is Nexus Packet and Control and 101 is for VMK0
  switchport trunk allowed vlan 101,255-256
  switchport trunk native vlan 255
  ! This command has Nexus create port-channels automatically
  channel-group auto mode on
  no shutdown
  ! System VLANs come up before the VSM is fully initialized
  system vlan 101,255-256
  description SYSTEM-UPLINK
  state enabled
- Configure a port profile to use for the VM Guest networks.
port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 2,102,104-105,259
  switchport trunk native vlan 102
  ! This command has Nexus create port-channels automatically
  channel-group auto mode on
  no shutdown
  ! System VLANs come up before the VSM is fully initialized.
  system vlan 102
  description VM-UPLINK
  state enabled
- Configure port profiles for the guest networking to match the old vSwitch port-groups.
port-profile type vethernet example-vlan
  vmware port-group example-vlan
  switchport access vlan <vlan id>
  switchport mode access
  no shutdown
  state enabled
- Save the new configuration by doing copy running-config startup-config
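Before moving on, you can confirm the profiles took effect from the VSM. A brief sketch, assuming the profile names used above:

```
! Verify each profile's configuration and enabled state
show port-profile name system-uplink
show port-profile name vm-uplink
! Or dump everything at once
show running-config port-profile
```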
Now that we have everything configured, the next post will be how to plug the network into the Nexus 1000v.
Friday, August 19, 2011
Cisco Nexus 1000v Software Installation (Part 3)
In this article I will examine the process of configuring the default VMWare vSwitch with the VLANs needed to start the Nexus 1000v Install.
The first steps to getting a Nexus 1000v installed are actually to get the basic VMWare vSwitch configured and operating.
Assumptions:
· ESX is already installed
· vCenter VM is already installed and configured
· vSphere Client is installed on workstation
· VLANs for Nexus Packet and Control interfaces (can be the same VLAN) are created on the network.
· VLAN for Nexus Management interface is created on the network.
· The ESXi hosts have their ports configured as trunks with the Nexus Packet, Control and Management VLANs allowed, as well as the VLAN used for the ESX hosts’ IP addresses (VMK0).
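As an illustration of that last assumption, an upstream IOS switch port facing an ESXi host might be trunked like this. The interface and VLAN numbers are examples only, matching the lab VLANs used later in this series:

```
interface GigabitEthernet1/0/10
 description ESXi-host-1 vmnic0
 switchport mode trunk
 switchport trunk allowed vlan 101,255,256
 spanning-tree portfast trunk
```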
Configuring ESXi vSwitch
- Open vSphere Client and connect to vCenter.
- Click on the host to configure.
- Click on the configuration tab.
- Click on networking.
- Click on properties.
- Click on add.
- Click on Virtual Machine and then Next

- Give the network a label and a VLAN ID (0 indicates the native VLAN). NOTE: This label must be consistent across all hosts for vMotion to work.
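The same port group can be created from the vCLI instead of the GUI. A minimal sketch, assuming vSwitch0 and using a hypothetical label, VLAN ID and host IP:

```
# Add a port group to vSwitch0, then tag it with a VLAN (names and IDs are examples)
esxcfg-vswitch --server 192.0.2.10 -A "n1kv-control" vSwitch0
esxcfg-vswitch --server 192.0.2.10 -v 256 -p "n1kv-control" vSwitch0
```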

Cisco Nexus 1000v Part 2
As promised in my last post, this post will be an explanation of how the Nexus 1000v's architecture is laid out as well as how that fits into vSphere. Cisco uses the familiar imagery from the physical world of a chassis (think Nexus 7000 or Catalyst 6500) with line cards. In the physical world you would have one or two supervisor engines that provide the brains and then several line cards that provide ports. In the Nexus 1000v paradigm, the chassis is a virtual container in which you place one or two Virtual Supervisor Modules (VSM) that are actually guest VMs that run NX-OS. The VSMs can be hosted on the ESX cluster, another ESX host, or the Nexus 1010 appliance. These VSM modules communicate through the VMWare infrastructure to a Virtual Ethernet Module (VEM) that resides on each ESX host. Think of the connection through the infrastructure as the back plane or switch fabric of the virtual switch. The VEMs show up in the Nexus 1000v as line cards in the virtual chassis.
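This chassis metaphor is visible right in the CLI: on the VSM, show module lists the supervisors and each host's VEM as if they were cards in a physical chassis. Illustrative output only; module numbers, port counts and status values will vary by environment:

```
n1kv# show module
Mod  Ports  Module-Type                      Model        Status
---  -----  -------------------------------  -----------  ----------
1    0      Virtual Supervisor Module        Nexus1000V   active *
2    0      Virtual Supervisor Module        Nexus1000V   ha-standby
3    248    Virtual Ethernet Module          NA           ok
4    248    Virtual Ethernet Module          NA           ok
```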
The VSMs communicate with vCenter to coordinate the physical and virtual NICs on the servers and how they are connected to the VEMs. You use vCenter to manage which physical NICs are associated with a VEM. Physical NICs show up as eth<id> interfaces on the Nexus 1000v. The virtual NICs associated with the guests show up as veth<id> interfaces. For those not familiar with NX-OS, Ethernet interfaces can be 10Mbps, 100Mbps, 1Gbps or 10Gbps; unlike IOS, the interface name doesn't designate the speed.
Because of the way that the VEMs communicate with the VSM, it is crucial to maintain the networking links between the VSMs and the VEM or the VEM will disappear from the Nexus 1000v. If it is disconnected, the VEM continues to forward traffic in the last known configuration but it is not configurable.
In my next posts on the Nexus 1000v, I will run through the basics of getting a Nexus 1000v installed into a vSphere 4.1 environment.
Tuesday, June 21, 2011
Cisco Nexus 1000v Virtual Switch for VMWare ESX
Recently the server group came to me and let me know that they had purchased Nexus 1000v licensing for the new ESX cluster. As a Cisco geek, I was pretty stoked to get to work with the Nexus 1000v as it was virtual and the first Nexus platform I would get to work on.
In this article, I am going to try to lay out some of the basic nomenclature surrounding the Nexus 1000v. If you are like me and have been living in your network world with little exposure to the guts of VMWare, this article will hopefully help bring you up to speed.
The first concept that can get a bit confusing in the VMWare world is the way they refer to their NICs. The physical NICs on the server are referred to as vmnic<x> starting with vmnic0. When I was first getting started, I kept wanting to think that these were virtual NICs since it started with VM, but that is not the case. Usually the onboard NICs are the first vmnics and then any expansion cards are after that.
Next there is the matter of the different types of virtual switches that can be configured. The most basic type is the standard vSwitch. To the average networking guy this is basically configuring the ESX host's interfaces to be used as tagged (trunked) VLAN interfaces. Each VLAN that you want to support is added as a port group on the vSwitch. Once added, the port group can be associated with any VM. The drawback to vSwitches from a VMWare perspective is that they need to be identically configured on every ESX host in the cluster to allow for vMotion of VMs, since a VM's network has to be present on the destination host for it to move. From a network guy's point of view the drawbacks are numerous, including:
- No ACL capabilities.
- No SNMP monitoring of traffic counters.
- No Netflow monitoring.
Realizing the problems with vSwitch design for vMotion, VMWare came up with the distributed vSwitch, or DVS. This switch is configured for the entire cluster and ensures that all members of the cluster have the same network configuration available for VMs. From a VMWare perspective this is the cat's meow, but it still lacks the SNMP, ACL and Netflow capabilities that we networking nerds crave.
That's where the Nexus 1000v comes in. It is a Cisco-developed distributed vSwitch that uses the VMWare APIs to provide a fully NX-OS-controlled DVS. If you can do it in NX-OS, you can do it with the Nexus 1000v. In my next post I will discuss how the Nexus 1000v architecture interacts with VMWare.