Friday, August 19, 2011

Cisco Nexus 1000v Part 2

As promised in my last post, this post explains how the Nexus 1000v's architecture is laid out and how it fits into vSphere.  Cisco uses familiar imagery from the physical world: a chassis (think Nexus 7000 or Catalyst 6500) with line cards.  In the physical world you would have one or two supervisor engines that provide the brains and then several line cards that provide ports.  In the Nexus 1000v paradigm, the chassis is a virtual container in which you place one or two Virtual Supervisor Modules (VSMs), which are guest VMs running NX-OS.  The VSMs can be hosted on the ESX cluster itself, on another ESX host, or on the Nexus 1010 appliance.  The VSMs communicate through the VMware infrastructure with a Virtual Ethernet Module (VEM) that resides on each ESX host.  Think of that connection through the infrastructure as the backplane or switch fabric of the virtual switch.  The VEMs show up in the Nexus 1000v as line cards in the virtual chassis.
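To make the chassis analogy concrete, here is a rough sketch of what show module looks like on a VSM.  The hostname, module counts, and exact column spacing here are illustrative (they vary by release and environment), but the idea is that the VSMs occupy slots 1 and 2 and each ESX host's VEM shows up as a module from slot 3 onward:

    n1000v# show module
    Mod  Ports  Module-Type                      Model         Status
    ---  -----  -------------------------------  ------------  -----------
    1    0      Virtual Supervisor Module        Nexus1000V    active *
    2    0      Virtual Supervisor Module        Nexus1000V    ha-standby
    3    248    Virtual Ethernet Module          NA            ok
    4    248    Virtual Ethernet Module          NA            ok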

The VSMs communicate with vCenter to coordinate the physical and virtual NICs on the servers and how they connect to the VEMs.  You use vCenter to manage which physical NICs are associated with a VEM.  Physical NICs show up as eth<id> interfaces on the Nexus 1000v, numbered by VEM module and port.  The virtual NICs associated with the hosts show up as veth<id> interfaces.  For those not familiar with NX-OS, Ethernet interfaces can be 10Mbps, 100Mbps, 1Gbps, or 10Gbps.  Unlike IOS, the interface name doesn't designate the speed.
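As an illustration of that naming, here is a hedged sketch of the kind of output show interface virtual produces on the VSM.  The VM names, host names, and module numbers below are made up, and the exact columns can differ by release, but it shows each veth interface mapping back to a VM's adapter and the VEM (module) hosting it:

    n1000v# show interface virtual
    Port      Adapter        Owner        Mod  Host
    ---------------------------------------------------------------
    Veth1     Net Adapter 1  web-vm-01    3    esx01.lab.local
    Veth2     Net Adapter 1  app-vm-01    4    esx02.lab.local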


Because of the way the VEMs communicate with the VSM, it is crucial to maintain the network links between the VSMs and the VEMs; if that connectivity is lost, the VEM disappears from the Nexus 1000v.  A disconnected VEM continues to forward traffic using its last known configuration, but it cannot be configured until connectivity is restored.
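A quick way to sanity-check that relationship from the VSM is to look at the SVS domain, which carries the control and packet VLANs the VSMs and VEMs use to talk, and then confirm the VEM still appears as a module.  The domain ID and VLAN numbers below are placeholders, and the output is abbreviated:

    n1000v# show svs domain
      Domain id:    100
      Control vlan: 900
      Packet vlan:  901
      (output abbreviated; a VEM that has lost this connectivity
       will no longer be listed in show module)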


In my next posts on the Nexus 1000v, I will run through the basics of getting a Nexus 1000v installed into a vSphere 4.1 environment.
