
Unified Fabric
Dr. Mohit Bansal
Published Date:25-10-2017
This chapter covers the following topics related to Unified Fabric:

■ Unified Fabric Overview
■ Enabling Technologies
■ Nexus 5000 Unified Fabric Configuration
■ N-Port Virtualization (NPV)
■ FCoE Configuration

The Nexus family of switches represents a revolutionary approach to I/O within the data center that is referred to as Unified Fabric.

Unified Fabric Overview

One of the biggest trends in data centers today is consolidation, which can mean many different things. In some cases, consolidation refers to a physical consolidation of data centers themselves, where dozens or even hundreds of geographically dispersed data centers are consolidated into a smaller number of large data centers. Consolidation can also exist within a data center, where a large number of underutilized physical servers are consolidated, usually by leveraging some type of virtualization technology, into a smaller number of physical servers. Although virtualization offers many benefits, including consolidation of processors, memory, and storage, little is done to consolidate the number of adapters, cables, and ports within the data center. In most virtualization implementations, there is actually a requirement for more adapters, cables, and ports to achieve the dense I/O requirements associated with virtualization. Data centers today contain multiple network fabrics that require discrete connectivity components for each fabric.

I/O consolidation is a trend within data centers that refers to the capability to aggregate connectivity to multiple fabrics into a single adapter, cable, and port, or a redundant pair. Although new technologies have emerged to enable this consolidation to occur, the concept itself is not new. Fibre Channel, iSCSI, InfiniBand, and others were all introduced in an attempt to consolidate I/O.
Although the merits or consolidation capabilities of each of these technologies might be open to debate, for one reason or another, all failed to reach mainstream adoption as the single fabric for all I/O requirements. As a consolidation technology, Unified Fabric offers several benefits to customers, including

■ Lower capital expenditures: Through the reduction of adapters, cables, and ports required within the infrastructure.
■ Lower operational expenses: Through the reduction of adapters, cables, and ports drawing power within the data center.
■ Reduced deployment cycles: Unified Fabric provides a "wire once" model, in which all LAN, SAN, IPC, and management traffic is available to every server without requiring additional connectivity components.
■ Higher availability: Quite simply, fewer adapters and ports mean fewer components that could fail.

Enabling Technologies

Ethernet represents an ideal candidate for I/O consolidation. Ethernet is a well-understood and widely deployed medium that has taken on many consolidation efforts already. Ethernet has been used to consolidate other transport technologies such as FDDI, Token Ring, ATM, and Frame Relay. It is agnostic from an upper-layer perspective in that IP, IPX, AppleTalk, and others have used Ethernet as transport. More recently, Ethernet and IP have been used to consolidate voice and data networks. From a financial aspect, the tremendous existing investment in Ethernet must also be taken into account.

For all the positive characteristics of Ethernet, there are several drawbacks to looking to Ethernet as an I/O consolidation technology. Ethernet has traditionally not been a lossless transport and has relied on other protocols to guarantee delivery. Additionally, a large portion of Ethernet networks range in speed from 100 Mbps to 1 Gbps and are not equipped to deal with higher-bandwidth applications such as storage.
New hardware and technology standards are emerging that enable Ethernet to overcome these limitations and become the leading candidate for consolidation.

10-Gigabit Ethernet

10-Gigabit Ethernet (10G) represents the next major speed transition for Ethernet technology. Like earlier transitions, 10G started as a technology reserved for backbone applications in the core of the network. New advances in optic and cabling technologies have made the price points for 10G attractive as a server access technology as well. The desire for 10G as a server access technology is driven by advances in compute technology in the way of multisocket/multicore processors, larger memory capacity, and virtualization. In some cases, 10G is a requirement simply because of the amount of network throughput required by a device. In other cases, however, the economics of multiple 1G ports versus a single 10G port might drive the consolidation alone. In addition, 10G becoming the de facto standard for LAN-on-motherboard implementations is driving this adoption.

In addition to enabling higher transmission speeds, current 10G offerings provide a suite of extensions to traditional Ethernet. These extensions are standardized within IEEE 802.1 Data Center Bridging. Data Center Bridging is an umbrella term referring to a collection of specific standards within IEEE 802.1, which are as follows:

■ Priority-based flow control (PFC; IEEE 802.1Qbb): One of the basic challenges associated with I/O consolidation is that different protocols place different requirements on the underlying transport. IP traffic is designed to operate in large WAN environments that are global in scale, and as such applies mechanisms at higher layers to account for packet loss, for example, Transmission Control Protocol (TCP).
Due to the capabilities of the upper-layer protocols, underlying transports can experience packet loss and in some cases even require some loss to operate in the most efficient manner. Storage-area networks (SAN), on the other hand, are typically smaller in scale than WAN environments. Storage protocols typically provide no guaranteed delivery mechanisms within the protocol and instead rely solely on the underlying transport to be completely lossless. Ethernet networks traditionally do not provide this lossless behavior, for a number of reasons including collisions, link errors, or, most commonly, congestion. Congestion can be avoided with the implementation of PAUSE frames: when a receiving node begins to experience congestion, it transmits a PAUSE frame to the transmitting station, notifying it to stop sending frames for a period of time. Although this link-level PAUSE creates a lossless link, it does so at the expense of performance for protocols equipped to deal with loss in a more elegant manner. PFC solves this problem by enabling a PAUSE frame to be sent only for a given Class of Service (CoS) value. This per-priority pause enables LAN and SAN traffic to coexist on a single link between two devices.

■ Enhanced transmission selection (ETS; IEEE 802.1Qaz): The move to multiple 1-Gbps connections is made primarily for two reasons. One reason is that the aggregate throughput for a given connection exceeds 1 Gbps; this is straightforward but is not always the only reason that multiple 1-Gbps links are used. The second case for multiple 1-Gbps links is to provide a separation of traffic, guaranteeing that one class of traffic will not interfere with the functionality of other classes. ETS provides a way to allocate bandwidth to each traffic class across a shared link. Each class of traffic can be guaranteed some portion of the link, and if a particular class doesn't use all the allocated bandwidth, that bandwidth can be shared with other classes.
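On the Nexus 5000, PFC and ETS behavior is expressed through the modular QoS CLI. The following is a minimal sketch, not from this chapter, of how a no-drop (PFC) treatment and an ETS-style bandwidth guarantee might be attached to the system-defined FCoE class; the policy names and the 50 percent split are illustrative assumptions, and exact syntax varies by NX-OS release:

```
! Hypothetical sketch: no-drop (PFC) treatment for the predefined
! class-fcoe, plus an ETS-style bandwidth guarantee. Policy names
! and percentages are illustrative.
policy-map type network-qos fcoe-no-drop
  class type network-qos class-fcoe
    pause no-drop            ! PFC: per-priority pause for the FCoE CoS
    mtu 2158                 ! room for the 2112-byte FC data field
policy-map type queuing fcoe-bandwidth
  class type queuing class-fcoe
    bandwidth percent 50     ! ETS: guarantee a share of the link to storage
  class type queuing class-default
    bandwidth percent 50
system qos
  service-policy type network-qos fcoe-no-drop
  service-policy type queuing output fcoe-bandwidth
```

Because the guarantee is work-conserving, LAN traffic can borrow unused storage bandwidth, which is exactly the ETS sharing behavior described above.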
■ Congestion notification (IEEE 802.1Qau): Although PFC provides a mechanism for Ethernet to behave in a lossless manner, it is implemented on a hop-by-hop basis and provides no way for multihop implementations. 802.1Qau is currently proposed as a mechanism to provide end-to-end congestion management. Through the use of backward congestion notification (BCN) and quantized congestion notification (QCN), Ethernet networks can provide dynamic rate limiting similar to what TCP provides, only at Layer 2.

■ Data Center Bridging Capability Exchange Protocol extensions to LLDP (IEEE 802.1AB): To negotiate the extensions to Ethernet on a specific connection and to ensure backward compatibility with legacy Ethernet networks, a negotiation protocol is required. Data Center Bridging Capability Exchange (DCBX) represents an extension to the industry-standard Link Layer Discovery Protocol (LLDP). Using DCBX, two network devices can negotiate the support for PFC, ETS, and congestion management.

Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) represents the latest in standards-based I/O consolidation technologies. FCoE was approved within the FC-BB-5 working group of INCITS (formerly ANSI) T11. The beauty of FCoE is in its simplicity. As the name implies, FCoE is a mechanism that takes Fibre Channel (FC) frames and encapsulates them into Ethernet. This simplicity enables existing skill sets and tools to be leveraged while reaping the benefits of a unified I/O for LAN and SAN traffic. FCoE provides two protocols to achieve unified I/O:

■ FCoE: The data plane protocol that encapsulates FC frames into an Ethernet header.
■ FCoE Initialization Protocol (FIP): A control plane protocol that manages the login/logout process to the FC fabric.

The FCoE standards also define several new port types:

■ Virtual N_Port (VN_Port): An N_Port that operates over an Ethernet link.
N_Ports, also referred to as Node Ports, are the ports on hosts or storage arrays used to connect to the FC fabric.

■ Virtual F_Port (VF_Port): An F_Port that operates over an Ethernet link. F_Ports are switch or director ports that connect to a node.
■ Virtual E_Port (VE_Port): An E_Port that operates over an Ethernet link. E_Ports, or Expansion Ports, are used to connect Fibre Channel switches together; when two E_Ports are connected, the link is referred to as an Inter-Switch Link (ISL).

Nexus 5000 Unified Fabric Configuration

The Nexus 5000 product line represents the industry's first product that enables unified I/O through the use of FCoE. The remainder of this chapter discusses the FCoE implementation on the Nexus 5000.

FCoE can be deployed in many different scenarios; however, the maturity of the technology, product availability, and economics are dictating the first implementations of FCoE. As a result of the large number of adapters, cables, and ports that exist at the server interconnect or access layer, this layer has been the target of organizations wanting to take immediate advantage of the benefits of unified I/O. This topology takes advantage of the Nexus 5000 as an access layer FCoE switch, connecting to hosts equipped with a Converged Network Adapter (CNA). CNAs present the host with standard 10G ports, as well as native FC ports. The first generation of CNAs focuses on maintaining compatibility with existing chipsets and drivers. As such, from the server administrator's standpoint, FCoE is completely transparent to operational practices. Figure 8-1 shows how a CNA appears in Device Manager of a Microsoft Windows server.

Figure 8-1 CNA in Device Manager

Figure 8-2 shows a common FCoE topology. This topology reduces the requirements on the existing infrastructure, while still taking advantage of the consolidation of adapters, cables, and ports to each server connected to the Nexus 5000.
With the Nexus 5000 switch, FCoE functionality is a licensed feature. After the license is installed, FCoE configuration can be completed.

Note Enabling FCoE functionality on a Nexus 5000 requires a reboot of the system. Planning for this feature should take this into account.

Figure 8-2 FCoE Network Topology (including a native FCoE server)

Example 8-1 shows how to verify the installed licenses.

Example 8-1 Verifying FCoE License

Nexus5000(config)# sho license usage
Feature              Ins  Lic    Status  Expiry Date  Comments
                          Count
FM_SERVER_PKG        No   -      Unused               -
ENTERPRISE_PKG       Yes  -      Unused  Never        -
FC_FEATURES_PKG      Yes  -      Unused  Never        -
Nexus5000(config)#

Example 8-2 shows how to enable the FCoE feature.

Example 8-2 Enabling FCoE

Nexus5000(config)# feature fcoe
2009 Nov 3 20:46:23 Nexus5000 %PLATFORM-3-FC_LICENSE_DESIRED: FCoE/FC feature will
be enabled after the configuration is saved followed by a reboot
Nexus5000(config)# exit
Nexus5000# copy running-config startup-config
100%
Packaging and storing to flash: /
Nexus5000# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) y
Broadcast message from root (pts/0) (Tue Nov 3 20:49:15 2009):
The system is going down for reboot NOW
Connection closed by foreign host.

N-Port Virtualization (NPV)

The fibre channel module of the Nexus 5000 series switch can operate in two modes:

■ Fabric mode
■ NPV (N-Port Virtualization) mode

When in fabric mode, the switch module operates as any switch in a fibre channel network does.
Fabric mode switches have the following characteristics:

■ Unique domain ID per virtual storage area network (VSAN)
■ Participation in all domain services (zoning, fabric security, Fibre Channel ID (FCID) allocation, and so on)
■ Support for interoperability modes

When the fibre channel module is configured in NPV mode, it does not operate as a typical fibre channel switch; instead, it leverages a service, N-Port ID Virtualization (NPIV), on the upstream or core fibre channel switch for domain services. The switch operates in a similar fashion to an NPIV-enabled host on the fabric. The advantage NPV provides the network administrator is control of the number of domain IDs and points of management on a fibre channel network as it scales.

Note The fibre channel specification supports 239 domain IDs per VSAN; however, the reality is that many SAN vendors recommend and support a much lower number. Consult your storage vendor (Original Storage Manufacturer, OSM) for specific scalability numbers.

Additional benefits of NPV include the capability to manage the fibre channel switch as a discrete entity for tasks such as software management and debugging of the fibre channel network. NPV also enables network administrators to connect FCoE hosts to non-FCoE-enabled SANs and simplifies third-party interoperability concerns, because the NPV-enabled fibre channel module does not participate in domain operations or perform local switching. This enables multivendor topologies to be implemented without the restrictions that interoperability mode requires.

The fibre channel module in the Nexus 5000 introduces a new port type to the fibre channel network when in NPV mode: the NP-port. The NP-port proxies fabric login (FLOGI) requests from end stations and converts them to Fabric Discoveries (FDISC) dynamically and transparently to the end device.
The result is that end systems see the NPV-enabled switch as a Fabric Port (F-port), and the upstream/core switch sees the NPV-enabled switch's NP-port as it would an NPIV-enabled host N-port. Figure 8-3 illustrates the port roles used in an NPV-enabled network.

Figure 8-3 Port Roles in an NPV-Enabled Network (host N Port — F Port on the NPV edge switch; NP Port on the NPV edge switch — F Port on the core/upstream switch)

N-Port ID Virtualization

A key component in the proper operation of NPV is N-Port ID Virtualization (NPIV) on the core/upstream fibre channel switch. NPIV is an industry-standard technology defined by the T11 committee as part of the Fibre Channel Link Services (FC-LS) specification; it enables multiple N Port IDs, or FCIDs, to share a single physical N Port. Prior to NPIV, it was not possible to have a system that used multiple logins per physical port—it was a one-login-to-one-port mapping. With the increasing adoption of technologies such as virtualization, the need to allow multiple logins arose. NPIV operates by using Fabric Discovery (FDISC) requests to obtain additional FCIDs.

Enabling NPV mode can cause the current configuration to be erased and the device rebooted. It is therefore recommended that NPV be enabled prior to completing any additional configuration, as demonstrated in Example 8-3.

Example 8-3 Enabling NPV Mode

Verify that boot variables are set and the changes are saved. Changing to npv mode
erases the current configuration and reboots the switch in npv mode.
Do you want to continue? (y/n): y
Unexporting directories for NFS kernel daemon...done.
Stopping NFS kernel daemon: rpc.mountd rpc.nfsd...done.
Unexporting directories for NFS kernel daemon... done.
Stopping portmap daemon: portmap.
Stopping kernel log daemon: klogd.
Sending all processes the TERM signal... done.
Sending all processes the KILL signal... done.
Unmounting remote filesystems... done.
Deactivating swap...done.
Unmounting local filesystems...done.
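The listing in Example 8-3 shows only the output of the mode change, not the commands that trigger it. A minimal sketch of the two sides follows; the command names reflect NX-OS/SAN-OS of this era and the switch names are illustrative, so verify against your release's configuration guide:

```
! On the core/upstream switch: enable NPIV so the edge switch can
! request multiple FCIDs over a single physical link.
MDS-Core(config)# feature npiv

! On the Nexus 5000 edge switch: place the FC module into NPV mode.
! This erases the running configuration and reboots the switch.
Nexus5000(config)# npv enable

! After the reboot, hedged verification commands (output omitted):
Nexus5000# show npv status        ! NP uplink state toward the core
Nexus5000# show npv flogi-table   ! end-station logins proxied as FDISC
```

Enabling NPIV on the core first ensures the edge switch's proxied FDISC requests are accepted as soon as it comes back up in NPV mode.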
mount: you must specify the filesystem type
Starting reboot command: reboot
Rebooting... Restarting system.

FCoE Configuration

The remainder of this chapter provides a step-by-step walkthrough of a simple FCoE configuration, depicted in the topology shown in Figure 8-4.

Figure 8-4 Network Topology for FCoE Configuration (MDS fc 1/9 — fc 2/1 Nexus 5000 eth 1/34)

The first step in the configuration is to configure the connectivity between the Nexus 5000 and the existing SAN environment. The following examples show this configuration on a Cisco MDS SAN director. Example 8-4 shows how to configure the ISL between the MDS and the Nexus 5000.

Example 8-4 Enabling the MDS Port

MDS-FCOE# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS-FCOE(config)# int fc1/9
MDS-FCOE(config-if)# no shutdown

Example 8-5 shows how to configure the fibre channel uplink to the SAN core.

Example 8-5 Configuring the FC Uplink

Nexus5000# conf t
Nexus5000(config)# int fc2/1
Nexus5000(config-if)# no shut
Nexus5000(config-if)# exit
Nexus5000(config)# sho run int fc2/1
version 4.0(1a)N2(1a)
interface fc2/1
  no shutdown

To verify that the MDS and Nexus 5000 are configured correctly, the FC uplink port can be inspected, as shown in Example 8-6. Zoning is a typical task within FC SANs, used to restrict the storage targets that a particular host has access to. Zonesets can be thought of as access lists that grant particular initiators access to a target. FCoE is transparent to the zoning process. The active zoneset from the SAN fabric should be automatically downloaded to the Nexus 5000. To verify the zoneset, issue the show zoneset active command on the Nexus 5000, as demonstrated in Example 8-7.
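The zoning itself is configured on the existing fabric rather than on the Nexus 5000, and the chapter does not show that step. A hedged sketch on the MDS might look like the following, reusing the zone and zoneset names and pWWNs that appear in Example 8-7 (the interactive prompts are illustrative):

```
! Hypothetical MDS zoning sketch; the pWWNs match the initiator (CNA)
! and target shown in the chapter's Example 8-7 output.
MDS-FCOE(config)# zone name z_FCoE vsan 1
MDS-FCOE(config-zone)# member pwwn 10:00:00:00:c9:76:f7:e5   ! host CNA
MDS-FCOE(config-zone)# member pwwn 50:06:01:61:41:e0:d5:ad   ! storage target
MDS-FCOE(config-zone)# exit
MDS-FCOE(config)# zoneset name ZS_FCoE vsan 1
MDS-FCOE(config-zoneset)# member z_FCoE
MDS-FCOE(config-zoneset)# exit
MDS-FCOE(config)# zoneset activate name ZS_FCoE vsan 1
```

Once the zoneset is activated on the fabric, the Nexus 5000 should learn it automatically over the ISL, which is what Example 8-7 then verifies.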
Example 8-6 Verifying the FC Uplink

Nexus5000(config)# sho interface fc2/1
fc2/1 is trunking
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:41:00:0d:ec:a3:fd:00
    Peer port WWN is 20:09:00:0d:ec:34:37:80
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 1
    Speed is 2 Gbps
    Transmit B2B Credit is 255
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    Trunk vsans (admin allowed and active) (1)
    Trunk vsans (up)                       (1)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    5 minutes input rate 784 bits/sec, 98 bytes/sec, 1 frames/sec
    5 minutes output rate 736 bits/sec, 92 bytes/sec, 1 frames/sec
      492492 frames input, 38215024 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      492223 frames output, 28507204 bytes
        0 discards, 0 errors
      2 input OLS, 4 LRR, 1 NOS, 0 loop inits
      6 output OLS, 4 LRR, 2 NOS, 0 loop inits
      16 receive B2B credit remaining
      255 transmit B2B credit remaining

Example 8-7 Verifying the Active Zoneset

Nexus5000# sho zoneset active
zoneset name ZS_FCoE vsan 1
  zone name z_FCoE vsan 1
    fcid 0x970002 pwwn 10:00:00:00:c9:76:f7:e5
    fcid 0x6a00ef pwwn 50:06:01:61:41:e0:d5:ad

Each VSAN is represented as an FCoE VLAN, and this mapping is required for FCoE functionality. Example 8-8 configures VLAN 100 as an FCoE VLAN for VSAN 1.

Example 8-8 Creating an FCoE VLAN

Nexus5000# conf t
Nexus5000(config)# vlan 100
Nexus5000(config-vlan)# name FCoE
Nexus5000(config-vlan)# fcoe vsan 1
Nexus5000(config-vlan)# exit

Note The FCoE VLAN should not be the native VLAN of any trunk link.

Finally, you define the Virtual Fibre Channel (vfc) interface and bind it to a physical interface. Example 8-9 shows how to create a vfc interface and bind it to a physical interface.
Example 8-9 Creating a Virtual Fibre Channel Interface

Nexus5000# conf t
Nexus5000(config)# interface vfc34
Nexus5000(config-if)# no shutdown
Nexus5000(config-if)# bind interface ethernet1/34
Nexus5000(config-if)# exit
Nexus5000(config)# exit

To carry both data traffic and FCoE traffic, the physical interface must be defined as an 802.1Q trunk carrying the necessary VLANs. Example 8-10 defines the physical interface as an 802.1Q trunk and enables the data and FCoE VLANs.

Example 8-10 Creating an 802.1Q Trunk

Nexus5000# conf t
Nexus5000(config)# interface ethernet 1/34
Nexus5000(config-if)# switchport mode trunk
Nexus5000(config-if)# switchport trunk native vlan 89
Nexus5000(config-if)# spanning-tree port type edge trunk
Nexus5000(config-if)# switchport trunk allowed vlan 89,100
Nexus5000(config-if)# exit

Finally, you can verify that the vfc interface is operational with the show interface vfc34 brief command, which provides a brief overview of the status of a vfc interface, as demonstrated in Example 8-11.

Example 8-11 Verifying VFC Interfaces

Nexus5000# show int vfc34 brief

Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
vfc34      1     F             up            F    auto

Example 8-12 shows a more detailed status of the VFC interface.
Example 8-12 VFC Interface Information

Nexus5000# show interface vfc34
vfc34 is up
    Bound interface is Ethernet1/34
    Hardware is GigabitEthernet
    Port WWN is 20:21:00:0d:ec:a3:fd:3f
    Admin port mode is F
    snmp link state traps are enabled
    Port mode is F, FCID is 0x970002
    Port vsan is 1
    Beacon is turned unknown
    5 minutes input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    5 minutes output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      212797 frames input, 18750280 bytes
        0 discards, 0 errors
      213056 frames output, 57436752 bytes
        0 discards, 0 errors

Summary

Unified Fabric offers several benefits to customers, including

■ Lower capital expenditures: Through the reduction of adapters, cables, and ports required within the infrastructure.
■ Lower operational expenses: Through the reduction of adapters, cables, and ports drawing power within the data center.
■ Reduced deployment cycles: Unified Fabric provides a "wire once" model, where all LAN, SAN, IPC, and management traffic is available to every server without requiring additional connectivity components.
■ Higher availability: Fewer adapters and ports mean fewer components that could fail.

By taking advantage of enhancements to traditional Ethernet technologies, and the emergence of technologies such as FCoE, customers can realize these benefits with minimal disruption to operational models. This chapter showed the basic Nexus 5000 configurations necessary to provide a unified access method for LAN data traffic and SAN storage traffic.

Nexus 1000V

This chapter covers the following topics:

■ Hypervisor and vSphere Introduction
■ Nexus 1000V System Overview
■ Nexus 1000V Switching
■ Nexus 1000V Installation
■ Nexus 1000V Port Profiles

Hypervisor and vSphere Introduction

A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems to share a single hardware host.
Each operating system appears to have the host's processor, memory, and other resources to itself. The hypervisor controls the host processor, memory, and other resources and allocates what is needed to each operating system. Each operating system is called a guest operating system, or virtual machine, running on top of the hypervisor.

The Cisco Nexus 1000V Series Switch is a software-based Cisco NX-OS switch with intelligent features designed specifically for integration with VMware vSphere 4 environments. As more organizations move toward cloud services, VMware vSphere manages collections of CPUs, storage, and networking as a seamless and dynamic operating environment. The Nexus 1000V operates inside the VMware ESX hypervisor. The Cisco Nexus 1000V Series supports Cisco VN-Link server virtualization technology to provide:

■ Policy-based virtual machine (VM) connectivity
■ Mobile VM security
■ Network policy
■ A nondisruptive operational model for your server virtualization and networking teams

With the Nexus 1000V, virtual servers have the same network configuration, security policy, diagnostic tools, and operational models as physical servers. The Cisco Nexus 1000V Series is certified by VMware to be compatible with VMware vSphere, vCenter, ESX, and ESXi. VMware vCenter provides a single point of management for VMware virtual environments, providing access control, performance monitoring, and configuration. The main difference between ESX and ESXi is that ESXi does not contain the service console.

The VSM can be deployed in high-availability mode, with each VSM in an active-standby pair; the active and standby VSMs should run on separate VMware ESX hosts. This requirement helps ensure high availability if one of the VMware ESX servers fails. A hardware appliance will be available as an alternative option in the future.
Nexus 1000V System Overview

The Cisco Nexus 1000V Series Switch has two major components:

■ Virtual Ethernet Module (VEM): Executes inside the hypervisor
■ External Virtual Supervisor Module (VSM): Manages the VEMs

Figure 9-1 shows the Cisco Nexus 1000V architecture.

The Cisco Nexus 1000V Virtual Ethernet Module (VEM) executes as part of the VMware ESX or ESXi kernel. The VEM uses the VMware vNetwork Distributed Switch (vDS) application programming interface (API). The API is used to provide advanced networking capability to virtual machines, allowing for integration with VMware VMotion and Distributed Resource Scheduler (DRS). The VEM takes configuration information from the VSM and performs Layer 2 switching and advanced networking functions:

■ Port channels
■ Quality of service (QoS)
■ Security, including private VLANs, access control lists, and port security
■ Monitoring, including NetFlow, Switch Port Analyzer (SPAN), and Encapsulated Remote SPAN (ERSPAN)

Figure 9-1 Cisco Nexus 1000V Series Architecture (two VMware ESX servers, each running a VEM and VMs, with active and standby VSMs and vCenter Server; Control VLAN 700, Packet VLAN 701, ERSPAN/NAS/VMotion VLAN 702, VM Client/Server VLAN 100, Management Console VLAN 699)

Note For more details on VMware DRS and HA, please visit the following links:

The Cisco Nexus 1000V Series VSM controls multiple VEMs as one logical modular switch. Instead of physical line card modules, the VSM supports multiple VEMs running in software inside the physical servers. Configuration is performed through the VSM and is automatically propagated to the VEMs.
Instead of configuring soft switches inside the hypervisor on a host-by-host basis, administrators can define configurations for immediate use on all VEMs being managed by the VSM from a single interface.

There are two distinct VLAN interfaces used for communication between the VSM and VEM. These two VLANs, the Control and Packet VLANs, require Layer 2 adjacency between the VEM and VSM. The Control VLAN is used for:

■ Extended management communication between the VEM and VSM, similar to the control communication of chassis-based solutions such as the Nexus 7000 and Catalyst 6500
■ Carrying low-level messages to ensure proper configuration of the VEM
■ Maintaining a 2-second heartbeat between the VSM and the VEM (timeout 6 seconds)
■ Maintaining synchronization between primary and secondary VSMs

The Packet VLAN is used for carrying network packets from the VEM to the VSM, such as Cisco Discovery Protocol (CDP) and Internet Group Management Protocol (IGMP) traffic.

By using the capabilities of Cisco NX-OS, the Cisco Nexus 1000V Series provides the following benefits:

■ Flexibility and scalability: Port profiles provide configuration of ports by category, enabling the solution to scale to a large number of ports.
■ High availability: Synchronized, redundant VSMs enable rapid, stateful failover and ensure an always-available virtual machine network.
■ Manageability: The Cisco Nexus 1000V Series can be accessed through the XML management interface, Cisco command-line interface (CLI), Simple Network Management Protocol (SNMP), and CiscoWorks LAN Management Solution (LMS).

Note With the release of the Nexus 1000V software release 4.0(4)SV1(2), Layer 3 Control between the VSM and the VEM is supported.
With Layer 3 Control, the spanning of the Control and Packet VLANs is no longer required; this is covered in more detail in the Layer 3 Control section.

The VSM is also integrated with VMware vCenter Server so that the virtualization administrator can take advantage of the network configuration in the Cisco Nexus 1000V.

The Cisco Nexus 1000V includes the port profile feature to address the dynamic nature of server virtualization. Port profiles allow you to define VM network policies for different types or classes of VMs from the Cisco Nexus 1000V VSM. The port profiles are applied to individual VM virtual network interface cards (NICs) through VMware's vCenter GUI for transparent provisioning of network resources. Port profiles are a scalable mechanism for configuring networks with large numbers of VMs.

Network and security policies defined in the port profile follow the VM throughout its lifecycle, whether it is being migrated from one server to another, suspended, hibernated, or restarted. In addition to migrating the policy, the Cisco Nexus 1000V VSM also moves the VM's network state, such as the port counters and flow statistics. VMs participating in traffic-monitoring activities, such as Cisco NetFlow or Encapsulated Remote Switched Port Analyzer (ERSPAN), can continue these activities uninterrupted by VMotion/migration operations. When a specific port profile is updated, the Cisco Nexus 1000V automatically provides live updates to all the virtual ports using that same port profile. With the capability to migrate network and security policies through VMotion, regulatory compliance is much easier to enforce with the Cisco Nexus 1000V, because the security policy is defined in the same way as for physical servers and constantly enforced by the switch.

Nexus 1000V Switching Overview

The VEM differentiates between the following interface types: VEM virtual ports and VEM physical ports.
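The port profile workflow described above can be sketched on the VSM as follows. This is a minimal illustration, not from the chapter: the profile name, VLAN number, and prompt are hypothetical, and the exact command set varies by Nexus 1000V release:

```
! Hypothetical VSM sketch: define a policy once; vCenter then exposes
! it as a port group that can be attached to individual VM vNICs.
n1000v(config)# port-profile type vethernet WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# vmware port-group   ! publish to vCenter
n1000v(config-port-prof)# state enabled
```

Because the policy lives in the profile rather than on individual ports, editing the profile later updates every vEth port that inherited it, which is the live-update behavior described above.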
The Nexus 1000V supports the following scalability numbers:

■ 2 VSMs (high availability)
■ 64 VEMs
■ 512 active VLANs
■ 2048 ports (Eth + vEth)
■ 256 port channels

Each VEM supports:

■ 216 ports (vEths)
■ 32 physical NICs
■ 8 port channels

VEM virtual ports are classified into three port types:

■ Virtual NIC: There are three types of virtual NIC in VMware:
  ■ Virtual NIC (vnic): Part of the VM; it represents the VM's network port, which is plugged into the switch.
  ■ Virtual kernel NIC (vmknic): Used by the hypervisor for management, VMotion, iSCSI, NFS, and other network access needed by the kernel. This interface carries the IP address of the hypervisor itself and is also bound to a virtual Ethernet port.
  ■ vswif: The VMware Service Console network interface. The first Service Console (vswif) interface is always referenced as "vswif0". The vswif interface is used as the VMware management port; these interface types map to a vEth port within the Nexus 1000V.
■ Virtual Ethernet (vEth) port: Represents a port on the Cisco Nexus 1000V Distributed Virtual Switch. These vEth ports are what the virtual "cables" plug into, and they move to whichever host the VM is running on. Virtual Ethernet ports are assigned to port groups.
■ Local Virtual Ethernet (lvEth) port: Dynamically selected for vEth ports needed on the host.

Note Local vEths do not move, and are addressable by the module/port number.

VEM physical ports are classified into three port types:

■ VMware NIC
■ Uplink port
■ Ethernet port

Each physical NIC in VMware is represented by an interface called a VMNIC. The VMNIC number is allocated during VMware installation, or when a new physical NIC is installed, and remains the same for the life of the host. Each uplink port on the host represents a physical interface.
It acts like an lvEth port; however, because physical ports do not move between hosts, the mapping is 1:1 between an uplink port and a VMNIC. Each physical port added to the Cisco Nexus 1000V appears as a physical Ethernet port, just as it would on a hardware-based switch.

Note For more information on interface relationship mapping, refer to the following URL:
