IP Multicast Configuration

Dr. Mohit Bansal, Canada
Published: 25-10-2017
Cisco NX-OS was designed to be a data center-class operating system. A component of such an operating system is support for a rich set of IP multicast features, including key capabilities such as Internet Group Management Protocol (IGMP), Protocol Independent Multicast (PIM) sparse mode, Source Specific Multicast (SSM), Multicast Source Discovery Protocol (MSDP), and Multiprotocol Border Gateway Protocol (MBGP). This chapter focuses on the following components:

■ Multicast Operation
■ PIM Configuration on Nexus 7000
■ IGMP Operation
■ IGMP Configuration on Nexus 7000
■ IGMP Configuration on Nexus 5000
■ IGMP Configuration on Nexus 1000V
■ MSDP Configuration on Nexus 7000

Multicast Operation

As a technology, IP multicast enables a single flow of traffic to be received by multiple destinations in an efficient manner. This provides optimal use of network bandwidth: destinations that do not want to receive the traffic do not have it sent to them. Multiple methodologies are used to provide this functionality, covering aspects such as the discovery of sources and receivers and the delivery mechanisms. Multicast is a network-centric technology, and as such the network equipment between the sender of multicast traffic and the receivers must be "multicast-aware" and understand the services and addressing used by multicast.

In IPv4, multicast uses a block of addresses that has been set aside by the Internet Assigned Numbers Authority (IANA). This range of addresses is the Class D block, 224.0.0.0 through 239.255.255.255, and within this block a range has been set aside for private intranet usage, similar to RFC 1918 addressing for unicast. RFC 2365, "Administratively Scoped IP Multicast," documents this usage and allocates addresses in the 239.0.0.0 through 239.255.255.255 range for private use.

Each individual multicast address can represent a group that receivers request to join. The network listens for receivers to signal their requirement to join a group and then begins to forward and replicate the data from the source to the receivers that join the group. This is significantly more efficient than flooding the traffic to all systems on a network only to have it discarded.

Multicast Distribution Trees

Multiple methods can be used to control and optimize the learning and forwarding of multicast traffic through the network. A key concept to understand is that of a distribution tree, which represents the path that multicast data takes across the network between the sources of traffic and the receivers. NX-OS can build different multicast distribution trees to support different multicast technologies such as Source Specific Multicast (SSM), Any Source Multicast (ASM), and Bidirectional (Bidir).

The first multicast distribution tree is the source tree, which represents the shortest path that multicast traffic follows through the network from the source transmitting to a group address to the receivers that request the traffic from the group. This is referred to as the Shortest Path Tree (SPT). Figure 4-1 depicts a source tree for group 239.0.0.1 with a source on Host A and receivers on Hosts B and C.

The next multicast distribution tree is the shared tree.
The shared tree represents a shared distribution path that multicast traffic follows through the network from a network-centric function called the Rendezvous Point (RP) to the receivers that request the traffic from the group. The RP creates a source tree, or SPT, to the source. The shared tree can also be referred to as the RP tree, or RPT. Figure 4-2 depicts a shared tree, or RPT, for group 239.0.0.10 with a source on Host A. Router C is the RP, and the receivers are Hosts B and C.

The final multicast distribution tree is the bidirectional shared tree, or Bidir, which represents a shared distribution path that multicast traffic follows through the network from the RP, or shared root, to the receivers that request the traffic from the group. The capability to send multicast traffic from the shared root can provide a more efficient method of traffic delivery and optimize the amount of state the network must maintain.

[Figure 4-1 Source Tree or SPT for 239.0.0.1]

Note: One of the primary scalability considerations in a multicast design is the amount of information the network needs to maintain for the multicast traffic to work. This information is referred to as state and contains the multicast routing table, information on senders and receivers, and other metrics on the traffic.

Figure 4-3 depicts a bidirectional shared tree for group 239.0.0.10 with a source on Host A. Router D is the RP, and the receivers are Hosts B and C.

[Figure 4-2 Shared Tree or RPT for 239.0.0.10]

Reverse Path Forwarding

An additional concept beyond multicast distribution trees that is important for multicast is Reverse Path Forwarding (RPF). Multicast, by design, is traffic not intended for every system on a network but rather sent only to receivers that request it. Routers in the network must form a path toward the source or RP. The path from the source to the receivers flows in the reverse direction from the path that was created when the receiver requested to join the group. Each incoming multicast packet undergoes an RPF check to verify that it was received on an interface leading toward the source. If the packet passes the RPF check, it is forwarded; if not, the packet is discarded. The RPF check minimizes the potential for duplicated packets and protects source integrity.

[Figure 4-3 Bidirectional Shared Tree for 239.0.0.25]

Protocol Independent Multicast (PIM)

With a solid understanding of the multicast distribution tree modes and the RPF check, the next concept is Protocol Independent Multicast (PIM), an industry-standard protocol developed to leverage any existing underlying Interior Gateway Protocol (IGP), such as Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), or Routing Information Protocol (RIP), to determine the path of multicast packets in the network. PIM does not maintain its own routing table and as such has much lower overhead when compared with other multicast routing technologies.
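Because both the RPF check and PIM path selection reuse the unicast routing table, the interface on which a router will accept multicast from a given source can be predicted with an ordinary unicast route lookup. The following is a minimal, hedged sketch using a router name and address from the examples later in this chapter:

Greed# show ip route 192.168.1.1

The interface of the best unicast route returned (Po10 in the output shown later in Example 4-11) is the interface on which multicast traffic sourced from 192.168.1.1 passes the RPF check; Example 4-11 also shows how a static multicast route can override this behavior.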
In general, PIM can operate in two modes: dense mode and sparse mode. NX-OS supports only sparse mode, because most dense mode applications have been deprecated from networks and the flood-and-prune behavior of dense mode is not efficient for modern data centers.

PIM sparse mode mechanics entail neighbor relationships between PIM-enabled devices. These neighbor relationships determine the PIM-enabled paths in a network and enable the building of distribution trees through the network. As mentioned, PIM leverages the existing unicast routing table for path selection. This gives PIM a dynamic capability to adapt the multicast topology to match the unicast topology as the network changes due to link failures, system maintenance, or administrative policy.

The NX-OS implementation of PIM supports three modes:

■ Any Source Multicast (ASM): Uses RPs with a shared tree to discover sources and receivers. ASM also supports the capability for traffic to be switched from the shared tree to an SPT between the source and receiver if signaled to do so. ASM is the default mode when an RP is configured.

■ Source Specific Multicast (SSM): Unlike ASM or Bidir, does not use RPs and instead builds SPTs between the source and receivers. SSM sends an immediate join to the source and reduces state in the network. SSM is also frequently used to facilitate communication between PIM domains where sources are not dynamically learned via MSDP or other mechanisms. SSM relies on the receivers to use IGMPv3, which is discussed later in this chapter.

■ Bidirectional shared trees (Bidir): Similar to ASM in that it builds a shared tree between receivers and the RP, but unlike ASM it does not support switching over to an SPT. With Bidir, the routers closest to the receivers take on a role called Designated Forwarder (DF). This allows the source to send traffic directly to the DF without passing through the RP, which can be a significant benefit in some environments.

A final option is to configure static RPF routes, which force multicast traffic not to follow the unicast table. As with most static routes, the opportunity for dynamic failover and changes to the multicast routing might be compromised, so give careful consideration to employing this option.

The ASM, Bidir, SSM, and static RPF modes are typically deployed within a single PIM domain. In cases where multicast traffic needs to cross multiple PIM domains and Border Gateway Protocol (BGP) is used to interconnect the networks, the Multicast Source Discovery Protocol (MSDP) is typically used. MSDP advertises sources in each domain without needing to share domain-specific state, and it scales well.
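MSDP configuration on the Nexus 7000 is covered later in this chapter. As a rough sketch of the shape such a configuration takes (the peer address 192.0.2.1 and the use of loopback0 are illustrative assumptions, not values from this chapter's examples), an MSDP peering is enabled as a feature and anchored to a loopback:

feature msdp
! Use a stable loopback as the RP address placed in source-active (SA) messages
ip msdp originator-id loopback0
! Peer with the RP in the neighboring PIM domain, sourcing the TCP session
! from the same loopback
ip msdp peer 192.0.2.1 connect-source loopback0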
RPs

Rendezvous Points (RPs) are key to the successful forwarding of multicast traffic in ASM and Bidir configurations. Given this importance, there are multiple methods to configure RPs and learn about them in the network. RPs are routers that the network administrator selects to perform this role, and they serve as the shared root for RPTs. A network can have multiple RPs designated to service particular groups or a single RP that services all groups. Following are the four primary methods to configure RPs in NX-OS:

■ Static RPs: As the name implies, static RPs are statically configured on every router in the PIM domain. This is the simplest method for RP configuration, though it requires configuration on every PIM device.

■ Bootstrap Routers (BSRs): Distribute the same RP cache information to every router in the PIM domain. This is done by the BSR when it sends BSR messages out all PIM-enabled interfaces. These messages are flooded hop by hop to all routers in the network. The BSR candidate role is configured on the router, and an election is performed to determine the single BSR for the domain. When the BSR is elected, messages from candidate RPs are received, via unicast, for each multicast group. The BSR sends these candidate RPs out in BSR messages, and every router in the PIM domain runs the same algorithm against the list of candidate RPs. Because each router has the same list of candidate RPs and each router runs the same algorithm to determine the RP, every router selects the same RP.

Note: BSR is described in RFC 5059 and provides a nonproprietary method for RP definition.

Caution: Do not configure both the Auto-RP and BSR protocols in the same network. Auto-RP is the Cisco-proprietary implementation of what was standardized as BSR, and they serve the same purpose. As such, only one is needed. Inconsistent multicast routing can be observed if both are configured.

■ Auto-RP: A Cisco-proprietary protocol to define RPs in a network. Auto-RP was developed before BSR was standardized. Auto-RP uses candidate mapping agents and RPs to determine the RP for a group. A primary difference between Auto-RP and BSR is that Auto-RP uses multicast to deliver the candidate-RP group messages, on multicast group 224.0.1.39. The mapping agent sends the group-to-RP mapping table on multicast group 224.0.1.40.

■ Anycast-RP: Another methodology to advertise RPs in a network. There are two implementations of Anycast-RP: one uses MSDP, and the other is based on RFC 4610, PIM Anycast-RP. With Anycast-RP, the same IP address is configured on multiple routers, typically on a loopback interface. Because PIM uses the underlying unicast routing table, the closest RP is used. Anycast-RP is the only RP method that enables more than one RP to be active, which allows load balancing and additional fault-tolerance options.

Note: Anycast-RP doesn't advertise the RP to the multicast table but rather to the unicast table. The RP still needs to be dynamically discovered using technologies such as BSR or Auto-RP, or statically defined, to make a complete and working configuration.

PIM Configuration on Nexus 7000

PIM within NX-OS is compatible with PIM on IOS devices, enabling a smooth integration of Nexus equipment with existing gear.

Note: PIM is a Layer 3 protocol and as such does not run on the Nexus 5000, 2000, and 1000V switches because they are Layer 2 only.

The first step to configure PIM is to enable it in global configuration mode using the feature command. With the modular nature of NX-OS, using the feature command loads the PIM modular code into memory for execution. Without the feature enabled, it would not be resident in memory. In Example 4-1, PIM is enabled as a feature on Router Greed, and sparse mode is configured on interface e1/25 and Port-Channel 10 per the topology depicted in Figure 4-4, which serves as the topology for all the following PIM configuration examples.
[Figure 4-4 Basic Multicast Topology: Denial (Lo0 192.168.1.3/32) connects to Jealousy (Lo0 192.168.1.1/32) and Greed (Lo0 192.168.1.2/32) over E1/25 links (192.168.1.36/30 and 192.168.1.40/30); Jealousy and Greed connect over Po10 (192.168.1.32/30)]

Example 4-1 Enabling the PIM Feature and Basic Interface Configuration

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# feature pim
Greed(config)# int e1/25
Greed(config-if)# ip pim sparse-mode
Greed(config-if)# int port-channel10
Greed(config-if)# ip pim sparse-mode
Greed(config-if)# end
Greed# show run pim

Command: show running-config pim
Time: Sun Dec 20 20:01:34 2009

version 4.2(2a)
feature pim

ip pim ssm range 232.0.0.0/8

interface port-channel10
  ip pim sparse-mode

interface Ethernet1/25
  ip pim sparse-mode

Note: To add PIM to the configuration on an interface, it must be an L3 port. If L2 was selected as the default during initial setup, the no switchport command must be used before PIM can be configured.

With PIM sparse mode enabled, more options can be configured per interface, including authentication, priority, hello interval, border, and neighbor policy (a brief sketch of the priority and border options follows Example 4-2).

Note: You can find additional information about these options at Cisco.com at http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_2/nx-os/multicast/command/reference/mcr_cmds_i.html.

In Example 4-2, the hello timers are reduced from the default of 30,000 milliseconds to 10,000 milliseconds to improve convergence, and authentication is enabled on the PIM hellos for interface Port-Channel10.

Note: Changing PIM hello timers might increase the load on the router's control plane, and you might see an increase in CPU utilization.

Example 4-2 Configuration of PIM Hello Authentication and Tuning Hello Timers

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# int po10
Greed(config-if)# ip pim hello-authentication ah-md5 cisco
Greed(config-if)# ip pim hello-interval 10000
Greed(config-if)# end
Greed# show ip pim ne
PIM Neighbor Status for VRF "default"
Neighbor         Interface        Uptime    Expires   DR        Bidir-
                                                      Priority  Capable
192.168.1.42     Ethernet1/25     00:24:16  00:01:37  1         no
192.168.1.33     port-channel10   00:00:11  00:00:32  1         yes
Greed# show ip pim int po10
PIM Interface Status for VRF "default"
port-channel10, Interface status: protocol-up/link-up/admin-up
  IP address: 192.168.1.34, IP subnet: 192.168.1.32/30
  PIM DR: 192.168.1.34, DR's priority: 1
  PIM neighbor count: 1
  PIM hello interval: 10 secs, next hello sent in: 00:00:06
  PIM neighbor holdtime: 35 secs
  PIM configured DR priority: 1
  PIM border interface: no
  PIM GenID sent in Hellos: 0x1f081512
  PIM Hello MD5-AH Authentication: enabled
  PIM Neighbor policy: none configured
  PIM Join-Prune policy: none configured
  PIM Interface Statistics, last reset: never
    General (sent/received):
      Hellos: 56/32, JPs: 0/0, Asserts: 0/0
      Grafts: 0/0, Graft-Acks: 0/0
      DF-Offers: 0/0, DF-Winners: 0/0, DF-Backoffs: 0/0, DF-Passes: 0/0
    Errors:
      Checksum errors: 0, Invalid packet types/DF subtypes: 0/0
      Authentication failed: 39
      Packet length errors: 0, Bad version packets: 0, Packets from self: 0
      Packets from non-neighbors: 0
      JPs received on RPF-interface: 0
      (*,G) Joins received with no/wrong RP: 0/0
      (*,G)/(S,G) JPs received for SSM/Bidir groups: 0/0
      JPs policy filtered: 0
Greed#
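Of the remaining per-interface options mentioned before Example 4-2, DR priority and the domain border setting are common additions. The following is a hedged sketch rather than a configuration from the text; the priority value is illustrative only:

interface port-channel10
  ! Raise this router's priority in the PIM DR election on the segment
  ! (the default priority shown in Example 4-2 is 1)
  ip pim dr-priority 10
  ! Mark the interface as a PIM domain border so that bootstrap and
  ! Auto-RP messages are not sent or received across it
  ip pim border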
When PIM sparse mode is enabled on all interfaces that need to participate in multicast, the next step is to configure the RPs. As previously discussed, multiple methods of RP configuration exist.

Configuring Static RPs

The first configuration methodology for RPs is the static RP. Example 4-3 illustrates the steps required to configure a static RP on Router Jealousy.

Example 4-3 Configuration of a Static RP

Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# ip pim rp-address 192.168.1.1
Jealousy(config)# end
Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.1, (0), uptime: 00:00:05, expires: never,
  priority: 0, RP-source: (local), group ranges:
  224.0.0.0/4
Jealousy#

In the output in Example 4-3, the static RP configured supports all multicast traffic for 224.0.0.0/4. NX-OS enables the configuration of multiple RP addresses to service different group ranges. In Example 4-4, Jealousy's configuration is modified to use Denial as the RP for 238.0.0.0/8.

Example 4-4 Configuring a Group Range Per RP

Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# ip pim rp-address 192.168.1.3 group-list 238.0.0.0/8
Jealousy(config)# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.1, (0), uptime: 00:03:36, expires: never,
  priority: 0, RP-source: (local), group ranges:
  224.0.0.0/4
RP: 192.168.1.3, (0), uptime: 00:00:05, expires: never,
  priority: 0, RP-source: (local), group ranges:
  238.0.0.0/8
Jealousy(config)# end
Jealousy#

Configuring BSRs

The next RP configuration methodology is the Bootstrap Router (BSR). In Example 4-5, Greed is configured with both a bsr-candidate and a bsr rp-candidate policy for groups in the 239.0.0.0/8 range. This enables Greed to participate in BSR elections and, if elected as an RP, apply a policy determining which group ranges will be advertised. Jealousy is configured to listen to BSR messages as well.

Example 4-5 BSR Base Configuration

Greed# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip pim bsr bsr-candidate port-channel10
Greed(config)# ip pim bsr rp-candidate port-channel10 group-list 239.0.0.0/8
Greed(config)# end
Greed# show ip pim rp
PIM RP Status Information for VRF "default"
BSR: 192.168.1.34, next Bootstrap message in: 00:00:54,
  priority: 64, hash-length: 30
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.34, (0), uptime: 00:00:05, expires: 00:02:24,
  priority: 192, RP-source: 192.168.1.34 (B), group ranges:
  239.0.0.0/8
Greed#

On Jealousy, the configuration is modified to add BSR listen support.
Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR forward-only mode
BSR: Not Operational
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# ip pim bsr listen
Jealousy(config)# end
Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR: 192.168.1.34, uptime: 0.033521, expires: 00:02:09,
  priority: 64, hash-length: 30
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.34, (0), uptime: 0.033667, expires: 00:02:29,
  priority: 192, RP-source: 192.168.1.34 (B), group ranges:
  239.0.0.0/8
Jealousy#

Configuring BSR for Bidir is simply a matter of adding bidir to the rp-candidate command. Example 4-6 illustrates this, and you can see the change on Jealousy.

Example 4-6 Configuring BSR and Bidir

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip pim bsr bsr-candidate port-channel10
Greed(config)# ip pim bsr rp-candidate port-channel10 group-list 239.0.0.0/8 bidir
Greed(config)# end
Greed# show ip pim rp
PIM RP Status Information for VRF "default"
BSR: 192.168.1.34, next Bootstrap message in: 00:00:56,
  priority: 64, hash-length: 30
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.34, (1), uptime: 00:00:06, expires: 00:02:23,
  priority: 192, RP-source: 192.168.1.34 (B), group ranges:
  239.0.0.0/8 (bidir)
Greed# show ip pim group
PIM Group-Range Configuration for VRF "default"
Group-range       Mode    RP-address       Shared-tree-only range
232.0.0.0/8       SSM     -                -
239.0.0.0/8       Bidir   192.168.1.34     -

Jealousy is configured to listen to BSR messages.

Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# ip pim bsr listen
Jealousy(config)# end
Jealousy# show ip pim group
PIM Group-Range Configuration for VRF "default"
Group-range       Mode    RP-address       Shared-tree-only range
232.0.0.0/8       SSM     -                -
239.0.0.0/8       Bidir   192.168.1.34     -
Jealousy#

Configuring Auto-RP

NX-OS also supports Auto-RP, a Cisco-specific precursor to BSR. In Example 4-7, Greed is configured as both a mapping agent and a candidate RP. A mapping agent is a role a router can take in an Auto-RP network; it is responsible for RP elections based on information sent from candidate RPs. A candidate RP in an Auto-RP network advertises its capability to serve as an RP to the mapping agent. This can be useful to help scale networks as they grow. Jealousy is configured to listen to and forward Auto-RP messages.

Example 4-7 Configuring an Auto-RP Mapping Agent and Candidate RP

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip pim auto-rp listen forward
Greed(config)# ip pim auto-rp rp-candidate port-channel10 group-list 239.0.0.0/8
Greed(config)# ip pim auto-rp mapping-agent port-channel10
Greed(config)# end
Greed# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP RPA: 192.168.1.34, next Discovery message in: 00:00:50
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.34, (0), uptime: 00:00:46, expires: 00:02:53,
  priority: 0, RP-source: 192.168.1.34 (A), group ranges:
  239.0.0.0/8

Jealousy is configured to listen to and forward Auto-RP messages.

Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# ip pim auto-rp forward listen
Jealousy(config)# end
Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR forward-only mode
BSR: Not Operational
Auto-RP RPA: 192.168.1.34, uptime: 00:04:22, expires: 00:02:06
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.1.34, (0), uptime: 00:03:28, expires: 00:02:06,
  priority: 0, RP-source: 192.168.1.34 (A), group ranges:
  239.0.0.0/8
Jealousy#

Note: The commands ip pim send-rp-announce and ip pim auto-rp rp-candidate perform the same function and can be used as alternatives for each other with no impact to functionality. The commands ip pim send-rp-discovery and ip pim auto-rp mapping-agent perform the same function and can be used as alternates for each other with no impact to functionality.

Configuring Auto-RP for Bidir is simply a matter of adding bidir to the rp-candidate command. Example 4-8 illustrates this, and the change can be observed on Jealousy.

Example 4-8 Configuring Auto-RP and Bidir

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip pim auto-rp rp-candidate port-channel10 group-list 239.0.0.0/8 bidir
Greed(config)# end
Greed# show ip pim group
PIM Group-Range Configuration for VRF "default"
Group-range       Mode    RP-address       Shared-tree-only range
232.0.0.0/8       SSM     -                -
239.0.0.0/8       Bidir   192.168.1.34     -

Configuring Anycast-RP

An alternative configuration is PIM Anycast-RP, in which the same IP address is configured on multiple devices. This capability enables receivers to follow the unicast routing table to find the best path to the RP. It is commonly used in large environments where the desire is to minimize the impact of being an RP on a device and to provide rudimentary load balancing. In Example 4-9, both Greed and Jealousy have Loopback1 added to their configuration and defined as the PIM Anycast-RP for the network shown in Figure 4-5. This additional loopback is added to ease troubleshooting in the network and to easily identify anycast traffic.

Example 4-9 Configuring PIM Anycast-RP

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# int lo1
Greed(config-if)# desc Loopback for PIM Anycast-RP
Greed(config-if)# ip address 192.168.1.100/32
Greed(config-if)# ip router eigrp 100
Greed(config-if)# no shut
Greed(config-if)# exit
Greed(config)# ip pim anycast-rp 192.168.1.100 192.168.1.33
Greed(config)# end
Greed# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Anycast-RP 192.168.1.100 members:
  192.168.1.33

[Figure 4-5 PIM Anycast-RP Topology: the Figure 4-4 topology with Loopback1 192.168.1.100/32 added to both Jealousy and Greed]

The same configuration is applied to Jealousy, with the exception of the address used by the pim anycast-rp command.

Jealousy# config t
Enter configuration commands, one per line. End with CNTL/Z.
Jealousy(config)# int lo1
Jealousy(config-if)# desc Loopback for PIM Anycast-RP
Jealousy(config-if)# ip address 192.168.1.100/32
Jealousy(config-if)# ip router eigrp 100
Jealousy(config-if)# no shut
Jealousy(config-if)# exit
Jealousy(config)# ip pim anycast-rp 192.168.1.100 192.168.1.34
Jealousy(config)# end
Jealousy# show ip pim rp
PIM RP Status Information for VRF "default"
BSR forward-only mode
BSR: Not Operational
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Anycast-RP 192.168.1.100 members:
  192.168.1.34

Configuring SSM and Static RPF

Two methods for configuring support of multicast traffic do not rely on an RP: Source Specific Multicast (SSM) and static RPF entries. In Example 4-10, Greed is configured to support SSM on the 239.0.0.0/8 range of multicast addresses. SSM has the advantage of not requiring an RP to function, and in some topologies this can lend itself to more efficient routing through the network. The main considerations for using SSM are the requirement for the receivers to use IGMPv3 (a brief sketch follows Example 4-10) and support for SSM by the internetworking equipment. Static RPF, similar to static routing for unicast traffic, might be desirable where the topology is simple or lacks the multiple paths where a dynamic routing protocol would be advantageous.

Example 4-10 Configuration of SSM

Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip pim ssm range 239.0.0.0/8
This command overwrites default SSM route
Greed(config)# end
Greed# show ip pim group
PIM Group-Range Configuration for VRF "default"
Group-range       Mode    RP-address       Shared-tree-only range
239.0.0.0/8       SSM     -                -
Greed#

Note: NX-OS will display a warning about changing the default SSM configuration. NX-OS supports SSM on 232.0.0.0/8 by default.
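Because SSM depends on IGMPv3 membership reports from the receivers, the Layer 3 interfaces facing those receivers typically need IGMPv3 enabled as well; IGMPv2 is the NX-OS default. The following is a hedged sketch only (the VLAN interface number is an assumption for illustration), not a configuration from the text:

interface vlan 10
  ! Receivers using SSM must signal (S,G) membership with IGMPv3
  ip igmp version 3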
Finally, the configuration of static RPF entries enables the network administrator to define multicast routes through the network that do not follow the unicast routing table used by PIM. In Example 4-11, a static RPF entry is created on Greed to send 192.168.1.1/32 traffic through Denial via 192.168.1.42. Using static RPF entries can be desirable in networks where per-usage fees apply or where extremely high latency might be masked by the unicast routing protocol, such as satellite networks.

Example 4-11 Configuring a Static RPF Entry

Greed# show ip route 192.168.1.1
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

192.168.1.1/32, ubest/mbest: 1/0
    *via 192.168.1.33, Po10, [90/128576], 03:25:55, eigrp-100, internal
Greed# config t
Enter configuration commands, one per line. End with CNTL/Z.
Greed(config)# ip mroute 192.168.1.1/32 192.168.1.42
Greed(config)# end
Greed# show ip route 192.168.1.1
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

192.168.1.1/32, ubest/mbest: 1/1
    *via 192.168.1.33, Po10, [90/128576], 03:26:21, eigrp-100, internal
    **via 192.168.1.42, Eth1/25, [1/0], 00:00:06, mstatic

IGMP Operation

Internet Group Management Protocol (IGMP) is an important component of a multicast network. IGMP is the protocol used by a host to signal its desire to join a specific multicast group. The router that sees the IGMP join message begins to send the requested multicast traffic to the receiver. IGMP has matured over time, and there are currently three versions specified, named appropriately enough: IGMPv1, IGMPv2, and IGMPv3. IGMPv1 is defined in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 1112, IGMPv2 is defined in RFC 2236, and IGMPv3 is defined in RFC 3376.

Note: Most modern operating systems use IGMPv3, though there are legacy systems that do not yet support it.

IGMP works through membership reports. At its simplest, routers on a network receive an unsolicited membership report from hosts that want to receive multicast traffic. The router processes these requests and begins to send the multicast traffic to the host until either a timeout expires or a leave message is received.

IGMPv3 adds support for SSM, described previously in the chapter. Additionally, IGMPv3 hosts do not perform report suppression like IGMPv1 and IGMPv2 hosts do. IGMP report suppression is a methodology in which the switch sends only one IGMP report per multicast router query to avoid duplicate IGMP reports and preserve CPU resources.

IGMP works on routers that understand multicast traffic. In the case of switches that might operate only at Layer 2, a technology called IGMP snooping enables intelligent forwarding of multicast traffic without broadcast or flooding behaviors. IGMP snooping enables a Layer 2 switch to examine, or snoop, IGMP membership reports and send multicast traffic only to ports with hosts that ask for the specific groups. Without IGMP snooping, a typical Layer 2 switch would flood all multicast traffic to every port. This could be quite a lot of traffic and can negatively impact network performance.

IGMP Configuration on Nexus 7000

The Nexus 7000 is a Layer 3 switch and as such can do both full IGMP processing and IGMP snooping. By default, IGMP is enabled on an interface when any of the following conditions is met:

■ PIM is enabled on the interface.
■ A local multicast group is statically bound.
■ Link-local reports are enabled.

Note: IGMPv2 is enabled by default, though IGMPv3 can be specified to change the default.

In Example 4-12, PIM is configured on interface VLAN10, and IGMP in turn is enabled for the network shown in Figure 4-6.
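As a hedged sketch of that general pattern (the VLAN number, the SVI address, and the feature interface-vlan prerequisite are assumptions for illustration rather than the book's Example 4-12), enabling PIM sparse mode on an SVI implicitly enables IGMP on it, which can then be verified:

feature interface-vlan
feature pim
interface vlan 10
  ! Placeholder address for the receiver-facing SVI
  ip address 192.168.10.1/24
  ip pim sparse-mode
  no shutdown
! Enabling PIM on the SVI enables IGMP there; verify with:
! show ip igmp interface vlan 10
! show ip igmp groups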
