CCNP Equal Cost Multi Path (ECMP)


Intro

Equal Cost Multi Path (ECMP) lets a router install several equal-cost routes to the same destination and spread traffic across all of them. In a leaf-spine fabric this gives every leaf one parallel path per spine. Below we verify the routing with OSPF and then look at how CEF actually distributes flows over the paths.

Routing

Verification of Configuration

Leaf-1#show ip route
Gateway of last resort is not set


      10.0.0.0/8 is variably subnetted, 15 subnets, 4 masks
C        10.1.1.0/24 is directly connected, gig0/1
L        10.1.1.1/32 is directly connected, gig0/1
C        10.2.2.0/24 is directly connected, gig0/2
L        10.2.2.1/32 is directly connected, gig0/2
C        10.3.3.0/24 is directly connected, gig0/3
L        10.3.3.1/32 is directly connected, gig0/3

Note above that we have one uplink subnet for each spine switch.
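
For reference, a minimal sketch of the uplink configuration behind these connected routes, with the addresses taken from the table above (the interface numbering is assumed):

 interface gig0/1
  ip address 10.1.1.1 255.255.255.0
 !
 interface gig0/2
  ip address 10.2.2.1 255.255.255.0
 !
 interface gig0/3
  ip address 10.3.3.1 255.255.255.0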


O        10.2.0.0/16  [110/2] via 10.1.1.3, 1d19h, gig0/1   ← assuming we add summaries on the spines
                      [110/2] via 10.2.2.3, 1d19h, gig0/2
                      [110/2] via 10.3.3.3, 1d19h, gig0/3
O        10.3.0.0/16  [110/2] via 10.1.1.3, 1d19h, gig0/1   ← assuming we add summaries on the spines
                      [110/2] via 10.2.2.3, 1d19h, gig0/2
                      [110/2] via 10.3.3.3, 1d19h, gig0/3
O        10.4.0.0/16  [110/2] via 10.1.1.3, 1d19h, gig0/1   ← assuming we add summaries on the spines
                      [110/2] via 10.2.2.3, 1d19h, gig0/2
                      [110/2] via 10.3.3.3, 1d19h, gig0/3

Note above that we have three equal-cost paths to servers on other leafs: as many parallel paths as we have spines.
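
Those summaries do not appear by themselves. A minimal sketch of the OSPF pieces involved, assuming each leaf sits in its own area and the spines act as ABRs generating one /16 per leaf (the process ID, area numbers, and summary ranges are assumptions):

 ! On each spine, assuming leaf 2's prefixes live in area 2:
 router ospf 1
  area 2 range 10.2.0.0 255.255.0.0
 
 ! On each leaf. IOS installs up to four equal-cost OSPF paths by
 ! default, so three spines fit without tuning, but the limit can
 ! also be set explicitly:
 router ospf 1
  maximum-paths 3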


C        10.1.10.0/30 is directly connected, gig0/10
L        10.1.10.1/32 is directly connected, gig0/10
C        10.1.11.0/30 is directly connected, gig0/11
L        10.1.11.1/32 is directly connected, gig0/11
C        10.1.12.0/30 is directly connected, gig0/12
L        10.1.12.1/32 is directly connected, gig0/12

Note above that every server is in its own /30, with 10.$leaf.$down_port.1 as its default gateway.
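
One server-facing downlink on Leaf-1 as a sketch, following that addressing scheme (the /30 mask comes from the table above):

 interface gig0/10
  description Server in 10.1.10.0/30
  ip address 10.1.10.1 255.255.255.252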


Now we check the hash buckets CEF uses for load balancing over the spines, i.e. between leafs.

Leaf-4#show ip cef 10.5.0.0 internal
10.5.0.0/16, epoch 3, RIB[I], refcount 6, per-destination sharing
  sources: RIB
  feature space:
   Broker: linked, distributed at 4th priority
  ifnums:
   gig0/1(469): 10.1.1.5
   gig0/2(470): 10.2.2.5
   gig0/3(471): 10.3.3.5
  path 0625780C, path list 053A00B0, share 1/1, type attached nexthop, for IPv4
  nexthop 10.1.1.5 gig0/1, adjacency IP adj out of gig0/1, addr 10.1.1.5 058EF420
  path 0625787C, path list 053A00B0, share 1/1, type attached nexthop, for IPv4
  nexthop 10.2.2.5 gig0/2, adjacency IP adj out of gig0/2, addr 10.2.2.5 058EF280
  path 062578EC, path list 053A00B0, share 1/1, type attached nexthop, for IPv4
  nexthop 10.3.3.5 gig0/3, adjacency IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
 

  output chain:
    loadinfo 0588EE68, per-session, 3 choices, flags 0003, 6 locks
    flags: Per-session, for-rx-IPv4
    16 hash buckets
     < 0 > IP adj out of gig0/1, addr 10.1.1.5 058EF420
     < 1 > IP adj out of gig0/2, addr 10.2.2.5 058EF280
     < 2 > IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
     < 3 > IP adj out of gig0/1, addr 10.1.1.5 058EF420
     < 4 > IP adj out of gig0/2, addr 10.2.2.5 058EF280
     < 5 > IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
     < 6 > IP adj out of gig0/1, addr 10.1.1.5 058EF420
     < 7 > IP adj out of gig0/2, addr 10.2.2.5 058EF280
     < 8 > IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
     < 9 > IP adj out of gig0/1, addr 10.1.1.5 058EF420
     <10 > IP adj out of gig0/2, addr 10.2.2.5 058EF280
     <11 > IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
     <12 > IP adj out of gig0/1, addr 10.1.1.5 058EF420
     <13 > IP adj out of gig0/2, addr 10.2.2.5 058EF280
     <14 > IP adj out of gig0/3, addr 10.3.3.5 058FAKE0
     <15 >  -- not used
   Subblocks:
    None
Leaf-4#
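
The 16 buckets are dealt out round-robin over the three paths, so each path owns five buckets and bucket 15 is left unused, since 16 is not evenly divisible by 3. To see which path a given flow hashes to, the exact-route lookup can be used. A sketch with assumed server addresses (the exact output format varies by IOS release):

 Leaf-4#show ip cef exact-route 10.4.10.2 10.5.10.2
 10.4.10.2 -> 10.5.10.2 => IP adj out of gig0/2, addr 10.2.2.5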

Sometimes you need to tweak the load-balancing scheme, for example to include Layer 4 ports in the hash, with the command

 ip cef load-sharing algorithm include-ports source destination
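
This is entered in global configuration mode. With include-ports, the Layer 4 source and destination ports are fed into the hash as well, so two flows between the same pair of hosts can land on different spines. A sketch:

 Leaf-4(config)# ip cef load-sharing algorithm include-ports source destination
 Leaf-4(config)# end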

Done! We now have an operational ECMP leaf-spine network. Congratulations.

CEF with EtherChannel

Regular Interface (No Subinterfaces)

Step 1
View the Address Resolution Protocol (ARP) table.
RP/0/RSP0/CPU0:router# show arp 

Step 2
Verify that the LAG (link aggregation group) table is programmed properly in the hardware.
RP/0/RSP0/CPU0:router# show interface bundle-ether bundle-id 

Step 3
View the running configuration information.
RP/0/RSP0/CPU0:router# show running-config 

Step 4
View information about packets forwarded by CEF.
RP/0/RSP0/CPU0:router# show cef 

Step 5
View CEF hardware information for the ingress side.
RP/0/RSP0/CPU0:router# show cef hardware ingress location node-id

Step 6
View CEF hardware information for the egress side.
RP/0/RSP0/CPU0:router# show cef hardware egress location node-id 
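
For context, a minimal IOS XR sketch of the kind of bundle these steps verify (the bundle number, member interface, and addressing are assumptions):

 interface Bundle-Ether10
  ipv4 address 10.10.10.1 255.255.255.252
 !
 interface GigabitEthernet0/0/0/1
  bundle id 10 mode active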

Subinterface

Step 1
Troubleshoot Layer 3 IPv4 traffic as for a regular interface (see the steps above).
Step 2
Ensure that the VLAN tag on incoming traffic matches the encapsulation configured on the incoming subinterface.
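
A sketch of such a subinterface, assuming VLAN 100 on the bundle from the previous example (the address is an assumption):

 interface Bundle-Ether10.100
  encapsulation dot1q 100
  ipv4 address 10.20.20.1 255.255.255.252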

Ping Failed over Bundle
Step 1
View the ARP table.
RP/0/RSP0/CPU0:router# show arp 
Step 2
View the ARP information on the particular LC or RSP.
RP/0/RSP0/CPU0:router# show arp location node-id
Step 3
View detailed CEF hardware information for the ingress side.
RP/0/RSP0/CPU0:router# show cef hardware detail location node-id ingress 
Step 4
View the interface state and counters.
RP/0/RSP0/CPU0:router# show interface 
Step 5
Use the hash calculator (for example, the bundle-hash exec command) to determine which bundle member (interface) the failing flow hashes to, so you know which interface to test.
Step 6
Remove the interface from the bundle.
Step 7
Assign an IP address to the interface.
Step 8
Ping the interface.
Step 9
Ensure that the ARP is resolved between the router and the node being pinged.
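
Steps 6 through 8 as a sketch, assuming GigabitEthernet0/0/0/1 is the suspect member and 192.0.2.0/30 is a free test subnet:

 RP/0/RSP0/CPU0:router(config)# interface GigabitEthernet0/0/0/1
 RP/0/RSP0/CPU0:router(config-if)# no bundle id
 RP/0/RSP0/CPU0:router(config-if)# ipv4 address 192.0.2.1 255.255.255.252
 RP/0/RSP0/CPU0:router(config-if)# commit
 RP/0/RSP0/CPU0:router(config-if)# end
 RP/0/RSP0/CPU0:router# ping 192.0.2.2

If the ping succeeds and ARP resolves (step 9), the physical member is healthy and the problem more likely lies in the bundle hashing or hardware programming than in the link itself.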