● Configure IPv4 default routes on R4 and R6 pointing to R1's IPv4 address from the shared Ethernet segment.
● Configure IPv4 default route on R3 pointing to R1's IPv4 address from the shared Ethernet segment.
● Configure IPv4 default route on R5 pointing to R1's DMVPN cloud IPv4 address.
● Configure IPv4 static routes on R3 for R5’s Loopback0 prefix and on R5 for R3’s Loopback0 prefix through the DMVPN cloud.
● Configure R1 and R5 to run CDP over the DMVPN cloud with each other.
● Configure an IP SLA instance on R1 that pings R3’s connection to VLAN 13 every five seconds.
● Configure IPv4 policy-routing on R1 so that traffic from R4 is routed through R3 over the Ethernet link, and traffic from R6 is routed through R5 over the DMVPN cloud.
● Create two extended access-lists on R1, named FROM_R4 and FROM_R6:
● Access-list FROM_R4 should match all IPv4 traffic sourced from R4's IPv4 address on the shared Ethernet segment.
● Access-list FROM_R6 should match all IPv4 traffic sourced from R6's IPv4 address on the shared Ethernet segment.
● Use traceroute on R4 and R6 toward R3's and R5's Loopback0 prefixes to verify your configuration.
● Modify R1’s policy routing so that if R1 loses ICMP reachability to R3, traffic from R4 is rerouted to R5 over the DMVPN cloud.
● Modify R1’s policy routing so that if R1 loses R5 as a CDP neighbor, traffic from R6 is rerouted to R3 over the Ethernet link.
Verify the IP SLA configuration and its state, and also verify that R1 and R5 are CDP neighbors over the DMVPN cloud.
R1
show ip sla configuration
show ip sla statistics
show track
show cdp neighbors Tunnel0
Verify that traffic is policy-routed as requested.
R4
traceroute 10.1.3.3
traceroute 10.1.5.5
R6
traceroute 10.1.3.3
traceroute 10.1.5.5
Verify the policy-routing configuration, confirm that traffic has matched the access-lists, and note that the tracking object is in the Up state.
R1
show ip policy
show ip interface GigabitEthernet0/0.146 | i Policy
show route-map
Because a regular policy-routing configuration is static and only locally significant, downstream network failures do not automatically update the router's forwarding policy. To resolve this design problem, R1 needs some way to track end-to-end reachability over the links used for outbound forwarding through policy routing.
The two methods illustrated in this example are the IP SLA and Enhanced Object Tracking features, and CDP-based next-hop verification. With IP SLA configured, R1 tracks the end-to-end circuit status of VLAN 13 through ICMP pings.
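For reference, the pieces of the solution configuration (shown in full at the end of this task) that tie these features together are the ICMP-echo probe, the tracking object, and the first route-map sequence:
ip sla 1
icmp-echo 172.16.13.3 source-interface GigabitEthernet0/0.13
frequency 5
ip sla schedule 1 start-time now life forever
!
track 1 ip sla 1 state
!
route-map POLICY_ROUTING permit 10
match ip address FROM_R4
set ip next-hop verify-availability 172.16.13.3 1 track 1
set ip default next-hop 172.16.0.5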
When R3’s connection to VLAN 13 goes down, R1’s SLA instance reports its status as down, which in turn brings the tracked object down. The tracked object is referenced in the route-map through the syntax set ip next-hop verify-availability 172.16.13.3 1 track 1.
This means that when tracked object 1 is down, the next hop 172.16.13.3 is not used; instead, this route-map sequence fails over to the “default” next hop of 172.16.0.5. Let's disable R3's Ethernet link on VLAN 13:
R1
debug track state
R3
configure terminal
interface GigabitEthernet0/0.13
shutdown
With debug track enabled on R1, the following log message should be displayed; verify that the tracking object's state is Down.
R1
%TRACK-6-STATE: 1 ip sla 1 state Up -> Down
show track
Verify that traffic received from R4 is now rerouted over the DMVPN cloud, based on the set ip default next-hop 172.16.0.5 route-map entry.
R4
traceroute 10.1.5.5
Note: Re-activate R3's Ethernet link on VLAN 13.
R3
configure terminal
interface GigabitEthernet0/0.13
no shutdown
With CDP tracking for policy routing, R1 looks into its CDP table to see whether a neighbor is installed with an IP address that matches the next-hop value set in the route-map.
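The relevant route-map sequence and the CDP settings on the tunnel interfaces, again taken from the solution configuration at the end of this task, are:
route-map POLICY_ROUTING permit 20
match ip address FROM_R6
set ip next-hop 172.16.0.5
set ip next-hop verify-availability
set ip default next-hop 172.16.13.3
!
interface Tunnel0
cdp enable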
In this case, the combination of set ip next-hop 172.16.0.5, set ip next-hop verify-availability, and set ip default next-hop 172.16.13.3 means that if there is no CDP neighbor with the IP address 172.16.0.5, traffic matching this sequence is routed to 172.16.13.3 instead. Let's disable R1's DMVPN interface to trigger the CDP failure:
R1
configure terminal
interface Tunnel0
shutdown
Note:
Normally, you would disable R5's DMVPN interface to trigger the CDP failure on R1, but on CSR 1000v routers CDP next-hop tracking does not seem to work as expected. Eventually, after 180 seconds (the default CDP hold time), R5's entry will time out of R1's CDP table.
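As an optional tweak that is not required by the task (a sketch assuming default CDP behavior on the platform), the CDP timers can be shortened globally so that a lost neighbor ages out faster than the default 180 seconds:
R1
configure terminal
! advertise every 5 seconds instead of the default 60
cdp timer 5
! age out a silent neighbor after 15 seconds instead of 180
cdp holdtime 15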
R1
show cdp neighbors Tunnel0
After the CDP hold time expires, check again and verify that R5 is no longer listed:
R1
show cdp neighbors Tunnel0
Verify that traffic received from R6 is now rerouted over the Ethernet link to R3, based on the set ip default next-hop 172.16.13.3 route-map entry.
R6
traceroute 10.1.3.3
R1
ip sla 1
icmp-echo 172.16.13.3 source-interface GigabitEthernet0/0.13
frequency 5
!
ip sla schedule 1 start-time now life forever
track 1 ip sla 1 state
!
ip access-list extended FROM_R4
permit ip host 172.16.146.4 any
!
ip access-list extended FROM_R6
permit ip host 172.16.146.6 any
!
route-map POLICY_ROUTING permit 10
match ip address FROM_R4
set ip next-hop verify-availability 172.16.13.3 1 track 1
set ip default next-hop 172.16.0.5
!
route-map POLICY_ROUTING permit 20
match ip address FROM_R6
set ip next-hop 172.16.0.5
set ip next-hop verify-availability
set ip default next-hop 172.16.13.3
!
interface GigabitEthernet0/0.146
ip policy route-map POLICY_ROUTING
!
interface Tunnel0
cdp enable
R3
ip route 0.0.0.0 0.0.0.0 172.16.13.1
ip route 10.1.5.5 255.255.255.255 172.16.0.5
R4
ip route 0.0.0.0 0.0.0.0 172.16.146.1
R5
ip route 0.0.0.0 0.0.0.0 172.16.0.1
ip route 10.1.3.3 255.255.255.255 172.16.0.3
!
interface Tunnel0
cdp enable
R6
ip route 0.0.0.0 0.0.0.0 172.16.146.1