The ldpd daemon implements the Label Distribution Protocol (LDP), a standardised protocol that permits exchanging MPLS label information between MPLS devices. The LDP protocol creates a peering between devices in order to exchange that label information. This information is stored in the MPLS table of zebra, which injects that MPLS information into the underlying system (the Linux kernel or an OpenBSD system, for instance). ldpd provides the necessary options to create a Layer 2 VPN across an MPLS network. For instance, it is possible to interconnect several sites that share the same broadcast domain.
FRR implements LDP as described in RFC 5036; the other supported LDP standards are the following: RFC 6720, RFC 6667, RFC 5919, RFC 5561, RFC 7552, RFC 4447. Because MPLS is already available, FRR also supports RFC 3031.
The ldpd daemon can be invoked with any of the common options (Common Invocation Options).
The zebra daemon must be running before ldpd is invoked.
Configuration of ldpd is done in its configuration file ldpd.conf.
Let’s first introduce some definitions that help to better understand the LDP protocol:
LSR : Label Switch Router. Networking devices handling labels used to forward traffic between and through them.
LER : Label Edge Router. A networking device located at the edge of an MPLS network, generally between an IP network and an MPLS network.
LDP aims at sharing label information across devices. It tries to establish peering with remote LDP-capable devices, first by discovering them using UDP port 646, then by peering with them using TCP port 646. Once the TCP session is established, the label information is shared through label advertisements.
There are different label advertisement modes. The implementation currently supports the following combination: Liberal Label Retention + Downstream Unsolicited + Independent Control. The other advertisement modes are depicted below, and compared with the current implementation.
Enable or disable LDP daemon
The following command, located under the MPLS router node, configures the MPLS router-id of the local device.
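As a minimal sketch, assuming 2.2.2.2 is the local router-id (an example value, typically a loopback address, reused in the full configuration at the end of this section):
mpls ldp
! 2.2.2.2 is an example value; use a stable local address such as a loopback
router-id 2.2.2.2
!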
Configure LDP for the IPv4 or IPv6 address-family. Located under the MPLS router node, this subnode permits configuring the LDP neighbors.
Located under the MPLS address-family node, use this command to enable or disable LDP discovery per interface. IFACE stands for the interface name where LDP is enabled. By default it is disabled. Once this command is executed, the address-family interface node is configured.
Located under the MPLS address-family node, use this command to set the IPv4 or IPv6 transport-address used by the LDP protocol to establish its sessions over this address-family.
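A sketch combining the previous commands, assuming IPv4, an interface named eth1 and 2.2.2.2 as the transport address (all example values):
mpls ldp
address-family ipv4
! transport address used for the LDP TCP sessions (example value)
discovery transport-address 2.2.2.2
!
! enable LDP discovery on this interface (example interface name)
interface eth1
!
exit-address-family
!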
The following command, located under the MPLS router node, configures the password used with an LDP neighbor. That neighbor, if found, will have to comply with the configured password. PASSWORD is a clear text password; its digest is sent through the network.
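For instance, a sketch declaring a neighbor whose ID is 1.1.1.1 with the clear text password test (both values taken from the full configuration shown later in this section):
mpls ldp
! the neighbor must be configured with the same password on its side
neighbor 1.1.1.1 password test
!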
The following command, located under the MPLS router node, configures the session holdtime value, in seconds, for the given LDP neighbor ID. Configuring it triggers a keepalive mechanism. The value can be set between 15 and 65535 seconds. After that period without a response, the established LDP session will be considered down. By default, no holdtime is configured for the LDP devices.
The INTERVAL value ranges from 1 to 65535 seconds; the default value is 5 seconds. This is the interval between each hello message sent. The HOLDTIME value ranges from 1 to 65535 seconds; the default value is 15 seconds. That value is advertised as a TLV in the LDP hello messages.
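A sketch tuning both timers on a single interface, using the default values of 5 and 15 seconds (eth1 is an example interface name; the same commands can also be applied at the address-family level):
mpls ldp
address-family ipv4
interface eth1
! hello sent every 5 seconds, neighbors asked to keep the adjacency for 15 seconds
discovery hello interval 5
discovery hello holdtime 15
!
exit-address-family
!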
These commands dump various parts of ldpd.
This command dumps the various neighbors discovered. Below example shows that the local machine has an operational neighbor with its ID set to 1.1.1.1.
west-vm# show mpls ldp neighbor
AF ID State Remote Address Uptime
ipv4 1.1.1.1 OPERATIONAL 1.1.1.1 00:01:37
west-vm#
Above commands dump other neighbor information.
Above commands dump discovery information.
Above command dumps the IPv4 or IPv6 interfaces where LDP is enabled. Below output illustrates what is dumped for IPv4.
west-vm# show mpls ldp ipv4 interface
AF Interface State Uptime Hello Timers ac
ipv4 eth1 ACTIVE 00:08:35 5/15 0
ipv4 eth3 ACTIVE 00:08:35 5/15 1
Above command dumps the label bindings obtained through LDP exchanges.
west-vm# show mpls ldp ipv4 binding
AF Destination Nexthop Local Label Remote Label In Use
ipv4 1.1.1.1/32 1.1.1.1 16 imp-null yes
ipv4 2.2.2.2/32 1.1.1.1 imp-null 16 no
ipv4 10.0.2.0/24 1.1.1.1 imp-null imp-null no
ipv4 10.115.0.0/24 1.1.1.1 imp-null 17 no
ipv4 10.135.0.0/24 1.1.1.1 imp-null imp-null no
ipv4 10.200.0.0/24 1.1.1.1 17 imp-null yes
west-vm#
Enable or disable debugging messages of a given kind. KIND can be one of:
Below configuration gives a typical MPLS configuration of a device located in an MPLS backbone. LDP is enabled on two interfaces and will attempt to peer with two neighbors whose router-id is set to either 1.1.1.1 or 3.3.3.3.
mpls ldp
router-id 2.2.2.2
neighbor 1.1.1.1 password test
neighbor 3.3.3.3 password test
!
address-family ipv4
discovery transport-address 2.2.2.2
!
interface eth1
!
interface eth3
!
exit-address-family
!
Deploying LDP across a backbone is generally done in a full mesh configuration topology. LDP is typically deployed with an IGP like OSPF, which helps discover the remote IPs. Below example is an OSPF configuration extract that goes with the LDP configuration.
router ospf
ospf router-id 2.2.2.2
network 0.0.0.0/0 area 0
!
Below output shows the routing entry on the LER side. The OSPF routing entry (10.200.0.0) is associated with label entry (17), and shows that an MPLS push action will be applied to traffic sent to that destination.
north-vm# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR,
> - selected route, * - FIB route
O>* 1.1.1.1/32 [110/120] via 10.115.0.1, eth2, label 16, 00:00:15
O>* 2.2.2.2/32 [110/20] via 10.115.0.1, eth2, label implicit-null, 00:00:15
O 3.3.3.3/32 [110/10] via 0.0.0.0, loopback1 onlink, 00:01:19
C>* 3.3.3.3/32 is directly connected, loopback1, 00:01:29
O>* 10.0.2.0/24 [110/11] via 10.115.0.1, eth2, label implicit-null, 00:00:15
O 10.100.0.0/24 [110/10] is directly connected, eth1, 00:00:32
C>* 10.100.0.0/24 is directly connected, eth1, 00:00:32
O 10.115.0.0/24 [110/10] is directly connected, eth2, 00:00:25
C>* 10.115.0.0/24 is directly connected, eth2, 00:00:32
O>* 10.135.0.0/24 [110/110] via 10.115.0.1, eth2, label implicit-null, 00:00:15
O>* 10.200.0.0/24 [110/210] via 10.115.0.1, eth2, label 17, 00:00:15
north-vm#