* bonding driver issue when configured for active/backup and using ARP monitoring
@ 2020-11-30 18:05 Finer, Howard
  2020-12-04 20:43 ` Jay Vosburgh
  0 siblings, 1 reply; 6+ messages in thread
From: Finer, Howard @ 2020-11-30 18:05 UTC (permalink / raw)
  To: j.vosburgh, andy, vfalico; +Cc: netdev

We use the bonding driver in an active-backup configuration with ARP monitoring. We also use the TIPC protocol, which we run over the bond device. We are consistently seeing an issue in both the 3.16 and 4.19 kernels whereby, when the bond switches slaves, TIPC is notified of the change rather than the switch happening silently. The problem we see is that when the active slave fails, a NETDEV_CHANGE event is sent to the TIPC driver to notify it that the link is down. This causes the TIPC driver to reset its bearers and therefore breaks communication between the clustered nodes.
With some additional instrumentation in the driver, I see this in /var/log/syslog:
<6> 1 2020-11-20T18:14:19.159524+01:00 LABNBS5B kernel - - - [65818.378287] bond0: link status definitely down for interface eth0, disabling it
<6> 1 2020-11-20T18:14:19.159536+01:00 LABNBS5B kernel - - - [65818.378296] bond0: now running without any active interface!
<6> 1 2020-11-20T18:14:19.159537+01:00 LABNBS5B kernel - - - [65818.378304] bond0: bond_activebackup_arp_mon: notify_rtnl, slave state notify/slave link notify
<6> 1 2020-11-20T18:14:19.159538+01:00 LABNBS5B kernel - - - [65818.378835] netdev change bearer <eth:bond0>
<6> 1 2020-11-20T18:14:19.263523+01:00 LABNBS5B kernel - - - [65818.482384] bond0: link status definitely up for interface eth1
<6> 1 2020-11-20T18:14:19.263534+01:00 LABNBS5B kernel - - - [65818.482387] bond0: making interface eth1 the new active one
<6> 1 2020-11-20T18:14:19.263536+01:00 LABNBS5B kernel - - - [65818.482633] bond0: first active interface up!
<6> 1 2020-11-20T18:14:19.263537+01:00 LABNBS5B kernel - - - [65818.482671] netdev change bearer <eth:bond0>
<6> 1 2020-11-20T18:14:19.367523+01:00 LABNBS5B kernel - - - [65818.586228] bond0: bond_activebackup_arp_mon: call_netdevice_notifiers NETDEV_NOTIFY_PEERS

There is no issue when using MII monitoring instead of ARP monitoring: when the active slave is detected as down, the driver immediately switches to the backup, since it already sees that slave as up and ready.  But when using ARP monitoring, only one of the slaves is 'up' at a time.  So when the active slave goes down, the bonding driver sees no active slaves until it brings up the backup slave on the next call to bond_activebackup_arp_mon.  Bringing up that backup slave has to be attempted before notifying any peers of a change, or else they will see the outage; in this case it seems the should_notify_rtnl flag has to be set to false.  However, I also question whether the switch to the backup slave should happen immediately, as it does for MII monitoring, so that the backup is brought up and switched to without waiting for the next iteration.

static void bond_activebackup_arp_mon(struct bonding *bond)
{
	bool should_notify_peers = false;
	bool should_notify_rtnl = false;
	int delta_in_ticks;

	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);

	if (!bond_has_slaves(bond))
		goto re_arm;

	rcu_read_lock();

	should_notify_peers = bond_should_notify_peers(bond);

	if (bond_ab_arp_inspect(bond)) {
		rcu_read_unlock();

		/* Race avoidance with bond_close flush of workqueue */
		if (!rtnl_trylock()) {
			delta_in_ticks = 1;
			should_notify_peers = false;
			goto re_arm;
		}

		bond_ab_arp_commit(bond);

		rtnl_unlock();
		rcu_read_lock();
	}

	should_notify_rtnl = bond_ab_arp_probe(bond);
	rcu_read_unlock();

re_arm:
	if (bond->params.arp_interval)
		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);

	if (should_notify_peers || should_notify_rtnl) {
		if (!rtnl_trylock())
			return;

		if (should_notify_peers) {
			netdev_info(bond->dev, "bond_activebackup_arp_mon: call_netdevice_notifiers NETDEV_NOTIFY_PEERS\n");
			call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
						 bond->dev);
		}
		if (should_notify_rtnl) {
			netdev_info(bond->dev, "bond_activebackup_arp_mon: notify_rtnl, slave state notify/slave link notify\n");
			bond_slave_state_notify(bond);
			bond_slave_link_notify(bond);
		}

		rtnl_unlock();
	}
}
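
For illustration only, a rough sketch of the first suggestion above (untested, and the exact check is an assumption on my part): after bond_ab_arp_probe() runs, the rtnl notification could be suppressed while the bond still has no active slave, so peers are not told about a transient state that the next monitor iteration is expected to repair:

	/* Sketch only, not a tested patch: skip the slave state/link
	 * notification while the bond has no active slave.
	 */
	should_notify_rtnl = bond_ab_arp_probe(bond);
	if (!rcu_access_pointer(bond->curr_active_slave))
		should_notify_rtnl = false;
	rcu_read_unlock();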

As it currently behaves, there is no way to run TIPC over an active-backup, ARP-monitored bond device.  I suspect there are other situations/uses that would likewise have an issue with the erroneous NETDEV_CHANGE being issued.  Since TIPC (and others) have no idea what the device is, it is not possible to ignore the event, nor should it be ignored.  It therefore seems the event should not be sent in this situation.  Please confirm the analysis above and advise on a path forward, since the functionality as currently implemented is broken.

Thanks,
Howard Finer
hfiner@rbbn.com




* Re: bonding driver issue when configured for active/backup and using ARP monitoring
  2020-11-30 18:05 bonding driver issue when configured for active/backup and using ARP monitoring Finer, Howard
@ 2020-12-04 20:43 ` Jay Vosburgh
       [not found]   ` <MN2PR03MB4752855F586F7CC5811E9B67B7CE0@MN2PR03MB4752.namprd03.prod.outlook.com>
  2021-01-04 23:08   ` Finer, Howard
  0 siblings, 2 replies; 6+ messages in thread
From: Jay Vosburgh @ 2020-12-04 20:43 UTC (permalink / raw)
  To: Finer, Howard; +Cc: andy, vfalico, netdev

Finer, Howard <hfiner@rbbn.com> wrote:

>We use the bonding driver in an active-backup configuration with ARP
>monitoring. We also use the TIPC protocol which we run over the bond
>device. We are consistently seeing an issue in both the 3.16 and 4.19
>kernels whereby when the bond slave is switched TIPC is being notified of
>the change rather than it happening silently. The problem that we see is
>that when the active slave fails, a NETDEV_CHANGE event is being sent to
>the TIPC driver to notify it that the link is down. This causes the TIPC
>driver to reset its bearers and therefore break communication between the
>nodes that are clustered.
>With some additional instrumentation in the driver, I see this in
>/var/log/syslog:
><6> 1 2020-11-20T18:14:19.159524+01:00 LABNBS5B kernel - - -
>[65818.378287] bond0: link status definitely down for interface eth0,
>disabling it
><6> 1 2020-11-20T18:14:19.159536+01:00 LABNBS5B kernel - - -
>[65818.378296] bond0: now running without any active interface!
><6> 1 2020-11-20T18:14:19.159537+01:00 LABNBS5B kernel - - -
>[65818.378304] bond0: bond_activebackup_arp_mon: notify_rtnl, slave state
>notify/slave link notify
><6> 1 2020-11-20T18:14:19.159538+01:00 LABNBS5B kernel - - -
>[65818.378835] netdev change bearer <eth:bond0>
><6> 1 2020-11-20T18:14:19.263523+01:00 LABNBS5B kernel - - -
>[65818.482384] bond0: link status definitely up for interface eth1
><6> 1 2020-11-20T18:14:19.263534+01:00 LABNBS5B kernel - - -
>[65818.482387] bond0: making interface eth1 the new active one
><6> 1 2020-11-20T18:14:19.263536+01:00 LABNBS5B kernel - - -
>[65818.482633] bond0: first active interface up!
><6> 1 2020-11-20T18:14:19.263537+01:00 LABNBS5B kernel - - -
>[65818.482671] netdev change bearer <eth:bond0>
><6> 1 2020-11-20T18:14:19.367523+01:00 LABNBS5B kernel - - -
>[65818.586228] bond0: bond_activebackup_arp_mon: call_netdevice_notifiers
>NETDEV_NOTIFY_PEERS
>
>There is no issue when using MII monitoring instead of ARP monitoring
>since when the slave is detected as down, it immediately switches to the
>backup as it sees that slave as being up and ready. But when using ARP
>monitoring, only one of the slaves is 'up'. So when the active slave goes
>down, the bonding driver will see no active slaves until it brings up the
>backup slave on the next call to bond_activebackup_arp_mon. Bringing up
>that backup slave has to be attempted prior to notifying any peers of a
>change or else they will see the outage. In this case it seems the
>should_notify_rtnl flag has to be set to false. However, I also question
>if the switch to the backup slave should actually occur immediately like
>it does for MII and that the backup should be immediately 'brought
>up/switched to' without having to wait for the next iteration.

	I see what you're describing; I'm watching "ip monitor" while
doing failovers and comparing the behavior of the miimon vs the ARP
monitor.  The bond device itself goes down during the course of an ARP
failover, which doesn't happen during the miimon failover.

	This does cause some churn of even the IPv4 multicast addresses
and such, so it would be ideal if the backup interfaces could be kept
track of and switched to immediately as you suggest.

	I don't think it's simply a matter of not doing a notification,
however.  I haven't instrumented it fully yet to see the complete
behavior, but the backup interface has to be in a bonding-internal down
state, otherwise the bond_ab_arp_commit call to bond_select_active_slave
would select a new active slave, and the bond itself would not go
NO-CARRIER (which is likely where the NETDEV_CHANGE event comes from,
via linkwatch doing netdev_state_change).
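
	As an aside, the receiving end of that chain can be pictured with
a minimal, hypothetical netdevice notifier (a simplified sketch, not the
actual TIPC bearer code): a protocol registering such a notifier sees the
NETDEV_CHANGE generated via linkwatch and typically checks the carrier
state of the reporting device.

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Hypothetical example handler, not the actual TIPC code: on NETDEV_CHANGE,
 * check whether the reporting device still has carrier.
 */
static int example_netdev_event(struct notifier_block *nb,
				unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (event == NETDEV_CHANGE && !netif_carrier_ok(dev))
		pr_info("%s: carrier lost, protocol resets its bearer\n",
			dev->name);

	return NOTIFY_DONE;
}

static struct notifier_block example_netdev_notifier = {
	.notifier_call = example_netdev_event,
};

/* registered once at init: register_netdevice_notifier(&example_netdev_notifier); */

	Because such a notifier only sees the bond device itself, it cannot
tell this transient failover apart from a real loss of connectivity, which
is why the NO-CARRIER transition during an ARP-monitor failover is
disruptive.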

[...]

>As it currently behaves there is no way to run TIPC over an active-backup
>ARP-monitored bond device. I suspect there are other situations/uses that
>would likewise have an issue with the 'erroneous' NETDEV_CHANGE being
>issued. Since TIPC (and others) have no idea what the dev is, it is not
>possible to ignore the event nor should it be ignored. It therefore seems
>the event shouldn't be sent for this situation. Please confirm the
>analysis above and provide a path forward since as currently implemented
>the functionality is broken.

	As I said above, I don't think it's just about notifications.

	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com


* RE: bonding driver issue when configured for active/backup and using ARP monitoring
       [not found]   ` <MN2PR03MB4752855F586F7CC5811E9B67B7CE0@MN2PR03MB4752.namprd03.prod.outlook.com>
@ 2020-12-07 15:49     ` Finer, Howard
  0 siblings, 0 replies; 6+ messages in thread
From: Finer, Howard @ 2020-12-07 15:49 UTC (permalink / raw)
  To: Jay Vosburgh; +Cc: andy, vfalico, netdev

Thanks for confirming what I am seeing.  I agree it is not just a notification issue, and that “it would be ideal if the backup interfaces could be kept track of and switched to immediately as you suggest.”

Given that this is a defect in the driver and that an active/backup configuration with ARP monitoring is broken, is this something that would receive some attention and get fixed in the near term?

Thanks again,
Howard



* RE: bonding driver issue when configured for active/backup and using ARP monitoring
  2020-12-04 20:43 ` Jay Vosburgh
       [not found]   ` <MN2PR03MB4752855F586F7CC5811E9B67B7CE0@MN2PR03MB4752.namprd03.prod.outlook.com>
@ 2021-01-04 23:08   ` Finer, Howard
  2021-01-05  2:51     ` Jay Vosburgh
  1 sibling, 1 reply; 6+ messages in thread
From: Finer, Howard @ 2021-01-04 23:08 UTC (permalink / raw)
  To: Jay Vosburgh; +Cc: andy, vfalico, netdev

Please advise if there is any update here, and if not how we can go about getting an update to the driver to rectify the issue.

Thanks,
Howard



* Re: bonding driver issue when configured for active/backup and using ARP monitoring
  2021-01-04 23:08   ` Finer, Howard
@ 2021-01-05  2:51     ` Jay Vosburgh
  2021-01-05 16:49       ` Finer, Howard
  0 siblings, 1 reply; 6+ messages in thread
From: Jay Vosburgh @ 2021-01-05  2:51 UTC (permalink / raw)
  To: Finer, Howard; +Cc: andy, vfalico, netdev

Finer, Howard <hfiner@rbbn.com> wrote:

>Please advise if there is any update here, and if not how we can go about
>getting an update to the driver to rectify the issue.

	As it happens, I've been looking at this today, and have a
couple of questions about your configuration:

	- Is there an IP address on the same subnet as the arp_ip_target
configured directly on the bond, or on a VLAN logically above the bond?

	- Is the "arp_ip_target" address reachable via an interface
other than the bond (or VLAN above it)?  This can be checked via "ip
route get [arp_ip_target]", i.e., if the target address for bond0 is
1.2.3.4, the command "ip route get 1.2.3.4" will return something like

1.2.3.4 dev bond0 src [...]

	If an interface other than bond0 (or a VLAN above it) is listed,
then there's a path to the arp_ip_target that doesn't go through the
bond.

	The ARP monitor logic can only handle a limited set of
configurations, so if your configuration is outside of that it can
misbehave in some ways.

	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com


* RE: bonding driver issue when configured for active/backup and using ARP monitoring
  2021-01-05  2:51     ` Jay Vosburgh
@ 2021-01-05 16:49       ` Finer, Howard
  0 siblings, 0 replies; 6+ messages in thread
From: Finer, Howard @ 2021-01-05 16:49 UTC (permalink / raw)
  To: Jay Vosburgh; +Cc: andy, vfalico, netdev

Thanks Jay.

This is a dedicated link between our two machines.  The only thing provisioned on it is the bond device and the IP for that bond device.  There are no other addresses on it and no VLANs above it.

The arp_ip_target is not reachable via any other interface.  For example:
  ip route get 169.254.88.1
      169.254.88.1 dev bond0  src 169.254.99.1


Thanks,
Howard


