All of lore.kernel.org
* [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
@ 2011-10-19  8:17 Mitsuo Hayasaka
  2011-10-19 18:01 ` Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: Mitsuo Hayasaka @ 2011-10-19  8:17 UTC (permalink / raw)
  To: Jay Vosburgh, Andy Gospodarek
  Cc: netdev, linux-kernel, yrl.pp-manager.tt, Mitsuo Hayasaka,
	Jay Vosburgh, Andy Gospodarek, WANG Cong

bond_close() calls cancel_delayed_work() to cancel its delayed works.
cancel_delayed_work(), however, cannot cancel a work item that has already
been queued on the workqueue. bond_open() then reinitializes work->data
while process_one_work() dereferences get_work_cwq(work)->wq->flags;
since get_work_cwq() returns NULL once work->data has been reinitialized,
a NULL pointer dereference panic occurs.

This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
in bond_close(). It cancels the delayed timer and waits for the work to
finish executing, which avoids the NULL pointer dereference caused by
process_one_work() running in parallel with the reinitialization done in
bond_open().

Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Jay Vosburgh <fubar@us.ibm.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
---

 drivers/net/bonding/bond_main.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index de3d351..a4353f9 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -3504,27 +3504,27 @@ static int bond_close(struct net_device *bond_dev)
 	write_unlock_bh(&bond->lock);
 
 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
-		cancel_delayed_work(&bond->mii_work);
+		flush_delayed_work_sync(&bond->mii_work);
 	}
 
 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
-		cancel_delayed_work(&bond->arp_work);
+		flush_delayed_work_sync(&bond->arp_work);
 	}
 
 	switch (bond->params.mode) {
 	case BOND_MODE_8023AD:
-		cancel_delayed_work(&bond->ad_work);
+		flush_delayed_work_sync(&bond->ad_work);
 		break;
 	case BOND_MODE_TLB:
 	case BOND_MODE_ALB:
-		cancel_delayed_work(&bond->alb_work);
+		flush_delayed_work_sync(&bond->alb_work);
 		break;
 	default:
 		break;
 	}
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		flush_delayed_work_sync(&bond->mcast_work);
 
 	if (bond_is_lb(bond)) {
 		/* Must be called only after all


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-19  8:17 [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close Mitsuo Hayasaka
@ 2011-10-19 18:01 ` Jay Vosburgh
  2011-10-19 18:41   ` Stephen Hemminger
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-19 18:01 UTC (permalink / raw)
  To: Mitsuo Hayasaka
  Cc: Andy Gospodarek, netdev, linux-kernel, yrl.pp-manager.tt, WANG Cong

Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:

>The bond_close() calls cancel_delayed_work() to cancel delayed works.
>It, however, cannot cancel works that were already queued in workqueue.
>The bond_open() initializes work->data, and proccess_one_work() refers
>get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>work->data has been initialized. Thus, a panic occurs.
>
>This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>in bond_close(). It cancels delayed timer and waits for work to finish
>execution. So, it can avoid the null pointer dereference due to the
>parallel executions of proccess_one_work() and initializing proccess
>of bond_open().

	I'm setting up to test this.  I have a dim recollection that we
tried this some years ago, and there was a different deadlock that
manifested through the flush path.  Perhaps changes since then have
removed that problem.

	-J

>Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
>Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
>Cc: Jay Vosburgh <fubar@us.ibm.com>
>Cc: Andy Gospodarek <andy@greyhouse.net>
>Cc: WANG Cong <xiyou.wangcong@gmail.com>
>---
>
> drivers/net/bonding/bond_main.c |   10 +++++-----
> 1 files changed, 5 insertions(+), 5 deletions(-)
>
>diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>index de3d351..a4353f9 100644
>--- a/drivers/net/bonding/bond_main.c
>+++ b/drivers/net/bonding/bond_main.c
>@@ -3504,27 +3504,27 @@ static int bond_close(struct net_device *bond_dev)
> 	write_unlock_bh(&bond->lock);
>
> 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
>-		cancel_delayed_work(&bond->mii_work);
>+		flush_delayed_work_sync(&bond->mii_work);
> 	}
>
> 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
>-		cancel_delayed_work(&bond->arp_work);
>+		flush_delayed_work_sync(&bond->arp_work);
> 	}
>
> 	switch (bond->params.mode) {
> 	case BOND_MODE_8023AD:
>-		cancel_delayed_work(&bond->ad_work);
>+		flush_delayed_work_sync(&bond->ad_work);
> 		break;
> 	case BOND_MODE_TLB:
> 	case BOND_MODE_ALB:
>-		cancel_delayed_work(&bond->alb_work);
>+		flush_delayed_work_sync(&bond->alb_work);
> 		break;
> 	default:
> 		break;
> 	}
>
> 	if (delayed_work_pending(&bond->mcast_work))
>-		cancel_delayed_work(&bond->mcast_work);
>+		flush_delayed_work_sync(&bond->mcast_work);
>
> 	if (bond_is_lb(bond)) {
> 		/* Must be called only after all
>

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-19 18:01 ` Jay Vosburgh
@ 2011-10-19 18:41   ` Stephen Hemminger
  2011-10-19 19:09     ` Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: Stephen Hemminger @ 2011-10-19 18:41 UTC (permalink / raw)
  To: Jay Vosburgh
  Cc: Mitsuo Hayasaka, Andy Gospodarek, netdev, linux-kernel,
	yrl.pp-manager.tt, WANG Cong

On Wed, 19 Oct 2011 11:01:02 -0700
Jay Vosburgh <fubar@us.ibm.com> wrote:

> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
> 
> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
> >It, however, cannot cancel works that were already queued in workqueue.
> >The bond_open() initializes work->data, and proccess_one_work() refers
> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
> >work->data has been initialized. Thus, a panic occurs.
> >
> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
> >in bond_close(). It cancels delayed timer and waits for work to finish
> >execution. So, it can avoid the null pointer dereference due to the
> >parallel executions of proccess_one_work() and initializing proccess
> >of bond_open().
> 
> 	I'm setting up to test this.  I have a dim recollection that we
> tried this some years ago, and there was a different deadlock that
> manifested through the flush path.  Perhaps changes since then have
> removed that problem.
> 
> 	-J

Won't this deadlock on RTNL?  The problem is that:

   CPU0                            CPU1
  rtnl_lock
      bond_close
                                 delayed_work
                                   mii_work
                                     read_lock(bond->lock);
                                     read_unlock(bond->lock);
                                     rtnl_lock... waiting for CPU0
      flush_delayed_work_sync
          waiting for delayed_work to finish...
              
                                    


* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-19 18:41   ` Stephen Hemminger
@ 2011-10-19 19:09     ` Jay Vosburgh
  2011-10-21  5:45       ` Américo Wang
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-19 19:09 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Mitsuo Hayasaka, Andy Gospodarek, netdev, linux-kernel,
	yrl.pp-manager.tt, WANG Cong

Stephen Hemminger <shemminger@vyatta.com> wrote:

>On Wed, 19 Oct 2011 11:01:02 -0700
>Jay Vosburgh <fubar@us.ibm.com> wrote:
>
>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>> 
>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>> >It, however, cannot cancel works that were already queued in workqueue.
>> >The bond_open() initializes work->data, and proccess_one_work() refers
>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>> >work->data has been initialized. Thus, a panic occurs.
>> >
>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>> >in bond_close(). It cancels delayed timer and waits for work to finish
>> >execution. So, it can avoid the null pointer dereference due to the
>> >parallel executions of proccess_one_work() and initializing proccess
>> >of bond_open().
>> 
>> 	I'm setting up to test this.  I have a dim recollection that we
>> tried this some years ago, and there was a different deadlock that
>> manifested through the flush path.  Perhaps changes since then have
>> removed that problem.
>> 
>> 	-J
>
>Won't this deadlock on RTNL.  The problem is that:
>
>   CPU0                            CPU1
>  rtnl_lock
>      bond_close
>                                 delayed_work
>                                   mii_work
>                                     read_lock(bond->lock);
>                                     read_unlock(bond->lock);
>                                     rtnl_lock... waiting for CPU0
>      flush_delayed_work_sync
>          waiting for delayed_work to finish...

	Yah, that was it.  We discussed this a couple of years ago in
regards to a similar patch:

http://lists.openwall.net/netdev/2009/12/17/3

	The short version is that we could rework the rtnl_lock inside
the monitors to be conditional and retry on failure (where "retry" means
"reschedule the work and try again later," not "spin retrying on rtnl").
That should permit the use of flush or cancel to terminate the work
items.

	I'll fiddle with it some later today and see if that seems
viable.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-19 19:09     ` Jay Vosburgh
@ 2011-10-21  5:45       ` Américo Wang
  2011-10-21  6:26         ` Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: Américo Wang @ 2011-10-21  5:45 UTC (permalink / raw)
  To: Jay Vosburgh
  Cc: Stephen Hemminger, Mitsuo Hayasaka, Andy Gospodarek, netdev,
	linux-kernel, yrl.pp-manager.tt

On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
> Stephen Hemminger <shemminger@vyatta.com> wrote:
>
>>On Wed, 19 Oct 2011 11:01:02 -0700
>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>
>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>
>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>> >It, however, cannot cancel works that were already queued in workqueue.
>>> >The bond_open() initializes work->data, and proccess_one_work() refers
>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>>> >work->data has been initialized. Thus, a panic occurs.
>>> >
>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>> >in bond_close(). It cancels delayed timer and waits for work to finish
>>> >execution. So, it can avoid the null pointer dereference due to the
>>> >parallel executions of proccess_one_work() and initializing proccess
>>> >of bond_open().
>>>
>>>      I'm setting up to test this.  I have a dim recollection that we
>>> tried this some years ago, and there was a different deadlock that
>>> manifested through the flush path.  Perhaps changes since then have
>>> removed that problem.
>>>
>>>      -J
>>
>>Won't this deadlock on RTNL.  The problem is that:
>>
>>   CPU0                            CPU1
>>  rtnl_lock
>>      bond_close
>>                                 delayed_work
>>                                   mii_work
>>                                     read_lock(bond->lock);
>>                                     read_unlock(bond->lock);
>>                                     rtnl_lock... waiting for CPU0
>>      flush_delayed_work_sync
>>          waiting for delayed_work to finish...
>
>        Yah, that was it.  We discussed this a couple of years ago in
> regards to a similar patch:
>
> http://lists.openwall.net/netdev/2009/12/17/3
>
>        The short version is that we could rework the rtnl_lock inside
> the montiors to be conditional and retry on failure (where "retry" means
> "reschedule the work and try again later," not "spin retrying on rtnl").
> That should permit the use of flush or cancel to terminate the work
> items.

Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
still queue the pending delayed work and wait for it to be finished?

Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
inside bond_close()?

Thanks.


* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-21  5:45       ` Américo Wang
@ 2011-10-21  6:26         ` Jay Vosburgh
  2011-10-22  0:59           ` Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-21  6:26 UTC (permalink / raw)
  To: Américo Wang
  Cc: Stephen Hemminger, Mitsuo Hayasaka, Andy Gospodarek, netdev,
	linux-kernel, yrl.pp-manager.tt

Américo Wang <xiyou.wangcong@gmail.com> wrote:

>On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>
>>>On Wed, 19 Oct 2011 11:01:02 -0700
>>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>
>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>
>>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>> >It, however, cannot cancel works that were already queued in workqueue.
>>>> >The bond_open() initializes work->data, and proccess_one_work() refers
>>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>>>> >work->data has been initialized. Thus, a panic occurs.
>>>> >
>>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>> >in bond_close(). It cancels delayed timer and waits for work to finish
>>>> >execution. So, it can avoid the null pointer dereference due to the
>>>> >parallel executions of proccess_one_work() and initializing proccess
>>>> >of bond_open().
>>>>
>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>> tried this some years ago, and there was a different deadlock that
>>>> manifested through the flush path.  Perhaps changes since then have
>>>> removed that problem.
>>>>
>>>>      -J
>>>
>>>Won't this deadlock on RTNL.  The problem is that:
>>>
>>>   CPU0                            CPU1
>>>  rtnl_lock
>>>      bond_close
>>>                                 delayed_work
>>>                                   mii_work
>>>                                     read_lock(bond->lock);
>>>                                     read_unlock(bond->lock);
>>>                                     rtnl_lock... waiting for CPU0
>>>      flush_delayed_work_sync
>>>          waiting for delayed_work to finish...
>>
>>        Yah, that was it.  We discussed this a couple of years ago in
>> regards to a similar patch:
>>
>> http://lists.openwall.net/netdev/2009/12/17/3
>>
>>        The short version is that we could rework the rtnl_lock inside
>> the montiors to be conditional and retry on failure (where "retry" means
>> "reschedule the work and try again later," not "spin retrying on rtnl").
>> That should permit the use of flush or cancel to terminate the work
>> items.
>
>Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>still queue the pending delayed work and wait for it to be finished?

	Yes, it does.  The original patch wants to use flush instead of
cancel to wait for the work to finish, because there's evidently a
possibility of getting back into bond_open before the work item
executes, and bond_open would reinitialize the work queue and corrupt
the queued work item.

	The original patch series, and recipe for destruction, is here:

	http://www.spinics.net/lists/netdev/msg176382.html

	I've been unable to reproduce the work queue panic locally,
although it sounds plausible.

	Mitsuo: can you provide the precise bonding configuration you're
using to induce the problem?  Driver options, number and type of slaves,
etc.

>Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>inside bond_close()?

	We don't need RTNL for cancel/flush.  However, bond_close is an
ndo_stop operation, and is called in the dev_close path, which always
occurs under RTNL.  The mii / arp monitor work functions separately
acquire RTNL if they need to perform various failover related
operations.

	I'm working on a patch that should resolve the mii / arp monitor
RTNL problem as I described above (if rtnl_trylock fails, punt and
reschedule the work).  I need to rearrange the netdev_bonding_change
stuff a bit as well, since it acquires RTNL separately.

	Once these changes are made to mii / arp monitor, then
bond_close can call flush instead of cancel, which should eliminate the
original problem described at the top.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-21  6:26         ` Jay Vosburgh
@ 2011-10-22  0:59           ` Jay Vosburgh
  2011-10-24  4:00             ` HAYASAKA Mitsuo
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-22  0:59 UTC (permalink / raw)
  To: netdev
  Cc: Américo Wang,
	Stephen Hemminger, Mitsuo Hayasaka, Andy Gospodarek,
	linux-kernel, yrl.pp-manager.tt

Jay Vosburgh <fubar@us.ibm.com> wrote:

>Américo Wang <xiyou.wangcong@gmail.com> wrote:
>
>>On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>>
>>>>On Wed, 19 Oct 2011 11:01:02 -0700
>>>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>>
>>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>>
>>>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>>> >It, however, cannot cancel works that were already queued in workqueue.
>>>>> >The bond_open() initializes work->data, and proccess_one_work() refers
>>>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>>>>> >work->data has been initialized. Thus, a panic occurs.
>>>>> >
>>>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>>> >in bond_close(). It cancels delayed timer and waits for work to finish
>>>>> >execution. So, it can avoid the null pointer dereference due to the
>>>>> >parallel executions of proccess_one_work() and initializing proccess
>>>>> >of bond_open().
>>>>>
>>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>>> tried this some years ago, and there was a different deadlock that
>>>>> manifested through the flush path.  Perhaps changes since then have
>>>>> removed that problem.
>>>>>
>>>>>      -J
>>>>
>>>>Won't this deadlock on RTNL.  The problem is that:
>>>>
>>>>   CPU0                            CPU1
>>>>  rtnl_lock
>>>>      bond_close
>>>>                                 delayed_work
>>>>                                   mii_work
>>>>                                     read_lock(bond->lock);
>>>>                                     read_unlock(bond->lock);
>>>>                                     rtnl_lock... waiting for CPU0
>>>>      flush_delayed_work_sync
>>>>          waiting for delayed_work to finish...
>>>
>>>        Yah, that was it.  We discussed this a couple of years ago in
>>> regards to a similar patch:
>>>
>>> http://lists.openwall.net/netdev/2009/12/17/3
>>>
>>>        The short version is that we could rework the rtnl_lock inside
>>> the montiors to be conditional and retry on failure (where "retry" means
>>> "reschedule the work and try again later," not "spin retrying on rtnl").
>>> That should permit the use of flush or cancel to terminate the work
>>> items.
>>
>>Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>>still queue the pending delayed work and wait for it to be finished?
>
>	Yes, it does.  The original patch wants to use flush instead of
>cancel to wait for the work to finish, because there's evidently a
>possibility of getting back into bond_open before the work item
>executes, and bond_open would reinitialize the work queue and corrupt
>the queued work item.
>
>	The original patch series, and recipe for destruction, is here:
>
>	http://www.spinics.net/lists/netdev/msg176382.html
>
>	I've been unable to reproduce the work queue panic locally,
>although it sounds plausible.
>
>	Mitsuo: can you provide the precise bonding configuration you're
>using to induce the problem?  Driver options, number and type of slaves,
>etc.
>
>>Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>>inside bond_close()?
>
>	We don't need RTNL for cancel/flush.  However, bond_close is an
>ndo_stop operation, and is called in the dev_close path, which always
>occurs under RTNL.  The mii / arp monitor work functions separately
>acquire RTNL if they need to perform various failover related
>operations.
>
>	I'm working on a patch that should resolve the mii / arp monitor
>RTNL problem as I described above (if rtnl_trylock fails, punt and
>reschedule the work).  I need to rearrange the netdev_bonding_change
>stuff a bit as well, since it acquires RTNL separately.
>
>	Once these changes are made to mii / arp monitor, then
>bond_close can call flush instead of cancel, which should eliminate the
>original problem described at the top.

	Just an update: there are three functions that may deadlock if
the cancel work calls are changed to flush_sync.  There are two
rtnl_lock calls in each of the bond_mii_monitor and
bond_activebackup_arp_mon functions, and one more in the
bond_alb_monitor.

	Still testing to make sure I haven't missed anything, and I
still haven't been able to reproduce Mitsuo's original failure.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-22  0:59           ` Jay Vosburgh
@ 2011-10-24  4:00             ` HAYASAKA Mitsuo
  2011-10-26 17:31               ` Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: HAYASAKA Mitsuo @ 2011-10-24  4:00 UTC (permalink / raw)
  To: Jay Vosburgh
  Cc: netdev, Américo Wang, Stephen Hemminger, Andy Gospodarek,
	linux-kernel, yrl.pp-manager.tt

(2011/10/22 9:59), Jay Vosburgh wrote:
> Jay Vosburgh <fubar@us.ibm.com> wrote:
> 
>> Américo Wang <xiyou.wangcong@gmail.com> wrote:
>>
>>> On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>>>
>>>>> On Wed, 19 Oct 2011 11:01:02 -0700
>>>>> Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>>>
>>>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>>>
>>>>>>> The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>>>>> It, however, cannot cancel works that were already queued in workqueue.
>>>>>>> The bond_open() initializes work->data, and proccess_one_work() refers
>>>>>>> get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
>>>>>>> work->data has been initialized. Thus, a panic occurs.
>>>>>>>
>>>>>>> This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>>>>> in bond_close(). It cancels delayed timer and waits for work to finish
>>>>>>> execution. So, it can avoid the null pointer dereference due to the
>>>>>>> parallel executions of proccess_one_work() and initializing proccess
>>>>>>> of bond_open().
>>>>>>
>>>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>>>> tried this some years ago, and there was a different deadlock that
>>>>>> manifested through the flush path.  Perhaps changes since then have
>>>>>> removed that problem.
>>>>>>
>>>>>>      -J
>>>>>
>>>>> Won't this deadlock on RTNL.  The problem is that:
>>>>>
>>>>>   CPU0                            CPU1
>>>>>  rtnl_lock
>>>>>      bond_close
>>>>>                                 delayed_work
>>>>>                                   mii_work
>>>>>                                     read_lock(bond->lock);
>>>>>                                     read_unlock(bond->lock);
>>>>>                                     rtnl_lock... waiting for CPU0
>>>>>      flush_delayed_work_sync
>>>>>          waiting for delayed_work to finish...
>>>>
>>>>        Yah, that was it.  We discussed this a couple of years ago in
>>>> regards to a similar patch:
>>>>
>>>> http://lists.openwall.net/netdev/2009/12/17/3
>>>>
>>>>        The short version is that we could rework the rtnl_lock inside
>>>> the montiors to be conditional and retry on failure (where "retry" means
>>>> "reschedule the work and try again later," not "spin retrying on rtnl").
>>>> That should permit the use of flush or cancel to terminate the work
>>>> items.
>>>
>>> Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>>> still queue the pending delayed work and wait for it to be finished?
>>
>> 	Yes, it does.  The original patch wants to use flush instead of
>> cancel to wait for the work to finish, because there's evidently a
>> possibility of getting back into bond_open before the work item
>> executes, and bond_open would reinitialize the work queue and corrupt
>> the queued work item.
>>
>> 	The original patch series, and recipe for destruction, is here:
>>
>> 	http://www.spinics.net/lists/netdev/msg176382.html
>>
>> 	I've been unable to reproduce the work queue panic locally,
>> although it sounds plausible.
>>
>> 	Mitsuo: can you provide the precise bonding configuration you're
>> using to induce the problem?  Driver options, number and type of slaves,
>> etc.
>>
>>> Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>>> inside bond_close()?
>>
>> 	We don't need RTNL for cancel/flush.  However, bond_close is an
>> ndo_stop operation, and is called in the dev_close path, which always
>> occurs under RTNL.  The mii / arp monitor work functions separately
>> acquire RTNL if they need to perform various failover related
>> operations.
>>
>> 	I'm working on a patch that should resolve the mii / arp monitor
>> RTNL problem as I described above (if rtnl_trylock fails, punt and
>> reschedule the work).  I need to rearrange the netdev_bonding_change
>> stuff a bit as well, since it acquires RTNL separately.
>>
>> 	Once these changes are made to mii / arp monitor, then
>> bond_close can call flush instead of cancel, which should eliminate the
>> original problem described at the top.
> 
> 	Just an update: there are three functions that may deadlock if
> the cancel work calls are changed to flush_sync.  There are two
> rtnl_lock calls in each of the bond_mii_monitor and
> bond_activebackup_arp_mon functions, and one more in the
> bond_alb_monitor.
> 
> 	Still testing to make sure I haven't missed anything, and I
> still haven't been able to reproduce Mitsuo's original failure.


The miimon interval was set to 1 to make this bug easier to reproduce,
and the 802.3ad mode was used. Then, I executed the following commands.

# while true; do ifconfig bond0 down; done &
# while true; do ifconfig bond0 up; done &

This bug occurs only rarely, since it is a severe timing problem.
I found that it is easier to reproduce when using a guest OS.

For example, it took one to three days to reproduce it on the host OS,
but only some hours on a guest OS.

Thanks.
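Put together as a script, the recipe above would look roughly like this
(a sketch; the enslaving step and the slave interface names are
assumptions not given in the report, and only mode=802.3ad and miimon=1
come from it):

```shell
#!/bin/sh
# Assumed reproduction setup for the bond_close/bond_open race.
# eth0/eth1 are placeholder slave names.
modprobe bonding mode=802.3ad miimon=1
ifconfig bond0 up
ifenslave bond0 eth0 eth1

# Race bond_close() against bond_open() until the panic triggers.
while true; do ifconfig bond0 down; done &
while true; do ifconfig bond0 up; done &
```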


> 
> 	-J
> 
> ---
> 	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
> 
> 



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-24  4:00             ` HAYASAKA Mitsuo
@ 2011-10-26 17:31               ` Jay Vosburgh
  2011-10-28  1:52                 ` HAYASAKA Mitsuo
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-26 17:31 UTC (permalink / raw)
  To: HAYASAKA Mitsuo
  Cc: netdev, Américo Wang, Stephen Hemminger, Andy Gospodarek,
	linux-kernel, yrl.pp-manager.tt

HAYASAKA Mitsuo <mitsuo.hayasaka.hu@hitachi.com> wrote:
[...]
>The interval of mii_mon was set to 1 to reproduce this bug easily and 
>the 802.3ad mode was used. Then, I executed the following command.
>
># while true; do ifconfig bond0 down; done &
># while true; do ifconfig bond0 up; done &
>
>This bug rarely occurs since it is the severe timing problem.
>I found that it is more easily to reproduce this bug when using guest OS.
>
>For example, it took one to three days for me to reproduce it on host OS, 
>but some hours on guest OS.

	Could you test this patch and see if it resolves the problem?

	This patch does a few things:

	All of the monitor functions that run on work queues are
modified to never unconditionally acquire RTNL; all will reschedule the
work and return if rtnl_trylock fails.  This covers bond_mii_monitor,
bond_activebackup_arp_mon, and bond_alb_monitor.

	The "clear out the work queues" calls in bond_close and
bond_uninit now call cancel_delayed_work_sync, which should either
delete a pending work item, or wait for an executing item to complete.
I chose cancel_ over the original patch's flush_ because we just want
the work queue stopped.  We don't need to have any pending items execute
if they're not already running.

	Also in reference to the previous, I'm not sure if we still need
to check for delayed_work_pending, but I've left those checks in place.

	Remove the "kill_timers" field and all references to it.  If
cancel_delayed_work_sync is safe to use, we do not need an extra
sentinel.

	Lastly, for testing purposes only, the bond_alb_monitor in this
patch includes an unconditional call to rtnl_trylock(); this is an
artificial way to make the race in that function easier to test for,
because the real race is very difficult to hit.

	This patch is against net-next as of yesterday.

	Comments?

	-J


diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index b33c099..0ae0d7c 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -2110,9 +2110,6 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	//check if there are any slaves
 	if (bond->slave_cnt == 0)
 		goto re_arm;
@@ -2161,9 +2158,8 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
 	}
 
 re_arm:
-	if (!bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
-out:
+	queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
+
 	read_unlock(&bond->lock);
 }
 
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index d4fbd2e..13d1bf9 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -1343,10 +1343,6 @@ void bond_alb_monitor(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers) {
-		goto out;
-	}
-
 	if (bond->slave_cnt == 0) {
 		bond_info->tx_rebalance_counter = 0;
 		bond_info->lp_counter = 0;
@@ -1394,6 +1390,14 @@ void bond_alb_monitor(struct work_struct *work)
 		bond_info->tx_rebalance_counter = 0;
 	}
 
+	/* XXX - unconditional attempt at RTNL for testing purposes, as
+	 * normal case, below, is difficult to induce.
+	 */
+	read_unlock(&bond->lock);
+	if (rtnl_trylock())
+		rtnl_unlock();
+	read_lock(&bond->lock);
+
 	/* handle rlb stuff */
 	if (bond_info->rlb_enabled) {
 		if (bond_info->primary_is_promisc &&
@@ -1404,7 +1408,10 @@ void bond_alb_monitor(struct work_struct *work)
 			 * nothing else.
 			 */
 			read_unlock(&bond->lock);
-			rtnl_lock();
+			if (!rtnl_trylock()) {
+				read_lock(&bond->lock);
+				goto re_arm;
+			}
 
 			bond_info->rlb_promisc_timeout_counter = 0;
 
@@ -1440,9 +1447,8 @@ void bond_alb_monitor(struct work_struct *work)
 	}
 
 re_arm:
-	if (!bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
-out:
+	queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
+
 	read_unlock(&bond->lock);
 }
 
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 71efff3..e6fefff 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -774,9 +774,6 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	/* rejoin all groups on bond device */
 	__bond_resend_igmp_join_requests(bond->dev);
 
@@ -790,9 +787,9 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
 			__bond_resend_igmp_join_requests(vlan_dev);
 	}
 
-	if ((--bond->igmp_retrans > 0) && !bond->kill_timers)
+	if (--bond->igmp_retrans > 0)
 		queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5);
-out:
+
 	read_unlock(&bond->lock);
 }
 
@@ -2518,19 +2515,26 @@ void bond_mii_monitor(struct work_struct *work)
 	struct bonding *bond = container_of(work, struct bonding,
 					    mii_work.work);
 	bool should_notify_peers = false;
+	unsigned long delay;
 
 	read_lock(&bond->lock);
-	if (bond->kill_timers)
-		goto out;
+
+	delay = msecs_to_jiffies(bond->params.miimon);
 
 	if (bond->slave_cnt == 0)
 		goto re_arm;
 
-	should_notify_peers = bond_should_notify_peers(bond);
-
 	if (bond_miimon_inspect(bond)) {
 		read_unlock(&bond->lock);
-		rtnl_lock();
+
+		/* Race avoidance with bond_close flush of workqueue */
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			delay = 1;
+			should_notify_peers = false;
+			goto re_arm;
+		}
+
 		read_lock(&bond->lock);
 
 		bond_miimon_commit(bond);
@@ -2540,15 +2544,21 @@ void bond_mii_monitor(struct work_struct *work)
 		read_lock(&bond->lock);
 	}
 
+	should_notify_peers = bond_should_notify_peers(bond);
+
 re_arm:
-	if (bond->params.miimon && !bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->mii_work,
-				   msecs_to_jiffies(bond->params.miimon));
-out:
+	if (bond->params.miimon)
+		queue_delayed_work(bond->wq, &bond->mii_work, delay);
+
 	read_unlock(&bond->lock);
 
 	if (should_notify_peers) {
-		rtnl_lock();
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			bond->send_peer_notif++;
+			read_unlock(&bond->lock);
+			return;
+		}
 		netdev_bonding_change(bond->dev, NETDEV_NOTIFY_PEERS);
 		rtnl_unlock();
 	}
@@ -2790,9 +2800,6 @@ void bond_loadbalance_arp_mon(struct work_struct *work)
 
 	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
 
-	if (bond->kill_timers)
-		goto out;
-
 	if (bond->slave_cnt == 0)
 		goto re_arm;
 
@@ -2889,9 +2896,9 @@ void bond_loadbalance_arp_mon(struct work_struct *work)
 	}
 
 re_arm:
-	if (bond->params.arp_interval && !bond->kill_timers)
+	if (bond->params.arp_interval)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
-out:
+
 	read_unlock(&bond->lock);
 }
 
@@ -3132,9 +3139,6 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
 
 	if (bond->slave_cnt == 0)
@@ -3144,7 +3148,15 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 
 	if (bond_ab_arp_inspect(bond, delta_in_ticks)) {
 		read_unlock(&bond->lock);
-		rtnl_lock();
+
+		/* Race avoidance with bond_close flush of workqueue */
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			delta_in_ticks = 1;
+			should_notify_peers = false;
+			goto re_arm;
+		}
+
 		read_lock(&bond->lock);
 
 		bond_ab_arp_commit(bond, delta_in_ticks);
@@ -3157,13 +3169,18 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 	bond_ab_arp_probe(bond);
 
 re_arm:
-	if (bond->params.arp_interval && !bond->kill_timers)
+	if (bond->params.arp_interval)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
-out:
+
 	read_unlock(&bond->lock);
 
 	if (should_notify_peers) {
-		rtnl_lock();
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			bond->send_peer_notif++;
+			read_unlock(&bond->lock);
+			return;
+		}
 		netdev_bonding_change(bond->dev, NETDEV_NOTIFY_PEERS);
 		rtnl_unlock();
 	}
@@ -3425,8 +3442,6 @@ static int bond_open(struct net_device *bond_dev)
 	struct slave *slave;
 	int i;
 
-	bond->kill_timers = 0;
-
 	/* reset slave->backup and slave->inactive */
 	read_lock(&bond->lock);
 	if (bond->slave_cnt > 0) {
@@ -3495,33 +3510,30 @@ static int bond_close(struct net_device *bond_dev)
 
 	bond->send_peer_notif = 0;
 
-	/* signal timers not to re-arm */
-	bond->kill_timers = 1;
-
 	write_unlock_bh(&bond->lock);
 
 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
-		cancel_delayed_work(&bond->mii_work);
+		cancel_delayed_work_sync(&bond->mii_work);
 	}
 
 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
-		cancel_delayed_work(&bond->arp_work);
+		cancel_delayed_work_sync(&bond->arp_work);
 	}
 
 	switch (bond->params.mode) {
 	case BOND_MODE_8023AD:
-		cancel_delayed_work(&bond->ad_work);
+		cancel_delayed_work_sync(&bond->ad_work);
 		break;
 	case BOND_MODE_TLB:
 	case BOND_MODE_ALB:
-		cancel_delayed_work(&bond->alb_work);
+		cancel_delayed_work_sync(&bond->alb_work);
 		break;
 	default:
 		break;
 	}
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		cancel_delayed_work_sync(&bond->mcast_work);
 
 	if (bond_is_lb(bond)) {
 		/* Must be called only after all
@@ -4368,26 +4380,22 @@ static void bond_setup(struct net_device *bond_dev)
 
 static void bond_work_cancel_all(struct bonding *bond)
 {
-	write_lock_bh(&bond->lock);
-	bond->kill_timers = 1;
-	write_unlock_bh(&bond->lock);
-
 	if (bond->params.miimon && delayed_work_pending(&bond->mii_work))
-		cancel_delayed_work(&bond->mii_work);
+		cancel_delayed_work_sync(&bond->mii_work);
 
 	if (bond->params.arp_interval && delayed_work_pending(&bond->arp_work))
-		cancel_delayed_work(&bond->arp_work);
+		cancel_delayed_work_sync(&bond->arp_work);
 
 	if (bond->params.mode == BOND_MODE_ALB &&
 	    delayed_work_pending(&bond->alb_work))
-		cancel_delayed_work(&bond->alb_work);
+		cancel_delayed_work_sync(&bond->alb_work);
 
 	if (bond->params.mode == BOND_MODE_8023AD &&
 	    delayed_work_pending(&bond->ad_work))
-		cancel_delayed_work(&bond->ad_work);
+		cancel_delayed_work_sync(&bond->ad_work);
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		cancel_delayed_work_sync(&bond->mcast_work);
 }
 
 /*
diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h
index 82fec5f..1aecc37 100644
--- a/drivers/net/bonding/bonding.h
+++ b/drivers/net/bonding/bonding.h
@@ -222,7 +222,6 @@ struct bonding {
 			       struct slave *);
 	rwlock_t lock;
 	rwlock_t curr_slave_lock;
-	s8       kill_timers;
 	u8	 send_peer_notif;
 	s8	 setup_by_slave;
 	s8       igmp_retrans;


---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-26 17:31               ` Jay Vosburgh
@ 2011-10-28  1:52                 ` HAYASAKA Mitsuo
  2011-10-28  3:15                   ` David Miller
  0 siblings, 1 reply; 13+ messages in thread
From: HAYASAKA Mitsuo @ 2011-10-28  1:52 UTC (permalink / raw)
  To: Jay Vosburgh
  Cc: netdev, Américo Wang, Stephen Hemminger, Andy Gospodarek,
	linux-kernel, yrl.pp-manager.tt

Hi Jay,

I checked your patch, and no problems have been observed for
almost one day, although I could reproduce the BUG on the latest
net-next without it.

I think this patch works well.

Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>


(2011/10/27 2:31), Jay Vosburgh wrote:
> HAYASAKA Mitsuo <mitsuo.hayasaka.hu@hitachi.com> wrote:
> [...]
>> The interval of mii_mon was set to 1 to reproduce this bug easily and 
>> the 802.3ad mode was used. Then, I executed the following command.
>>
>> # while true; do ifconfig bond0 down; done &
>> # while true; do ifconfig bond0 up; done &
>>
>> This bug rarely occurs since it is a severe timing problem.
>> I found that it is easier to reproduce this bug when using a guest OS.
>>
>> For example, it took one to three days for me to reproduce it on the
>> host OS, but only some hours on a guest OS.
> 
> 	Could you test this patch and see if it resolves the problem?
> 
> 	This patch does a few things:
> 
> 	All of the monitor functions that run on work queues are
> modified to never unconditionally acquire RTNL; all will reschedule the
> work and return if rtnl_trylock fails.  This covers bond_mii_monitor,
> bond_activebackup_arp_mon, and bond_alb_monitor.
> 
> 	The "clear out the work queues" calls in bond_close and
> bond_uninit now call cancel_delayed_work_sync, which should either
> delete a pending work item, or wait for an executing item to complete.
> I chose cancel_ over the original patch's flush_ because we just want
> the work queue stopped.  We don't need to have any pending items execute
> if they're not already running.
> 
> 	Also in reference to the previous, I'm not sure if we still need
> to check for delayed_work_pending, but I've left those checks in place.
> 
> 	Remove the "kill_timers" field and all references to it.  If
> cancel_delayed_work_sync is safe to use, we do not need an extra
> sentinel.
> 
> 	Lastly, for testing purposes only, the bond_alb_monitor in this
> patch includes an unconditional call to rtnl_trylock(); this is an
> artificial way to make the race in that function easier to test for,
> because the real race is very difficult to hit.
> 
> 	This patch is against net-next as of yesterday.
> 
> 	Comments?
> 
> 	-J
> 
> 
> [...]
> 
> ---
> 	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
> 



* Re: [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close
  2011-10-28  1:52                 ` HAYASAKA Mitsuo
@ 2011-10-28  3:15                   ` David Miller
  2011-10-29  1:42                     ` [PATCH net-next] bonding: eliminate bond_close race conditions Jay Vosburgh
  0 siblings, 1 reply; 13+ messages in thread
From: David Miller @ 2011-10-28  3:15 UTC (permalink / raw)
  To: mitsuo.hayasaka.hu
  Cc: fubar, netdev, xiyou.wangcong, shemminger, andy, linux-kernel,
	yrl.pp-manager.tt

From: HAYASAKA Mitsuo <mitsuo.hayasaka.hu@hitachi.com>
Date: Fri, 28 Oct 2011 10:52:31 +0900

> I checked your patch, and no problems have been observed for
> almost one day, although I could reproduce the BUG on the latest
> net-next without it.
> 
> I think this patch works well.
> 
> Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>

Jay, please formally submit this patch with proper signoffs etc.

Thanks!


* [PATCH net-next] bonding: eliminate bond_close race conditions
  2011-10-28  3:15                   ` David Miller
@ 2011-10-29  1:42                     ` Jay Vosburgh
  2011-10-30  7:13                       ` David Miller
  0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2011-10-29  1:42 UTC (permalink / raw)
  To: David Miller
  Cc: mitsuo.hayasaka.hu, netdev, xiyou.wangcong, shemminger, andy,
	linux-kernel, yrl.pp-manager.tt


	This patch resolves two sets of race conditions.

	Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> reported the
first, as follows:

The bond_close() calls cancel_delayed_work() to cancel delayed works.
It, however, cannot cancel works that were already queued in the workqueue.
The bond_open() initializes work->data, and process_one_work() refers to
get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
work->data has been initialized. Thus, a panic occurs.

	He included a patch that converted the cancel_delayed_work calls
in bond_close to flush_delayed_work_sync, which eliminated the above
problem.

	His patch is incorporated, at least in principle, into this
patch.  In this patch, we use cancel_delayed_work_sync in place of
flush_delayed_work_sync, and also convert bond_uninit in addition to
bond_close.

	This conversion to _sync, however, opens new races between
bond_close and three periodically executing workqueue functions:
bond_mii_monitor, bond_alb_monitor and bond_activebackup_arp_mon.

	The race occurs because bond_close and bond_uninit are always
called with RTNL held, and these workqueue functions may acquire RTNL to
perform failover-related activities.  If bond_close or bond_uninit is
waiting in cancel_delayed_work_sync, deadlock occurs.

	These deadlocks are resolved by having the workqueue functions
acquire RTNL conditionally.  If the rtnl_trylock() fails, the functions
reschedule and return immediately.  For the cases that are attempting to
perform link failover, a delay of 1 is used; for the other cases, the
normal interval is used (as those activities are not as time critical).

	Additionally, the bond_mii_monitor function now stores the delay
in a variable (mimicking the structure of activebackup_arp_mon).

	Lastly, all of the above renders the kill_timers sentinel moot,
and therefore it has been removed.

Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>

---
 drivers/net/bonding/bond_3ad.c  |    8 +---
 drivers/net/bonding/bond_alb.c  |   16 +++----
 drivers/net/bonding/bond_main.c |   96 +++++++++++++++++++++------------------
 drivers/net/bonding/bonding.h   |    1 -
 4 files changed, 61 insertions(+), 60 deletions(-)

diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index b33c099..0ae0d7c 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -2110,9 +2110,6 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	//check if there are any slaves
 	if (bond->slave_cnt == 0)
 		goto re_arm;
@@ -2161,9 +2158,8 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
 	}
 
 re_arm:
-	if (!bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
-out:
+	queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
+
 	read_unlock(&bond->lock);
 }
 
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index d4fbd2e..106b88a 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -1343,10 +1343,6 @@ void bond_alb_monitor(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers) {
-		goto out;
-	}
-
 	if (bond->slave_cnt == 0) {
 		bond_info->tx_rebalance_counter = 0;
 		bond_info->lp_counter = 0;
@@ -1401,10 +1397,13 @@ void bond_alb_monitor(struct work_struct *work)
 
 			/*
 			 * dev_set_promiscuity requires rtnl and
-			 * nothing else.
+			 * nothing else.  Avoid race with bond_close.
 			 */
 			read_unlock(&bond->lock);
-			rtnl_lock();
+			if (!rtnl_trylock()) {
+				read_lock(&bond->lock);
+				goto re_arm;
+			}
 
 			bond_info->rlb_promisc_timeout_counter = 0;
 
@@ -1440,9 +1439,8 @@ void bond_alb_monitor(struct work_struct *work)
 	}
 
 re_arm:
-	if (!bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
-out:
+	queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
+
 	read_unlock(&bond->lock);
 }
 
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 71efff3..9931a16 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -774,9 +774,6 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	/* rejoin all groups on bond device */
 	__bond_resend_igmp_join_requests(bond->dev);
 
@@ -790,9 +787,9 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
 			__bond_resend_igmp_join_requests(vlan_dev);
 	}
 
-	if ((--bond->igmp_retrans > 0) && !bond->kill_timers)
+	if (--bond->igmp_retrans > 0)
 		queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5);
-out:
+
 	read_unlock(&bond->lock);
 }
 
@@ -2518,10 +2515,11 @@ void bond_mii_monitor(struct work_struct *work)
 	struct bonding *bond = container_of(work, struct bonding,
 					    mii_work.work);
 	bool should_notify_peers = false;
+	unsigned long delay;
 
 	read_lock(&bond->lock);
-	if (bond->kill_timers)
-		goto out;
+
+	delay = msecs_to_jiffies(bond->params.miimon);
 
 	if (bond->slave_cnt == 0)
 		goto re_arm;
@@ -2530,7 +2528,15 @@ void bond_mii_monitor(struct work_struct *work)
 
 	if (bond_miimon_inspect(bond)) {
 		read_unlock(&bond->lock);
-		rtnl_lock();
+
+		/* Race avoidance with bond_close cancel of workqueue */
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			delay = 1;
+			should_notify_peers = false;
+			goto re_arm;
+		}
+
 		read_lock(&bond->lock);
 
 		bond_miimon_commit(bond);
@@ -2541,14 +2547,18 @@ void bond_mii_monitor(struct work_struct *work)
 	}
 
 re_arm:
-	if (bond->params.miimon && !bond->kill_timers)
-		queue_delayed_work(bond->wq, &bond->mii_work,
-				   msecs_to_jiffies(bond->params.miimon));
-out:
+	if (bond->params.miimon)
+		queue_delayed_work(bond->wq, &bond->mii_work, delay);
+
 	read_unlock(&bond->lock);
 
 	if (should_notify_peers) {
-		rtnl_lock();
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			bond->send_peer_notif++;
+			read_unlock(&bond->lock);
+			return;
+		}
 		netdev_bonding_change(bond->dev, NETDEV_NOTIFY_PEERS);
 		rtnl_unlock();
 	}
@@ -2790,9 +2800,6 @@ void bond_loadbalance_arp_mon(struct work_struct *work)
 
 	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
 
-	if (bond->kill_timers)
-		goto out;
-
 	if (bond->slave_cnt == 0)
 		goto re_arm;
 
@@ -2889,9 +2896,9 @@ void bond_loadbalance_arp_mon(struct work_struct *work)
 	}
 
 re_arm:
-	if (bond->params.arp_interval && !bond->kill_timers)
+	if (bond->params.arp_interval)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
-out:
+
 	read_unlock(&bond->lock);
 }
 
@@ -3132,9 +3139,6 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 
 	read_lock(&bond->lock);
 
-	if (bond->kill_timers)
-		goto out;
-
 	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
 
 	if (bond->slave_cnt == 0)
@@ -3144,7 +3148,15 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 
 	if (bond_ab_arp_inspect(bond, delta_in_ticks)) {
 		read_unlock(&bond->lock);
-		rtnl_lock();
+
+		/* Race avoidance with bond_close flush of workqueue */
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			delta_in_ticks = 1;
+			should_notify_peers = false;
+			goto re_arm;
+		}
+
 		read_lock(&bond->lock);
 
 		bond_ab_arp_commit(bond, delta_in_ticks);
@@ -3157,13 +3169,18 @@ void bond_activebackup_arp_mon(struct work_struct *work)
 	bond_ab_arp_probe(bond);
 
 re_arm:
-	if (bond->params.arp_interval && !bond->kill_timers)
+	if (bond->params.arp_interval)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
-out:
+
 	read_unlock(&bond->lock);
 
 	if (should_notify_peers) {
-		rtnl_lock();
+		if (!rtnl_trylock()) {
+			read_lock(&bond->lock);
+			bond->send_peer_notif++;
+			read_unlock(&bond->lock);
+			return;
+		}
 		netdev_bonding_change(bond->dev, NETDEV_NOTIFY_PEERS);
 		rtnl_unlock();
 	}
@@ -3425,8 +3442,6 @@ static int bond_open(struct net_device *bond_dev)
 	struct slave *slave;
 	int i;
 
-	bond->kill_timers = 0;
-
 	/* reset slave->backup and slave->inactive */
 	read_lock(&bond->lock);
 	if (bond->slave_cnt > 0) {
@@ -3495,33 +3510,30 @@ static int bond_close(struct net_device *bond_dev)
 
 	bond->send_peer_notif = 0;
 
-	/* signal timers not to re-arm */
-	bond->kill_timers = 1;
-
 	write_unlock_bh(&bond->lock);
 
 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
-		cancel_delayed_work(&bond->mii_work);
+		cancel_delayed_work_sync(&bond->mii_work);
 	}
 
 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
-		cancel_delayed_work(&bond->arp_work);
+		cancel_delayed_work_sync(&bond->arp_work);
 	}
 
 	switch (bond->params.mode) {
 	case BOND_MODE_8023AD:
-		cancel_delayed_work(&bond->ad_work);
+		cancel_delayed_work_sync(&bond->ad_work);
 		break;
 	case BOND_MODE_TLB:
 	case BOND_MODE_ALB:
-		cancel_delayed_work(&bond->alb_work);
+		cancel_delayed_work_sync(&bond->alb_work);
 		break;
 	default:
 		break;
 	}
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		cancel_delayed_work_sync(&bond->mcast_work);
 
 	if (bond_is_lb(bond)) {
 		/* Must be called only after all
@@ -4368,26 +4380,22 @@ static void bond_setup(struct net_device *bond_dev)
 
 static void bond_work_cancel_all(struct bonding *bond)
 {
-	write_lock_bh(&bond->lock);
-	bond->kill_timers = 1;
-	write_unlock_bh(&bond->lock);
-
 	if (bond->params.miimon && delayed_work_pending(&bond->mii_work))
-		cancel_delayed_work(&bond->mii_work);
+		cancel_delayed_work_sync(&bond->mii_work);
 
 	if (bond->params.arp_interval && delayed_work_pending(&bond->arp_work))
-		cancel_delayed_work(&bond->arp_work);
+		cancel_delayed_work_sync(&bond->arp_work);
 
 	if (bond->params.mode == BOND_MODE_ALB &&
 	    delayed_work_pending(&bond->alb_work))
-		cancel_delayed_work(&bond->alb_work);
+		cancel_delayed_work_sync(&bond->alb_work);
 
 	if (bond->params.mode == BOND_MODE_8023AD &&
 	    delayed_work_pending(&bond->ad_work))
-		cancel_delayed_work(&bond->ad_work);
+		cancel_delayed_work_sync(&bond->ad_work);
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		cancel_delayed_work_sync(&bond->mcast_work);
 }
 
 /*
diff --git a/drivers/net/bonding/bonding.h b/drivers/net/bonding/bonding.h
index 82fec5f..1aecc37 100644
--- a/drivers/net/bonding/bonding.h
+++ b/drivers/net/bonding/bonding.h
@@ -222,7 +222,6 @@ struct bonding {
 			       struct slave *);
 	rwlock_t lock;
 	rwlock_t curr_slave_lock;
-	s8       kill_timers;
 	u8	 send_peer_notif;
 	s8	 setup_by_slave;
 	s8       igmp_retrans;
-- 
1.7.1



* Re: [PATCH net-next] bonding: eliminate bond_close race conditions
  2011-10-29  1:42                     ` [PATCH net-next] bonding: eliminate bond_close race conditions Jay Vosburgh
@ 2011-10-30  7:13                       ` David Miller
  0 siblings, 0 replies; 13+ messages in thread
From: David Miller @ 2011-10-30  7:13 UTC (permalink / raw)
  To: fubar
  Cc: mitsuo.hayasaka.hu, netdev, xiyou.wangcong, shemminger, andy,
	linux-kernel, yrl.pp-manager.tt

From: Jay Vosburgh <fubar@us.ibm.com>
Date: Fri, 28 Oct 2011 18:42:50 -0700

> 
> 	This patch resolves two sets of race conditions.
 ...
> Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>

Applied, thanks a lot Jay.


end of thread, other threads:[~2011-10-30  7:13 UTC | newest]

Thread overview: 13+ messages
2011-10-19  8:17 [PATCH net -v2] [BUGFIX] bonding: use flush_delayed_work_sync in bond_close Mitsuo Hayasaka
2011-10-19 18:01 ` Jay Vosburgh
2011-10-19 18:41   ` Stephen Hemminger
2011-10-19 19:09     ` Jay Vosburgh
2011-10-21  5:45       ` Américo Wang
2011-10-21  6:26         ` Jay Vosburgh
2011-10-22  0:59           ` Jay Vosburgh
2011-10-24  4:00             ` HAYASAKA Mitsuo
2011-10-26 17:31               ` Jay Vosburgh
2011-10-28  1:52                 ` HAYASAKA Mitsuo
2011-10-28  3:15                   ` David Miller
2011-10-29  1:42                     ` [PATCH net-next] bonding: eliminate bond_close race conditions Jay Vosburgh
2011-10-30  7:13                       ` David Miller
