* [PATCH] workqueue: lock cwq access in drain_workqueue
@ 2011-09-09 15:22 Thomas Tuttle
2011-09-09 23:00 ` [PATCH v2] " Thomas Tuttle
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Tuttle @ 2011-09-09 15:22 UTC (permalink / raw)
To: lkml
Take cwq->gcwq->lock to avoid a race between drain_workqueue()
checking that the workqueues are empty and cwq_dec_nr_in_flight()
decrementing and then re-incrementing nr_active when it activates a
delayed work.
We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue whose remaining work would always
requeue itself on the same workqueue. We would hit this race condition
and trip the BUG_ON at workqueue.c:3080.
Patch is against HEAD as of Fri Sep 9 15:16:09 UTC 2011
(e4e436e0bd480668834fe6849a52c5397b7be4fb).
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
---
kernel/workqueue.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 25fb1b0..d610ced 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,14 @@ reflush:
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+ int cwq_flushed;
- if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+ spin_lock_irq(&cwq->gcwq->lock);
+ cwq_flushed = !cwq->nr_active
+ && list_empty(&cwq->delayed_works);
+ spin_unlock_irq(&cwq->gcwq->lock);
+
+ if (cwq_flushed)
continue;
if (++flush_cnt == 10 ||
--
1.7.3.1
* [PATCH v2] workqueue: lock cwq access in drain_workqueue
2011-09-09 15:22 [PATCH] workqueue: lock cwq access in drain_workqueue Thomas Tuttle
@ 2011-09-09 23:00 ` Thomas Tuttle
2011-09-11 1:35 ` Tejun Heo
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Tuttle @ 2011-09-09 23:00 UTC (permalink / raw)
To: lkml; +Cc: Tejun Heo
Take cwq->gcwq->lock to avoid a race between drain_workqueue()
checking that the workqueues are empty and cwq_dec_nr_in_flight()
decrementing and then re-incrementing nr_active when it activates a
delayed work.
We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue whose remaining work would always
requeue itself on the same workqueue. We would hit this race condition
and trip the BUG_ON at workqueue.c:3080.
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
---
Updated to use bool instead of int (d'oh), and Cc'ed the maintainer.
kernel/workqueue.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 25fb1b0..0c2e585 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,14 @@ reflush:
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+ bool cwq_flushed;
- if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+ spin_lock_irq(&cwq->gcwq->lock);
+ cwq_flushed = !cwq->nr_active
+ && list_empty(&cwq->delayed_works);
+ spin_unlock_irq(&cwq->gcwq->lock);
+
+ if (cwq_flushed)
continue;
if (++flush_cnt == 10 ||
--
1.7.3.1
* Re: [PATCH v2] workqueue: lock cwq access in drain_workqueue
2011-09-09 23:00 ` [PATCH v2] " Thomas Tuttle
@ 2011-09-11 1:35 ` Tejun Heo
2011-09-11 3:26 ` [PATCH v3] " Thomas Tuttle
0 siblings, 1 reply; 7+ messages in thread
From: Tejun Heo @ 2011-09-11 1:35 UTC (permalink / raw)
To: Thomas Tuttle; +Cc: lkml
Hello,
On Fri, Sep 09, 2011 at 07:00:53PM -0400, Thomas Tuttle wrote:
> Take cwq->gcwq->lock to avoid a race between drain_workqueue()
> checking that the workqueues are empty and cwq_dec_nr_in_flight()
> decrementing and then re-incrementing nr_active when it activates a
> delayed work.
Nice catch. Just a few minor nits below.
> We discovered this when a corner case in one of our drivers resulted in
> us trying to destroy a workqueue whose remaining work would always
> requeue itself on the same workqueue. We would hit this race condition
> and trip the BUG_ON at workqueue.c:3080.
>
> Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
> ---
> Updated to use bool instead of int (d'oh), and CCed maintainer.
>
> kernel/workqueue.c | 8 +++++++-
> 1 files changed, 7 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 25fb1b0..0c2e585 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2412,8 +2412,14 @@ reflush:
>
> for_each_cwq_cpu(cpu, wq) {
> struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
> + bool cwq_flushed;
Maybe "drained" would be better?
> - if (!cwq->nr_active && list_empty(&cwq->delayed_works))
> + spin_lock_irq(&cwq->gcwq->lock);
> + cwq_flushed = !cwq->nr_active
> + && list_empty(&cwq->delayed_works);
and then this should fit inside 80 columns, right?
Thanks.
--
tejun
* [PATCH v3] workqueue: lock cwq access in drain_workqueue
2011-09-11 1:35 ` Tejun Heo
@ 2011-09-11 3:26 ` Thomas Tuttle
2011-09-11 3:30 ` Tejun Heo
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Tuttle @ 2011-09-11 3:26 UTC (permalink / raw)
To: Tejun Heo, lkml
Take cwq->gcwq->lock to avoid a race between drain_workqueue()
checking that the workqueues are empty and cwq_dec_nr_in_flight()
decrementing and then re-incrementing nr_active when it activates a
delayed work.
We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue whose remaining work would always
requeue itself on the same workqueue. We would hit this race condition
and trip the BUG_ON at workqueue.c:3080.
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
---
Renamed "cwq_flushed" to "drained" as requested and rebased against
current HEAD (d0a77454c70d0449a5f87087deb8f0cb15145e90).
kernel/workqueue.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 25fb1b0..1783aab 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,13 @@ reflush:
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+ bool drained;
- if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+ spin_lock_irq(&cwq->gcwq->lock);
+ drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
+ spin_unlock_irq(&cwq->gcwq->lock);
+
+ if (drained)
continue;
if (++flush_cnt == 10 ||
--
1.7.3.1
* Re: [PATCH v3] workqueue: lock cwq access in drain_workqueue
2011-09-11 3:26 ` [PATCH v3] " Thomas Tuttle
@ 2011-09-11 3:30 ` Tejun Heo
2011-09-11 3:39 ` Thomas Tuttle
0 siblings, 1 reply; 7+ messages in thread
From: Tejun Heo @ 2011-09-11 3:30 UTC (permalink / raw)
To: Thomas Tuttle; +Cc: lkml
Hello,
On Sat, Sep 10, 2011 at 11:26:41PM -0400, Thomas Tuttle wrote:
> Take cwq->gcwq->lock to avoid a race between drain_workqueue()
> checking that the workqueues are empty and cwq_dec_nr_in_flight()
> decrementing and then re-incrementing nr_active when it activates a
> delayed work.
>
> We discovered this when a corner case in one of our drivers resulted in
> us trying to destroy a workqueue whose remaining work would always
> requeue itself on the same workqueue. We would hit this race condition
> and trip the BUG_ON at workqueue.c:3080.
>
> Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Can you please also add "Cc: stable@kernel.org" and send it to Andrew
Morton <akpm@linux-foundation.org>?
Thank you very much.
--
tejun
* [PATCH v3] workqueue: lock cwq access in drain_workqueue
2011-09-11 3:30 ` Tejun Heo
@ 2011-09-11 3:39 ` Thomas Tuttle
2011-09-11 3:48 ` Tejun Heo
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Tuttle @ 2011-09-11 3:39 UTC (permalink / raw)
To: linux-kernel, akpm; +Cc: stable, tj
Take cwq->gcwq->lock to avoid a race between drain_workqueue()
checking that the workqueues are empty and cwq_dec_nr_in_flight()
decrementing and then re-incrementing nr_active when it activates a
delayed work.
We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue whose remaining work would always
requeue itself on the same workqueue. We would hit this race condition
and trip the BUG_ON at workqueue.c:3080.
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: stable@kernel.org
---
kernel/workqueue.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 25fb1b0..1783aab 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,13 @@ reflush:
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+ bool drained;
- if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+ spin_lock_irq(&cwq->gcwq->lock);
+ drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
+ spin_unlock_irq(&cwq->gcwq->lock);
+
+ if (drained)
continue;
if (++flush_cnt == 10 ||
--
1.7.3.1
* Re: [PATCH v3] workqueue: lock cwq access in drain_workqueue
2011-09-11 3:39 ` Thomas Tuttle
@ 2011-09-11 3:48 ` Tejun Heo
0 siblings, 0 replies; 7+ messages in thread
From: Tejun Heo @ 2011-09-11 3:48 UTC (permalink / raw)
To: Thomas Tuttle; +Cc: linux-kernel, akpm, stable
On Sat, Sep 10, 2011 at 11:39:53PM -0400, Thomas Tuttle wrote:
> Take cwq->gcwq->lock to avoid a race between drain_workqueue()
> checking that the workqueues are empty and cwq_dec_nr_in_flight()
> decrementing and then re-incrementing nr_active when it activates a
> delayed work.
>
> We discovered this when a corner case in one of our drivers resulted in
> us trying to destroy a workqueue whose remaining work would always
> requeue itself on the same workqueue. We would hit this race condition
> and trip the BUG_ON at workqueue.c:3080.
>
> Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
> Acked-by: Tejun Heo <tj@kernel.org>
> Cc: stable@kernel.org
Andrew, can you please route this patch through -mm? korg is still
down and wq is unlikely to receive many more patches in this cycle.
Thank you.
--
tejun
Thread overview: 7+ messages
2011-09-09 15:22 [PATCH] workqueue: lock cwq access in drain_workqueue Thomas Tuttle
2011-09-09 23:00 ` [PATCH v2] " Thomas Tuttle
2011-09-11 1:35 ` Tejun Heo
2011-09-11 3:26 ` [PATCH v3] " Thomas Tuttle
2011-09-11 3:30 ` Tejun Heo
2011-09-11 3:39 ` Thomas Tuttle
2011-09-11 3:48 ` Tejun Heo