From: Adrian Hunter <adrian.hunter@intel.com>
To: Tejun Heo <tj@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Cc: Ulrich Obergfell <uobergfe@redhat.com>,
Ingo Molnar <mingo@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] workqueue: warn if memory reclaim tries to flush !WQ_MEM_RECLAIM workqueue
Date: Thu, 10 Mar 2016 17:12:54 +0200 [thread overview]
Message-ID: <56E18EF6.1010006@intel.com> (raw)
In-Reply-To: <20151203192616.GJ27463@mtj.duckdns.org>
On 03/12/15 21:26, Tejun Heo wrote:
> Task or work item involved in memory reclaim trying to flush a
> non-WQ_MEM_RECLAIM workqueue or one of its work items can lead to
> deadlock. Trigger WARN_ONCE() if such conditions are detected.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
> Hello,
>
> So, something like this. Seems to work fine here. If there's no
> objection, I'm gonna push it through wq/for-4.5.
>
> Thanks.
>
> kernel/workqueue.c | 35 +++++++++++++++++++++++++++++++++++
> 1 file changed, 35 insertions(+)
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2330,6 +2330,37 @@ repeat:
> goto repeat;
> }
>
> +/**
> + * check_flush_dependency - check for flush dependency sanity
> + * @target_wq: workqueue being flushed
> + * @target_work: work item being flushed (NULL for workqueue flushes)
> + *
> + * %current is trying to flush the whole @target_wq or @target_work on it.
> + * If @target_wq doesn't have %WQ_MEM_RECLAIM, verify that %current is not
> + * reclaiming memory or running on a workqueue which doesn't have
> + * %WQ_MEM_RECLAIM as that can break forward-progress guarantee leading to
> + * a deadlock.
> + */
> +static void check_flush_dependency(struct workqueue_struct *target_wq,
> + struct work_struct *target_work)
> +{
> + work_func_t target_func = target_work ? target_work->func : NULL;
> + struct worker *worker;
> +
> + if (target_wq->flags & WQ_MEM_RECLAIM)
> + return;
> +
> + worker = current_wq_worker();
> +
> + WARN_ONCE(current->flags & PF_MEMALLOC,
> + "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%pf",
> + current->pid, current->comm, target_wq->name, target_func);
> + WARN_ONCE(worker && (worker->current_pwq->wq->flags & WQ_MEM_RECLAIM),
> + "workqueue: WQ_MEM_RECLAIM %s:%pf is flushing !WQ_MEM_RECLAIM %s:%pf",
> + worker->current_pwq->wq->name, worker->current_func,
> + target_wq->name, target_func);
> +}
> +
> struct wq_barrier {
> struct work_struct work;
> struct completion done;
> @@ -2539,6 +2570,8 @@ void flush_workqueue(struct workqueue_st
> list_add_tail(&this_flusher.list, &wq->flusher_overflow);
> }
>
> + check_flush_dependency(wq, NULL);
> +
> mutex_unlock(&wq->mutex);
>
> wait_for_completion(&this_flusher.done);
> @@ -2711,6 +2744,8 @@ static bool start_flush_work(struct work
> pwq = worker->current_pwq;
> }
>
> + check_flush_dependency(pwq->wq, work);
> +
> insert_wq_barrier(pwq, barr, work, worker);
> spin_unlock_irq(&pool->lock);
>
>
I am hitting the warnings when using cancel_delayed_work_sync(). I would
have thought that forward progress would still be guaranteed in that case.
Is it true that it is not?