From: Stephen Boyd <sboyd@codeaurora.org>
To: Yong Zhang <yong.zhang0@gmail.com>
Cc: linux-kernel@vger.kernel.org, Tejun Heo <tj@kernel.org>,
netdev@vger.kernel.org, Ben Dooks <ben-linux@fluff.org>
Subject: Re: [PATCH 1/2] workqueue: Catch more locking problems with flush_work()
Date: Thu, 19 Apr 2012 23:26:47 -0700 [thread overview]
Message-ID: <4F9101A7.5010100@codeaurora.org> (raw)
In-Reply-To: <20120420060101.GA16563@zhy>
On 4/19/2012 11:01 PM, Yong Zhang wrote:
> On Fri, Apr 20, 2012 at 01:26:33PM +0800, Yong Zhang wrote:
>> On Thu, Apr 19, 2012 at 11:36:32AM -0700, Stephen Boyd wrote:
>>> Does looking at the second patch help? Basically schedule_work() can run
>>> the callback right between the time the mutex is acquired and
>>> flush_work() is called:
>>>
>>> CPU0                            CPU1
>>>
>>> <irq>
>>> schedule_work()                 mutex_lock(&mutex)
>>> <irq return>
>>> my_work()                       flush_work()
>>>   mutex_lock(&mutex)
>>>   <deadlock>
>> I get your point. It is a problem. But your patch could introduce a
>> false positive, since by the time flush_work() is called that very work
>> may have finished running already.
>>
>> So I think we need the lock_map_acquire()/lock_map_release() only when
>> the work is actually being processed, no?
> But start_flush_work() has tried to take care of this issue, except that
> it doesn't add work->lockdep_map into the chain.
>
> So does the patch below help?
>
[snip]
> @@ -2461,6 +2461,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> lock_map_acquire(&cwq->wq->lockdep_map);
> else
> lock_map_acquire_read(&cwq->wq->lockdep_map);
> + lock_map_acquire(&work->lockdep_map);
> + lock_map_release(&work->lockdep_map);
> lock_map_release(&cwq->wq->lockdep_map);
>
> return true;
No, this doesn't help. The whole point of the patch is to get lockdep to
complain in the case where the work is not queued; that case is not a
false positive. If the work is running (i.e. when start_flush_work()
returns true), we already get a lockdep warning solely from the
lock_map_acquire() on cwq->wq->lockdep_map.
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
Thread overview: 19+ messages
2012-04-19 3:25 [PATCH 1/2] workqueue: Catch more locking problems with flush_work() Stephen Boyd
2012-04-19 3:25 ` [PATCH 2/2] ks8851: Fix mutex deadlock in ks8851_net_stop() Stephen Boyd
2012-04-21 19:34 ` David Miller
2012-04-19 8:10 ` [PATCH 1/2] workqueue: Catch more locking problems with flush_work() Yong Zhang
2012-04-19 18:36 ` Stephen Boyd
2012-04-20 5:26 ` Yong Zhang
2012-04-20 6:01 ` Yong Zhang
2012-04-20 6:26 ` Stephen Boyd [this message]
2012-04-20 7:18 ` Yong Zhang
2012-04-20 8:18 ` Stephen Boyd
2012-04-20 8:32 ` Yong Zhang
2012-04-21 0:32 ` Yong Zhang
2012-04-19 15:28 ` Tejun Heo
2012-04-19 18:10 ` Stephen Boyd
2012-04-20 17:35 ` Tejun Heo
2012-04-20 23:15 ` Stephen Boyd
2012-04-21 0:28 ` [PATCHv2] " Stephen Boyd
2012-04-21 0:34 ` Yong Zhang
2012-04-23 18:07 ` Tejun Heo