From: Jens Axboe <axboe@kernel.dk>
To: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
	Pavel Begunkov <asml.silence@gmail.com>,
	Nadav Amit <nadav.amit@gmail.com>
Cc: linux-fsdevel@vger.kernel.org, io-uring@vger.kernel.org,
	LKML <linux-kernel@vger.kernel.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>
Subject: Re: Lockdep warning on io_file_data_ref_zero() with 5.10-rc5
Date: Tue, 15 Dec 2020 09:18:00 -0700
Message-ID: <8bb82057-1fdf-cb99-0549-2a1a27600d15@kernel.dk>
In-Reply-To: <e9f232a9-f28e-a987-c971-d8185c0060f5@linux.alibaba.com>

On 12/14/20 11:58 PM, Xiaoguang Wang wrote:
> hi,
> 
>> On 11/28/20 5:13 PM, Pavel Begunkov wrote:
>>> On 28/11/2020 23:59, Nadav Amit wrote:
>>>> Hello Pavel,
>>>>
>>>> I got the following lockdep splat while rebasing my work onto the 5.10-rc5
>>>> kernel (based on 5.10-rc5+).
>>>>
>>>> I did not actually confirm that the problem is triggered without my changes,
>>>> as my io_uring workload requires some kernel changes (not io_uring changes),
>>>> yet IMHO it seems pretty clear that this is a result of your commit
>>>> e297822b20e7f ("io_uring: order refnode recycling"), which acquires a lock in
>>>> io_file_data_ref_zero() inside a softirq context.
>>>
>>> Yeah, that's true. It was already reported by syzkaller and fixed by Jens, but
>>> queued for 5.11. Thanks for letting us know anyway!
>>>
>>> https://lore.kernel.org/io-uring/948d2d3b-5f36-034d-28e6-7490343a5b59@kernel.dk/T/#t
>>>
>>>
>>> Jens, I think it's for the best to add it for 5.10, at least so that lockdep
>>> doesn't complain.
>>
>> Yeah, maybe, though it's "just" a lockdep issue; it can't trigger any
>> deadlocks. I'd rather just keep it in 5.11 and ensure it goes to stable.
>> This isn't new in this series.
> Sorry, I'm not familiar with the lockdep implementation. Here I wonder why you
> say it can't trigger any deadlocks; looking at the syzbot report, it seems
> that a deadlock may happen.

Because the only time the lock is actually grabbed in bh context is when
the refcount has dropped to zero and the node is no longer used. The
classic deadlock for this pattern is when regular use spans both
contexts, so you can get:

CPU0
grab_lock()
	<softirq fires on this CPU>
	bh context, grab_lock()

deadlock: the softirq has interrupted the task holding the lock and now
spins on a lock that can never be released. But this simply cannot happen
here, as by the time we get to grabbing it from bh context, there can by
definition be no other users of it left (or new ones).
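
To make that concrete, here is a minimal C sketch of the pattern under
discussion. The struct and function names are simplified stand-ins, not
the actual io_uring code, and it assumes a percpu_ref whose release
callback can run from softirq context (e.g. via an RCU callback):

/*
 * Minimal sketch of the pattern -- illustrative names, not the
 * actual io_uring code.
 */
#include <linux/spinlock.h>
#include <linux/percpu-refcount.h>

struct file_data {
	struct percpu_ref	refs;
	spinlock_t		lock;	/* taken without disabling bh */
};

/* Process context: callers only get here while holding a reference. */
static void file_data_use(struct file_data *data)
{
	spin_lock(&data->lock);
	/* ... operate on the node ... */
	spin_unlock(&data->lock);
}

/*
 * Release callback: percpu_ref may invoke this from softirq context.
 * Lockdep sees the same lock taken both in softirq context and in
 * process context with bh enabled, and warns.  But this callback only
 * runs once the refcount has hit zero, so no process-context holder
 * can exist for the softirq to interrupt.
 */
static void file_data_ref_zero(struct percpu_ref *ref)
{
	struct file_data *data = container_of(ref, struct file_data, refs);

	spin_lock(&data->lock);
	/* ... recycle the node / final cleanup ... */
	spin_unlock(&data->lock);
}

static int file_data_init(struct file_data *data)
{
	spin_lock_init(&data->lock);
	return percpu_ref_init(&data->refs, file_data_ref_zero, 0,
			       GFP_KERNEL);
}

Lockdep classifies usage per lock class and cannot see the refcount
invariant, so it flags the pattern even though the interleaving it
warns about is unreachable here.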

-- 
Jens Axboe



Thread overview: 6+ messages
2020-11-28 23:59 Lockdep warning on io_file_data_ref_zero() with 5.10-rc5 Nadav Amit
2020-11-29  0:13 ` Pavel Begunkov
2020-11-29  0:21   ` Nadav Amit
2020-11-30 18:16   ` Jens Axboe
2020-12-15  6:58     ` Xiaoguang Wang
2020-12-15 16:18       ` Jens Axboe [this message]
