From: Alan Stern <stern@rowland.harvard.edu>
To: Pete Zaitcev <zaitcev@redhat.com>
Cc: syzbot <syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com>,
<arnd@arndb.de>, <gregkh@linuxfoundation.org>,
<jrdr.linux@gmail.com>, <keescook@chromium.org>,
<kstewart@linuxfoundation.org>,
Kernel development list <linux-kernel@vger.kernel.org>,
USB list <linux-usb@vger.kernel.org>,
<syzkaller-bugs@googlegroups.com>, <tglx@linutronix.de>,
<viro@zeniv.linux.org.uk>
Subject: Re: possible deadlock in mon_bin_vma_fault
Date: Wed, 20 Nov 2019 13:18:07 -0500 (EST)
Message-ID: <Pine.LNX.4.44L0.1911201313480.1498-100000@iolanthe.rowland.org>
In-Reply-To: <20191120113314.761fce32@suzdal.zaitcev.lan>

On Wed, 20 Nov 2019, Pete Zaitcev wrote:
> On Wed, 20 Nov 2019 11:14:05 -0500 (EST)
> Alan Stern <stern@rowland.harvard.edu> wrote:
>
> > As it happens, I spent a little time investigating this bug report just
> > yesterday. It seems to me that the easiest fix would be to disallow
> > resizing the buffer while it is mapped by any users. (Besides,
> > allowing that seems like a bad idea in any case.)
> >
> > Pete, does that seem reasonable to you?
>
> Actually, please hold on a little, I need to think more about this.
> The deadlock is between mon_bin_read and mon_bin_vma_fault.
> To disallow resizing isn't going to fix _that_.
As I understand it (and my understanding is pretty limited, since I
only started to look seriously at the code one day ago), the reason why
mon_bin_vma_fault acquires fetch_lock is to prevent a resize from
happening while the fault is being handled. Is there another reason?

If you disallow resizing while the buffer is mapped, then
mon_bin_vma_fault won't need to hold fetch_lock at all. That would fix
the deadlock, right?

Alan Stern