linux-usb.vger.kernel.org archive mirror
* Re: possible deadlock in mon_bin_vma_fault
       [not found] <0000000000006ad6030574fead2e@google.com>
@ 2019-11-20 12:01 ` syzbot
  2019-11-20 16:14   ` Alan Stern
  0 siblings, 1 reply; 25+ messages in thread
From: syzbot @ 2019-11-20 12:01 UTC (permalink / raw)
  To: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, syzkaller-bugs, tglx, viro, zaitcev

syzbot has bisected this bug to:

commit 46eb14a6e1585d99c1b9f58d0e7389082a5f466b
Author: Pete Zaitcev <zaitcev@redhat.com>
Date:   Mon Jan 8 21:46:41 2018 +0000

     USB: fix usbmon BUG trigger

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=14deb012e00000
start commit:   58c3f14f Merge tag 'riscv-for-linus-4.19-rc2' of git://git..
git tree:       upstream
final crash:    https://syzkaller.appspot.com/x/report.txt?x=16deb012e00000
console output: https://syzkaller.appspot.com/x/log.txt?x=12deb012e00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=531a917630d2a492
dashboard link: https://syzkaller.appspot.com/bug?extid=56f9673bb4cdcbeb0e92
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=13dca13e400000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=17cbe492400000

Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com
Fixes: 46eb14a6e158 ("USB: fix usbmon BUG trigger")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection


* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 12:01 ` possible deadlock in mon_bin_vma_fault syzbot
@ 2019-11-20 16:14   ` Alan Stern
  2019-11-20 17:12     ` Pete Zaitcev
  2019-11-20 17:33     ` Pete Zaitcev
  0 siblings, 2 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-20 16:14 UTC (permalink / raw)
  To: Pete Zaitcev, syzbot
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Wed, 20 Nov 2019, syzbot wrote:

> syzbot has bisected this bug to:
> 
> commit 46eb14a6e1585d99c1b9f58d0e7389082a5f466b
> Author: Pete Zaitcev <zaitcev@redhat.com>
> Date:   Mon Jan 8 21:46:41 2018 +0000
> 
>      USB: fix usbmon BUG trigger

Here's part of the commit description:

    USB: fix usbmon BUG trigger
    
    Automated tests triggered this by opening usbmon and accessing the
    mmap while simultaneously resizing the buffers. This bug was with
    us since 2006, because typically applications only size the buffers
    once and thus avoid racing. Reported by Kirill A. Shutemov.

As it happens, I spent a little time investigating this bug report just
yesterday.  It seems to me that the easiest fix would be to disallow
resizing the buffer while it is mapped by any users.  (Besides,
allowing that seems like a bad idea in any case.)

Pete, does that seem reasonable to you?

Alan Stern



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 16:14   ` Alan Stern
@ 2019-11-20 17:12     ` Pete Zaitcev
  2019-11-20 18:47       ` Alan Stern
  2019-11-20 17:33     ` Pete Zaitcev
  1 sibling, 1 reply; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-20 17:12 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Wed, 20 Nov 2019 11:14:05 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> As it happens, I spent a little time investigating this bug report just
> yesterday.  It seems to me that the easiest fix would be to disallow
> resizing the buffer while it is mapped by any users.  (Besides,
> allowing that seems like a bad idea in any case.)
> 
> Pete, does that seem reasonable to you?

Yes, it does seem reasonable.

I think I understand it now. My fallacy was thinking that since everything
is nailed down as long as fetch_lock is held, it was okay to grab whatever
page from our pagemap. What happens later is an attempt to get pages of the
new buffer while looking at them through the old VMA, in mon_bin_vma_fault.

It seems to me that the use counter, mmap_active, is correct and sufficient
to check in the ioctl.

-- Pete

P.S. One thing that vaguely bothers me on this is that the bot
bisected to the commit that clearly fixed worse issues.

P.P.S. Like this?

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..e27d99606adb 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1020,6 +1020,9 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
                int size;
                struct mon_pgmap *vec;
 
+               if (rp->mmap_active)
+                       return -EBUSY;
+
                if (arg < BUFF_MIN || arg > BUFF_MAX)
                        return -EINVAL;
 



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 16:14   ` Alan Stern
  2019-11-20 17:12     ` Pete Zaitcev
@ 2019-11-20 17:33     ` Pete Zaitcev
  2019-11-20 18:18       ` Alan Stern
  1 sibling, 1 reply; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-20 17:33 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Wed, 20 Nov 2019 11:14:05 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> As it happens, I spent a little time investigating this bug report just
> yesterday.  It seems to me that the easiest fix would be to disallow
> resizing the buffer while it is mapped by any users.  (Besides,
> allowing that seems like a bad idea in any case.)
> 
> Pete, does that seem reasonable to you?

Actually, please hold on a little, I need to think more about this.
The deadlock is between mon_bin_read and mon_bin_vma_fault.
To disallow resizing isn't going to fix _that_.
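For the record, the inversion as I read it, reconstructed from the code
paths rather than copied from syzbot's report (the chains in the actual
report may be longer):

    mon_bin_read()                        page fault on a usbmon mapping
      mutex_lock(&rp->fetch_lock)           down_read(&mm->mmap_sem)
      copy_to_user()                        mon_bin_vma_fault()
        may fault on the user buffer          mutex_lock(&rp->fetch_lock)
          down_read(&mm->mmap_sem)

One side holds fetch_lock and waits for mmap_sem, the other holds mmap_sem
and waits for fetch_lock.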

-- Pete



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 17:33     ` Pete Zaitcev
@ 2019-11-20 18:18       ` Alan Stern
  0 siblings, 0 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-20 18:18 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Wed, 20 Nov 2019, Pete Zaitcev wrote:

> On Wed, 20 Nov 2019 11:14:05 -0500 (EST)
> Alan Stern <stern@rowland.harvard.edu> wrote:
> 
> > As it happens, I spent a little time investigating this bug report just
> > yesterday.  It seems to me that the easiest fix would be to disallow
> > resizing the buffer while it is mapped by any users.  (Besides,
> > allowing that seems like a bad idea in any case.)
> > 
> > Pete, does that seem reasonable to you?
> 
> Actually, please hold on a little, I need to think more about this.
> The deadlock is between mon_bin_read and mon_bin_vma_fault.
> To disallow resizing isn't going to fix _that_.

As I understand it (and my understanding is pretty limited, since I
only started to look seriously at the code one day ago), the reason why
mon_bin_vma_fault acquires fetch_lock is to prevent a resize from
happening while the fault is being handled.  Is there another reason?

If you disallow resizing while the buffer is mapped then 
mon_bin_vma_fault won't need to hold fetch_lock at all.  That would fix 
the deadlock, right?

Alan Stern



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 17:12     ` Pete Zaitcev
@ 2019-11-20 18:47       ` Alan Stern
  2019-11-21 14:48         ` Pete Zaitcev
  0 siblings, 1 reply; 25+ messages in thread
From: Alan Stern @ 2019-11-20 18:47 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Wed, 20 Nov 2019, Pete Zaitcev wrote:

> On Wed, 20 Nov 2019 11:14:05 -0500 (EST)
> Alan Stern <stern@rowland.harvard.edu> wrote:
> 
> > As it happens, I spent a little time investigating this bug report just
> > yesterday.  It seems to me that the easiest fix would be to disallow
> > resizing the buffer while it is mapped by any users.  (Besides,
> > allowing that seems like a bad idea in any case.)
> > 
> > Pete, does that seem reasonable to you?
> 
> Yes, it does seem reasonable.
> 
> I think I understand it now. My fallacy was thinking that since everything
> is nailed down as long as fetch_lock is held, it was okay to grab whatever
> page from our pagemap. What happens later is an attempt to get pages of the
> new buffer while looking at them through the old VMA, in mon_bin_vma_fault.
> 
> It seems to me that the use counter, mmap_active, is correct and sufficient
> to check in the ioctl.
> 
> -- Pete
> 
> P.S. One thing that vaguely bothers me on this is that the bot
> bisected to the commit that clearly fixed worse issues.
> 
> P.P.S. Like this?
> 
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..e27d99606adb 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1020,6 +1020,9 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>                 int size;
>                 struct mon_pgmap *vec;
>  
> +               if (rp->mmap_active)
> +                       return -EBUSY;
> +
>                 if (arg < BUFF_MIN || arg > BUFF_MAX)
>                         return -EINVAL;

Like that, yes, but the test has to be made while fetch_lock is held.  
Otherwise there's still a race: One thread could pass the test and then
do the resize, and in between another thread could map the buffer and
incur a fault.
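
Schematically, with the test done outside fetch_lock (a hypothetical
interleaving, using the names from mon_bin.c):

    thread A: resize ioctl                 thread B: mmap() + access
      sees rp->mmap_active == 0
                                            mmap() succeeds, mmap_active++
      swaps in the new b_vec under
      fetch_lock, frees the old one
                                            fault: mon_bin_vma_fault() returns
                                            a page of the new buffer through
                                            the mapping B set up against the
                                            old one

So the test only helps if nothing can bump mmap_active between the test and
the swap.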

Incidentally, the comment for fetch_lock says that it protects b_read 
and b_out, but mon_bin_vma_fault doesn't use either of those fields.

Alan Stern



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-20 18:47       ` Alan Stern
@ 2019-11-21 14:48         ` Pete Zaitcev
  2019-11-21 16:20           ` Alan Stern
  0 siblings, 1 reply; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-21 14:48 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Wed, 20 Nov 2019 13:47:00 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> > +               if (rp->mmap_active)
> > +                       return -EBUSY;

> Like that, yes, but the test has to be made while fetch_lock is held.  

Certainly, thanks. I was rushing just to add a postscriptum.

> Incidentally, the comment for fetch_lock says that it protects b_read 
> and b_out, but mon_bin_vma_fault doesn't use either of those fields.

I probably should change that comment to "protect the integrity of the
circular buffer, such as b_out".

Anyway... If you are looking at it too, what do you think about not using
any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
to be "safe", but it only uses things that are constants unless we're
opening and closing; a process cannot make page faults unless it has
something mapped; and that is only possible if the device is open and stays
open. Can you find a hole in this reasoning?

-- Pete



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-21 14:48         ` Pete Zaitcev
@ 2019-11-21 16:20           ` Alan Stern
  2019-11-21 16:46             ` Pete Zaitcev
  2019-11-21 23:38             ` Pete Zaitcev
  0 siblings, 2 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-21 16:20 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Thu, 21 Nov 2019, Pete Zaitcev wrote:

> Anyway... If you are looking at it too, what do you think about not using
> any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
> to be "safe", but it only uses things that are constants unless we're
> opening and closing; a process cannot make page faults unless it has
> some thing mapped; and that is only possible if device is open and stays
> open. Can you find a hole in this reasoning?

I think you're right.  But one thing concerns me: What happens if the 
same buffer is mapped by more than one process?  Do you allow that?  I 
haven't read the code in enough detail to see.

Alan Stern



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-21 16:20           ` Alan Stern
@ 2019-11-21 16:46             ` Pete Zaitcev
  2019-11-21 23:38             ` Pete Zaitcev
  1 sibling, 0 replies; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-21 16:46 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Thu, 21 Nov 2019 11:20:20 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> On Thu, 21 Nov 2019, Pete Zaitcev wrote:
> 
> > Anyway... If you are looking at it too, what do you think about not using
> > any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
> > to be "safe", but it only uses things that are constants unless we're
> > opening and closing; a process cannot make page faults unless it has
> > some thing mapped; and that is only possible if device is open and stays
> > open. Can you find a hole in this reasoning?  
> 
> I think you're right.  But one thing concerns me: What happens if the 
> same buffer is mapped by more than one process?  Do you allow that?

Yes, we allow two processes reading from the mmap at the same time.
They may miss events, but there should be no issue with the internal
consistency of any pointers in usbmon, and no crashes or deadlocks.
Also, we cannot prohibit that. Imagine a process that does open(),
mmap(), fork()/clone().
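
For illustration, a minimal userspace sketch of that pattern (device node,
mapping length and error handling are all simplified):

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* usbmon "bus 0" node sees all buses; needs privilege */
		int fd = open("/dev/usbmon0", O_RDONLY);
		/* map part of the ring buffer, read-only and shared */
		char *buf = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);

		if (fork() == 0) {
			/* the child inherits a copy of the VMA; touching it
			 * faults through mon_bin_vma_fault() just like in
			 * the parent */
			volatile char c = buf[0];
			(void)c;
			_exit(0);
		}
		/* the parent can fault the same pages concurrently */
		volatile char d = buf[0];
		(void)d;
		return 0;
	}

After the fork() there are two VMAs referring to the same reader, and either
task may fault pages in at any time.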

-- Pete



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-21 16:20           ` Alan Stern
  2019-11-21 16:46             ` Pete Zaitcev
@ 2019-11-21 23:38             ` Pete Zaitcev
  2019-11-22  7:18               ` Dmitry Vyukov
  2019-11-22 15:27               ` Alan Stern
  1 sibling, 2 replies; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-21 23:38 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Thu, 21 Nov 2019 11:20:20 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> On Thu, 21 Nov 2019, Pete Zaitcev wrote:
> 
> > Anyway... If you are looking at it too, what do you think about not using
> > any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
> > to be "safe", but it only uses things that are constants unless we're
> > opening and closing; a process cannot make page faults unless it has
> > some thing mapped; and that is only possible if device is open and stays
> > open. Can you find a hole in this reasoning?  
> 
> I think you're right. [...]

How about the appended patch, then? You like?

Do you happen to know how to refer to a syzbot report in a commit message?

-- Pete

commit 628f3bbf37eee21cce4cfbfaa6a796b129d7736d
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

    usb: Fix a deadlock in usbmon between mmap and read
    
    Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..fb7df9810bad 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 		}
@@ -1093,11 +1099,11 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 			return ret;
 		if (put_user(ret, &uptr->nfetch))
 			return -EFAULT;
-		ret = 0;
 		}
 		break;
 
-	case MON_IOCG_STATS: {
+	case MON_IOCG_STATS:
+		{
 		struct mon_bin_stats __user *sp;
 		unsigned int nevents;
 		unsigned int ndropped;
@@ -1113,7 +1119,6 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 			return -EFAULT;
 		if (put_user(nevents, &sp->queued))
 			return -EFAULT;
-
 		}
 		break;
 
@@ -1216,13 +1221,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1247,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-21 23:38             ` Pete Zaitcev
@ 2019-11-22  7:18               ` Dmitry Vyukov
  2019-11-22 15:27               ` Alan Stern
  1 sibling, 0 replies; 25+ messages in thread
From: Dmitry Vyukov @ 2019-11-22  7:18 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: Alan Stern, syzbot, Arnd Bergmann, Greg Kroah-Hartman,
	Souptick Joarder, Kees Cook, Kate Stewart,
	Kernel development list, USB list, syzkaller-bugs,
	Thomas Gleixner, Al Viro

On Fri, Nov 22, 2019 at 12:38 AM Pete Zaitcev <zaitcev@redhat.com> wrote:
>
> On Thu, 21 Nov 2019 11:20:20 -0500 (EST)
> Alan Stern <stern@rowland.harvard.edu> wrote:
>
> > On Thu, 21 Nov 2019, Pete Zaitcev wrote:
> >
> > > Anyway... If you are looking at it too, what do you think about not using
> > > any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
> > > to be "safe", but it only uses things that are constants unless we're
> > > opening and closing; a process cannot make page faults unless it has
> > > some thing mapped; and that is only possible if device is open and stays
> > > open. Can you find a hole in this reasoning?
> >
> > I think you're right. [...]
>
> How about the appended patch, then? You like?
>
> Do you happen to know how to refer to a syzbot report in a commit message?
>
> -- Pete
>
> commit 628f3bbf37eee21cce4cfbfaa6a796b129d7736d
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600
>
>     usb: Fix a deadlock in usbmon between mmap and read
>
>     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>

/\/\/\/\/\/\/\/\/\/\

Please don't forget the Reported-by

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..fb7df9810bad 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>
>                 mutex_lock(&rp->fetch_lock);
>                 spin_lock_irqsave(&rp->b_lock, flags);
> -               mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -               kfree(rp->b_vec);
> -               rp->b_vec  = vec;
> -               rp->b_size = size;
> -               rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -               rp->cnt_lost = 0;
> +               if (rp->mmap_active) {
> +                       mon_free_buff(vec, size/CHUNK_SIZE);
> +                       kfree(vec);
> +                       ret = -EBUSY;
> +               } else {
> +                       mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +                       kfree(rp->b_vec);
> +                       rp->b_vec  = vec;
> +                       rp->b_size = size;
> +                       rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +                       rp->cnt_lost = 0;
> +               }
>                 spin_unlock_irqrestore(&rp->b_lock, flags);
>                 mutex_unlock(&rp->fetch_lock);
>                 }
> @@ -1093,11 +1099,11 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>                         return ret;
>                 if (put_user(ret, &uptr->nfetch))
>                         return -EFAULT;
> -               ret = 0;
>                 }
>                 break;
>
> -       case MON_IOCG_STATS: {
> +       case MON_IOCG_STATS:
> +               {
>                 struct mon_bin_stats __user *sp;
>                 unsigned int nevents;
>                 unsigned int ndropped;
> @@ -1113,7 +1119,6 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>                         return -EFAULT;
>                 if (put_user(nevents, &sp->queued))
>                         return -EFAULT;
> -
>                 }
>                 break;
>
> @@ -1216,13 +1221,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>  static void mon_bin_vma_open(struct vm_area_struct *vma)
>  {
>         struct mon_reader_bin *rp = vma->vm_private_data;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&rp->b_lock, flags);
>         rp->mmap_active++;
> +       spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  static void mon_bin_vma_close(struct vm_area_struct *vma)
>  {
> +       unsigned long flags;
> +
>         struct mon_reader_bin *rp = vma->vm_private_data;
> +       spin_lock_irqsave(&rp->b_lock, flags);
>         rp->mmap_active--;
> +       spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  /*
> @@ -1234,16 +1247,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>         unsigned long offset, chunk_idx;
>         struct page *pageptr;
>
> -       mutex_lock(&rp->fetch_lock);
>         offset = vmf->pgoff << PAGE_SHIFT;
> -       if (offset >= rp->b_size) {
> -               mutex_unlock(&rp->fetch_lock);
> +       if (offset >= rp->b_size)
>                 return VM_FAULT_SIGBUS;
> -       }
>         chunk_idx = offset / CHUNK_SIZE;
>         pageptr = rp->b_vec[chunk_idx].pg;
>         get_page(pageptr);
> -       mutex_unlock(&rp->fetch_lock);
>         vmf->page = pageptr;
>         return 0;
>  }
>


* Re: possible deadlock in mon_bin_vma_fault
  2019-11-21 23:38             ` Pete Zaitcev
  2019-11-22  7:18               ` Dmitry Vyukov
@ 2019-11-22 15:27               ` Alan Stern
  2019-11-22 20:52                 ` Pete Zaitcev
  1 sibling, 1 reply; 25+ messages in thread
From: Alan Stern @ 2019-11-22 15:27 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Thu, 21 Nov 2019, Pete Zaitcev wrote:

> On Thu, 21 Nov 2019 11:20:20 -0500 (EST)
> Alan Stern <stern@rowland.harvard.edu> wrote:
> 
> > On Thu, 21 Nov 2019, Pete Zaitcev wrote:
> > 
> > > Anyway... If you are looking at it too, what do you think about not using
> > > any locks in mon_bin_vma_fault() at all? Isn't it valid? I think I tried
> > > to be "safe", but it only uses things that are constants unless we're
> > > opening and closing; a process cannot make page faults unless it has
> > > some thing mapped; and that is only possible if device is open and stays
> > > open. Can you find a hole in this reasoning?  
> > 
> > I think you're right. [...]
> 
> How about the appended patch, then? You like?
> 
> Do you happen to know how to refer to a syzbot report in a commit message?

As Dmitry mentioned, you should put the Reported-by: line from the
original syzbot bug report (see
<https://marc.info/?l=linux-usb&m=153601206710985&w=2>) in the patch.

> -- Pete
> 
> commit 628f3bbf37eee21cce4cfbfaa6a796b129d7736d
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600
> 
>     usb: Fix a deadlock in usbmon between mmap and read
>     
>     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
> 
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..fb7df9810bad 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>  
>  		mutex_lock(&rp->fetch_lock);
>  		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;

It would be more elegant to do the rp->mmap_active test before calling
kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.

> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>  		spin_unlock_irqrestore(&rp->b_lock, flags);
>  		mutex_unlock(&rp->fetch_lock);
>  		}
> @@ -1093,11 +1099,11 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>  			return ret;
>  		if (put_user(ret, &uptr->nfetch))
>  			return -EFAULT;
> -		ret = 0;

What's the reason for this change?

>  		}
>  		break;
>  
> -	case MON_IOCG_STATS: {
> +	case MON_IOCG_STATS:
> +		{

And this one?  This disagrees with the usual kernel style.

>  		struct mon_bin_stats __user *sp;
>  		unsigned int nevents;
>  		unsigned int ndropped;
> @@ -1113,7 +1119,6 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>  			return -EFAULT;
>  		if (put_user(nevents, &sp->queued))
>  			return -EFAULT;
> -
>  		}
>  		break;
>  
> @@ -1216,13 +1221,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>  static void mon_bin_vma_open(struct vm_area_struct *vma)
>  {
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>  
>  static void mon_bin_vma_close(struct vm_area_struct *vma)
>  {
> +	unsigned long flags;
> +
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>  
>  /*
> @@ -1234,16 +1247,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>  	unsigned long offset, chunk_idx;
>  	struct page *pageptr;
>  
> -	mutex_lock(&rp->fetch_lock);
>  	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>  		return VM_FAULT_SIGBUS;
> -	}
>  	chunk_idx = offset / CHUNK_SIZE;
>  	pageptr = rp->b_vec[chunk_idx].pg;
>  	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>  	vmf->page = pageptr;
>  	return 0;
>  }

Apart from the items mentioned above, this looks right to me.

Alan Stern



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-22 15:27               ` Alan Stern
@ 2019-11-22 20:52                 ` Pete Zaitcev
  2019-11-22 22:13                   ` Alan Stern
  0 siblings, 1 reply; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-22 20:52 UTC (permalink / raw)
  To: Alan Stern
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Fri, 22 Nov 2019 10:27:10 -0500 (EST)
Alan Stern <stern@rowland.harvard.edu> wrote:

> As Dmitry mentioned, you should put the Reported-by: line from the
> original syzbot bug report (see
> <https://marc.info/?l=linux-usb&m=153601206710985&w=2>) in the patch.

Thanks, got it. I also dropped all the cosmetic changes.

> >  		mutex_lock(&rp->fetch_lock);
> >  		spin_lock_irqsave(&rp->b_lock, flags);
> > -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> > -		kfree(rp->b_vec);
> > -		rp->b_vec  = vec;
> > -		rp->b_size = size;
> > -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> > -		rp->cnt_lost = 0;
> > +		if (rp->mmap_active) {
> > +			mon_free_buff(vec, size/CHUNK_SIZE);
> > +			kfree(vec);
> > +			ret = -EBUSY;  
> 
> It would be more elegant to do the rp->mmap_active test before calling
> kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.

Indeed it feels wrong that so much work gets discarded. However, memory
allocations can block, right? In the same time, our main objective here is
to make sure that when a page fault happens, we fill in the page that VMA
is intended to refer, and not one that was re-allocated. Therefore, I'm
trying to avoid a situation where:

1. thread A checks mmap_active, finds it at zero and proceeds into the
reallocation ioctl
2. thread A sleeps in get_free_page()
3. thread B runs mmap() and succeeds
4. thread A obtains its pages and proceeds to substitute the buffer
5. thread B (or any other) pagefaults and ends with the new, unexpected page

The code is not pretty, but I don't see an alternative. Heck, I would
love you to find more races if you can.

-- Pete

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

    usb: Fix a deadlock in usbmon between mmap and read
    
    Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
    Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }



* Re: possible deadlock in mon_bin_vma_fault
  2019-11-22 20:52                 ` Pete Zaitcev
@ 2019-11-22 22:13                   ` Alan Stern
  2019-11-22 22:13                     ` syzbot
  2019-11-22 22:13                     ` syzbot
  0 siblings, 2 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-22 22:13 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: syzbot, arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro

On Fri, 22 Nov 2019, Pete Zaitcev wrote:

> > It would be more elegant to do the rp->mmap_active test before calling
> > kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.
> 
> Indeed it feels wrong that so much work gets discarded. However, memory
> allocations can block, right? In the same time, our main objective here is
> to make sure that when a page fault happens, we fill in the page that VMA
> is intended to refer, and not one that was re-allocated. Therefore, I'm
> trying to avoid a situation where:
> 
> 1. thread A checks mmap_active, finds it at zero and proceeds into the
> reallocation ioctl
> 2. thread A sleeps in get_free_page()
> 3. thread B runs mmap() and succeeds
> 4. thread A obtains its pages and proceeds to substitute the buffer
> 5. thread B (or any other) pagefaults and ends with the new, unexpected page
> 
> The code is not pretty, but I don't see an alternative. Heck, I would
> love you to find more races if you can.

The alternative is to have the routines for mmap() hold fetch_lock
instead of b_lock.  mmap() is allowed to sleep, so that would be okay.  
Then you would also hold fetch_lock while checking mmap_active and
doing the memory allocations.  That would prevent any races -- in your
example above, thread A would acquire fetch_lock in step 1, so thread B
would block in step 3 until step 4 was finished.  Hence B would end up 
mapping the correct pages.

In practice, I don't see this being a routine problem.  How often do 
multiple threads independently try to mmap the same usbmon buffer?

Still, let's see how syzbot reacts to your current patch.  The line below 
is how you ask syzbot to test a candidate patch.

Alan Stern

#syz test: linux-4.19.y f6e27dbb1afa

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

    usb: Fix a deadlock in usbmon between mmap and read
    
    Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
    Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }





* Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-22 22:13                   ` Alan Stern
@ 2019-11-22 22:13                     ` syzbot
  2019-11-23 17:18                       ` Alan Stern
  2019-11-22 22:13                     ` syzbot
  1 sibling, 1 reply; 25+ messages in thread
From: syzbot @ 2019-11-22 22:13 UTC (permalink / raw)
  To: Alan Stern
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, stern, syzkaller-bugs, tglx, viro, zaitcev

> On Fri, 22 Nov 2019, Pete Zaitcev wrote:

>> > It would be more elegant to do the rp->mmap_active test before calling
>> > kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.

>> Indeed it feels wrong that so much work gets discarded. However, memory
>> allocations can block, right? In the same time, our main objective here  
>> is
>> to make sure that when a page fault happens, we fill in the page that VMA
>> is intended to refer, and not one that was re-allocated. Therefore, I'm
>> trying to avoid a situation where:

>> 1. thread A checks mmap_active, finds it at zero and proceeds into the
>> reallocation ioctl
>> 2. thread A sleeps in get_free_page()
>> 3. thread B runs mmap() and succeeds
>> 4. thread A obtains its pages and proceeds to substitute the buffer
>> 5. thread B (or any other) pagefaults and ends with the new, unexpected  
>> page

>> The code is not pretty, but I don't see an alternative. Heck, I would
>> love you to find more races if you can.

> The alternative is to have the routines for mmap() hold fetch_lock
> instead of b_lock.  mmap() is allowed to sleep, so that would be okay.
> Then you would also hold fetch_lock while checking mmap_active and
> doing the memory allocations.  That would prevent any races -- in your
> example above, thread A would acquire fetch_lock in step 1, so thread B
> would block in step 3 until step 4 was finished.  Hence B would end up
> mapping the correct pages.

> In practice, I don't see this being a routine problem.  How often do
> multiple threads independently try to mmap the same usbmon buffer?

> Still, let's see syzbot reacts to your current patch.  The line below
> is how you ask syzbot to test a candidate patch.

> Alan Stern

> #syz test: linux-4.19.y f6e27dbb1afa

"linux-4.19.y" does not look like a valid git repo address.


> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600

>      usb: Fix a deadlock in usbmon between mmap and read

>      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
>      Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file,  
> unsigned int cmd, unsigned long arg

>   		mutex_lock(&rp->fetch_lock);
>   		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>   		spin_unlock_irqrestore(&rp->b_lock, flags);
>   		mutex_unlock(&rp->fetch_lock);
>   		}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct  
> poll_table_struct *wait)
>   static void mon_bin_vma_open(struct vm_area_struct *vma)
>   {
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   static void mon_bin_vma_close(struct vm_area_struct *vma)
>   {
> +	unsigned long flags;
> +
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct  
> vm_fault *vmf)
>   	unsigned long offset, chunk_idx;
>   	struct page *pageptr;

> -	mutex_lock(&rp->fetch_lock);
>   	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>   		return VM_FAULT_SIGBUS;
> -	}
>   	chunk_idx = offset / CHUNK_SIZE;
>   	pageptr = rp->b_vec[chunk_idx].pg;
>   	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>   	vmf->page = pageptr;
>   	return 0;
>   }





* Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-22 22:13                   ` Alan Stern
  2019-11-22 22:13                     ` syzbot
@ 2019-11-22 22:13                     ` syzbot
  1 sibling, 0 replies; 25+ messages in thread
From: syzbot @ 2019-11-22 22:13 UTC (permalink / raw)
  To: Alan Stern
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, stern, syzkaller-bugs, tglx, viro, zaitcev

> On Fri, 22 Nov 2019, Pete Zaitcev wrote:

>> > It would be more elegant to do the rp->mmap_active test before calling
>> > kcalloc and mon_alloc_buf.  But of course that's a pretty minor thing.

>> Indeed it feels wrong that so much work gets discarded. However, memory
>> allocations can block, right? In the same time, our main objective here  
>> is
>> to make sure that when a page fault happens, we fill in the page that VMA
>> is intended to refer, and not one that was re-allocated. Therefore, I'm
>> trying to avoid a situation where:

>> 1. thread A checks mmap_active, finds it at zero and proceeds into the
>> reallocation ioctl
>> 2. thread A sleeps in get_free_page()
>> 3. thread B runs mmap() and succeeds
>> 4. thread A obtains its pages and proceeds to substitute the buffer
>> 5. thread B (or any other) pagefaults and ends with the new, unexpected  
>> page

>> The code is not pretty, but I don't see an alternative. Heck, I would
>> love you to find more races if you can.

> The alternative is to have the routines for mmap() hold fetch_lock
> instead of b_lock.  mmap() is allowed to sleep, so that would be okay.
> Then you would also hold fetch_lock while checking mmap_active and
> doing the memory allocations.  That would prevent any races -- in your
> example above, thread A would acquire fetch_lock in step 1, so thread B
> would block in step 3 until step 4 was finished.  Hence B would end up
> mapping the correct pages.

> In practice, I don't see this being a routine problem.  How often do
> multiple threads independently try to mmap the same usbmon buffer?

> Still, let's see syzbot reacts to your current patch.  The line below
> is how you ask syzbot to test a candidate patch.

> Alan Stern

> #syz test: linux-4.19.y f6e27dbb1afa

"linux-4.19.y" does not look like a valid git repo address.


> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600

>      usb: Fix a deadlock in usbmon between mmap and read

>      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
>      Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file,  
> unsigned int cmd, unsigned long arg

>   		mutex_lock(&rp->fetch_lock);
>   		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>   		spin_unlock_irqrestore(&rp->b_lock, flags);
>   		mutex_unlock(&rp->fetch_lock);
>   		}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct  
> poll_table_struct *wait)
>   static void mon_bin_vma_open(struct vm_area_struct *vma)
>   {
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   static void mon_bin_vma_close(struct vm_area_struct *vma)
>   {
> +	unsigned long flags;
> +
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct  
> vm_fault *vmf)
>   	unsigned long offset, chunk_idx;
>   	struct page *pageptr;

> -	mutex_lock(&rp->fetch_lock);
>   	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>   		return VM_FAULT_SIGBUS;
> -	}
>   	chunk_idx = offset / CHUNK_SIZE;
>   	pageptr = rp->b_vec[chunk_idx].pg;
>   	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>   	vmf->page = pageptr;
>   	return 0;
>   }





* Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-22 22:13                     ` syzbot
@ 2019-11-23 17:18                       ` Alan Stern
  2019-11-23 17:18                         ` syzbot
  2019-11-23 17:18                         ` Re: " syzbot
  0 siblings, 2 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-23 17:18 UTC (permalink / raw)
  To: syzbot
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, syzkaller-bugs, tglx, viro, zaitcev

On Fri, 22 Nov 2019, syzbot wrote:

> > #syz test: linux-4.19.y f6e27dbb1afa
> 
> "linux-4.19.y" does not look like a valid git repo address.

Let's try again.  The "git tree" value in the original bug report was
"upstream", so I'll use that even though it doesn't look like a valid 
git repo address either.

Alan Stern

#syz test: upstream f6e27dbb1afa

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

     usb: Fix a deadlock in usbmon between mmap and read

     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
     Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg

  		mutex_lock(&rp->fetch_lock);
  		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
  		spin_unlock_irqrestore(&rp->b_lock, flags);
  		mutex_unlock(&rp->fetch_lock);
  		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
  static void mon_bin_vma_open(struct vm_area_struct *vma)
  {
  	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
  	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
  }

  static void mon_bin_vma_close(struct vm_area_struct *vma)
  {
+	unsigned long flags;
+
  	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
  	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
  }

  /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
  	unsigned long offset, chunk_idx;
  	struct page *pageptr;

-	mutex_lock(&rp->fetch_lock);
  	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
  		return VM_FAULT_SIGBUS;
-	}
  	chunk_idx = offset / CHUNK_SIZE;
  	pageptr = rp->b_vec[chunk_idx].pg;
  	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
  	vmf->page = pageptr;
  	return 0;
  }




* Re: Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-23 17:18                       ` Alan Stern
@ 2019-11-23 17:18                         ` syzbot
  2019-11-24 15:59                           ` Alan Stern
  2019-11-23 17:18                         ` Re: " syzbot
  1 sibling, 1 reply; 25+ messages in thread
From: syzbot @ 2019-11-23 17:18 UTC (permalink / raw)
  To: Alan Stern
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, stern, syzkaller-bugs, tglx, viro, zaitcev

> On Fri, 22 Nov 2019, syzbot wrote:

>> > #syz test: linux-4.19.y f6e27dbb1afa

>> "linux-4.19.y" does not look like a valid git repo address.

> Let's try again.  The "git tree" value in the original bug report was
> "upstream", so I'll use that even though it doesn't look like a valid
> git repo address either.

> Alan Stern

> #syz test: upstream f6e27dbb1afa

"upstream" does not look like a valid git repo address.


> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600

>       usb: Fix a deadlock in usbmon between mmap and read

>       Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
>       Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file,  
> unsigned int cmd, unsigned long arg

>    		mutex_lock(&rp->fetch_lock);
>    		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>    		spin_unlock_irqrestore(&rp->b_lock, flags);
>    		mutex_unlock(&rp->fetch_lock);
>    		}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct  
> poll_table_struct *wait)
>    static void mon_bin_vma_open(struct vm_area_struct *vma)
>    {
>    	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>    	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>    }

>    static void mon_bin_vma_close(struct vm_area_struct *vma)
>    {
> +	unsigned long flags;
> +
>    	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>    	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>    }

>    /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct  
> vm_fault *vmf)
>    	unsigned long offset, chunk_idx;
>    	struct page *pageptr;

> -	mutex_lock(&rp->fetch_lock);
>    	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>    		return VM_FAULT_SIGBUS;
> -	}
>    	chunk_idx = offset / CHUNK_SIZE;
>    	pageptr = rp->b_vec[chunk_idx].pg;
>    	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>    	vmf->page = pageptr;
>    	return 0;
>    }




* Re: Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-23 17:18                       ` Alan Stern
  2019-11-23 17:18                         ` syzbot
@ 2019-11-23 17:18                         ` syzbot
  1 sibling, 0 replies; 25+ messages in thread
From: syzbot @ 2019-11-23 17:18 UTC (permalink / raw)
  To: Alan Stern
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart, linux-kernel,
	linux-usb, stern, syzkaller-bugs, tglx, viro, zaitcev

> On Fri, 22 Nov 2019, syzbot wrote:

>> > #syz test: linux-4.19.y f6e27dbb1afa

>> "linux-4.19.y" does not look like a valid git repo address.

> Let's try again.  The "git tree" value in the original bug report was
> "upstream", so I'll use that even though it doesn't look like a valid
> git repo address either.

> Alan Stern

> #syz test: upstream f6e27dbb1afa

"upstream" does not look like a valid git repo address.


> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600

>       usb: Fix a deadlock in usbmon between mmap and read

>       Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
>       Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file,  
> unsigned int cmd, unsigned long arg

>    		mutex_lock(&rp->fetch_lock);
>    		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>    		spin_unlock_irqrestore(&rp->b_lock, flags);
>    		mutex_unlock(&rp->fetch_lock);
>    		}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>    static void mon_bin_vma_open(struct vm_area_struct *vma)
>    {
>    	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>    	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>    }

>    static void mon_bin_vma_close(struct vm_area_struct *vma)
>    {
> +	unsigned long flags;
> +
>    	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>    	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>    }

>    /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>    	unsigned long offset, chunk_idx;
>    	struct page *pageptr;

> -	mutex_lock(&rp->fetch_lock);
>    	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>    		return VM_FAULT_SIGBUS;
> -	}
>    	chunk_idx = offset / CHUNK_SIZE;
>    	pageptr = rp->b_vec[chunk_idx].pg;
>    	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>    	vmf->page = pageptr;
>    	return 0;
>    }



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Re: Re: possible deadlock in mon_bin_vma_fault
  2019-11-23 17:18                         ` syzbot
@ 2019-11-24 15:59                           ` Alan Stern
  2019-11-24 19:10                             ` syzbot
  0 siblings, 1 reply; 25+ messages in thread
From: Alan Stern @ 2019-11-24 15:59 UTC (permalink / raw)
  To: syzbot, Andrey Konovalov
  Cc: arnd, gregkh, jrdr.linux, keescook, kstewart,
	Kernel development list, USB list, syzkaller-bugs, tglx, viro,
	zaitcev

On Sat, 23 Nov 2019, syzbot wrote:

> > On Fri, 22 Nov 2019, syzbot wrote:
> 
> >> > #syz test: linux-4.19.y f6e27dbb1afa
> 
> >> "linux-4.19.y" does not look like a valid git repo address.
> 
> > Let's try again.  The "git tree" value in the original bug report was
> > "upstream", so I'll use that even though it doesn't look like a valid
> > git repo address either.
> 
> > Alan Stern
> 
> > #syz test: upstream f6e27dbb1afa
> 
> "upstream" does not look like a valid git repo address.

Andrey, can you do something about that?  It would be a lot nicer if 
_all_ the syzbot output and records included an actual git repo address 
in the appropriate places.

Alan Stern

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v5.3

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

     usb: Fix a deadlock in usbmon between mmap and read

     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
     Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg

  		mutex_lock(&rp->fetch_lock);
  		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
  		spin_unlock_irqrestore(&rp->b_lock, flags);
  		mutex_unlock(&rp->fetch_lock);
  		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
  static void mon_bin_vma_open(struct vm_area_struct *vma)
  {
  	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
  	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
  }

  static void mon_bin_vma_close(struct vm_area_struct *vma)
  {
+	unsigned long flags;
+
  	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
  	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
  }

  /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
  	unsigned long offset, chunk_idx;
  	struct page *pageptr;

-	mutex_lock(&rp->fetch_lock);
  	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
  		return VM_FAULT_SIGBUS;
-	}
  	chunk_idx = offset / CHUNK_SIZE;
  	pageptr = rp->b_vec[chunk_idx].pg;
  	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
  	vmf->page = pageptr;
  	return 0;
  }




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: possible deadlock in mon_bin_vma_fault
  2019-11-24 15:59                           ` Alan Stern
@ 2019-11-24 19:10                             ` syzbot
  2019-11-24 20:55                               ` Alan Stern
  0 siblings, 1 reply; 25+ messages in thread
From: syzbot @ 2019-11-24 19:10 UTC (permalink / raw)
  To: andreyknvl, arnd, gregkh, jrdr.linux, keescook, kstewart,
	linux-kernel, linux-usb, stern, syzkaller-bugs, tglx, viro,
	zaitcev

Hello,

syzbot tried to test the proposed patch but build/boot failed:

failed to apply patch:
checking file drivers/usb/mon/mon_bin.c
patch: **** unexpected end of file in patch



Tested on:

commit:         4d856f72 Linux 5.3
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v5.3
dashboard link: https://syzkaller.appspot.com/bug?extid=56f9673bb4cdcbeb0e92
compiler:       gcc (GCC) 9.0.0 20181231 (experimental)
patch:          https://syzkaller.appspot.com/x/patch.diff?x=17a22f16e00000


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: possible deadlock in mon_bin_vma_fault
  2019-11-24 19:10                             ` syzbot
@ 2019-11-24 20:55                               ` Alan Stern
  2019-11-24 23:24                                 ` syzbot
  0 siblings, 1 reply; 25+ messages in thread
From: Alan Stern @ 2019-11-24 20:55 UTC (permalink / raw)
  To: syzbot
  Cc: andreyknvl, arnd, gregkh, jrdr.linux, keescook, kstewart,
	linux-kernel, linux-usb, syzkaller-bugs, tglx, viro, zaitcev

On Sun, 24 Nov 2019, syzbot wrote:

> Hello,
> 
> syzbot tried to test the proposed patch but build/boot failed:
> 
> failed to apply patch:
> checking file drivers/usb/mon/mon_bin.c
> patch: **** unexpected end of file in patch

One more try...

Alan Stern

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v5.3

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

     usb: Fix a deadlock in usbmon between mmap and read

     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
     Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }



^ permalink raw reply related	[flat|nested] 25+ messages in thread
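[Aside, not part of the original thread: a minimal user-space sketch of the behaviour the tested patch aims for. While the usbmon ring is mmapped, resizing it with MON_IOCT_RING_SIZE is meant to be refused with EBUSY instead of racing against mon_bin_vma_fault(). The device node name and the ioctl definition below are hand-copied assumptions based on drivers/usb/mon/mon_bin.c and the usbmon documentation; re-check them against the kernel you actually run.]

/*
 * Sketch only: open usbmon, map part of its ring, then try to resize
 * the ring while the mapping is still alive.  On a kernel carrying the
 * fix above, the resize is expected to fail with EBUSY.
 * Needs root and the usbmon module loaded.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ioctl.h>

#define MON_IOC_MAGIC		0x92			/* assumed, from mon_bin.c */
#define MON_IOCT_RING_SIZE	_IO(MON_IOC_MAGIC, 4)	/* assumed, from mon_bin.c */

int main(void)
{
	int fd = open("/dev/usbmon0", O_RDONLY);	/* bus 0 = all buses */
	if (fd < 0) {
		perror("open /dev/usbmon0");
		return 1;
	}

	void *buf = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* With the mapping in place, a resize should now be refused. */
	if (ioctl(fd, MON_IOCT_RING_SIZE, 128 * 1024) < 0)
		printf("resize refused while mapped: %s\n", strerror(errno));
	else
		printf("resize succeeded while mapped (fix not in effect?)\n");

	munmap(buf, 4096);
	close(fd);
	return 0;
}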

* Re: possible deadlock in mon_bin_vma_fault
  2019-11-24 20:55                               ` Alan Stern
@ 2019-11-24 23:24                                 ` syzbot
  2019-11-25  0:10                                   ` Pete Zaitcev
  0 siblings, 1 reply; 25+ messages in thread
From: syzbot @ 2019-11-24 23:24 UTC (permalink / raw)
  To: andreyknvl, arnd, gregkh, jrdr.linux, keescook, kstewart,
	linux-kernel, linux-usb, stern, syzkaller-bugs, tglx, viro,
	zaitcev

Hello,

syzbot has tested the proposed patch and the reproducer did not trigger crash:

Reported-and-tested-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

Tested on:

commit:         4d856f72 Linux 5.3
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v5.3
kernel config:  https://syzkaller.appspot.com/x/.config?x=86071634b2594991
dashboard link: https://syzkaller.appspot.com/bug?extid=56f9673bb4cdcbeb0e92
compiler:       gcc (GCC) 9.0.0 20181231 (experimental)
patch:          https://syzkaller.appspot.com/x/patch.diff?x=11ff3eeee00000

Note: testing is done by a robot and is best-effort only.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: possible deadlock in mon_bin_vma_fault
  2019-11-24 23:24                                 ` syzbot
@ 2019-11-25  0:10                                   ` Pete Zaitcev
  2019-11-25  2:12                                     ` Alan Stern
  0 siblings, 1 reply; 25+ messages in thread
From: Pete Zaitcev @ 2019-11-25  0:10 UTC (permalink / raw)
  Cc: andreyknvl, arnd, gregkh, jrdr.linux, keescook, kstewart,
	linux-kernel, linux-usb, stern, zaitcev, tglx, viro

On Sun, 24 Nov 2019 15:24:00 -0800
syzbot <syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com> wrote:

> syzbot has tested the proposed patch and the reproducer did not trigger crash:

Okay. Alan, what is the most appropriate tree for me to submit to now?
Does Greg have one?

Do you want Reviewed-by or something?

-- Pete


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: possible deadlock in mon_bin_vma_fault
  2019-11-25  0:10                                   ` Pete Zaitcev
@ 2019-11-25  2:12                                     ` Alan Stern
  0 siblings, 0 replies; 25+ messages in thread
From: Alan Stern @ 2019-11-25  2:12 UTC (permalink / raw)
  To: Pete Zaitcev
  Cc: andreyknvl, arnd, gregkh, jrdr.linux, keescook, kstewart,
	linux-kernel, linux-usb, tglx, viro

On Sun, 24 Nov 2019, Pete Zaitcev wrote:

> On Sun, 24 Nov 2019 15:24:00 -0800
> syzbot <syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com> wrote:
> 
> > syzbot has tested the proposed patch and the reproducer did not trigger  
> > crash:
> 
> Okay. Alan, what is the most appropriate tree for me to submit now?
> Does Greg have one?
> 
> Do you want Reviewed-by or something?

Send it to Greg.  Be sure to add a Fixes: line referencing the commit 
that syzbot found, and add a CC: <stable@vger.kernel.org> line.  You 
can also add:

Reviewed-by: Alan Stern <stern@rowland.harvard.edu>

Alan Stern


^ permalink raw reply	[flat|nested] 25+ messages in thread
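[Aside, not part of the original thread: purely as an illustration of the submission format described above, the resulting commit message trailers would look roughly like this. The Fixes: commit id is left as a placeholder for the commit identified by syzbot's bisection; the remaining lines are taken from the messages in this thread.]

     usb: Fix a deadlock in usbmon between mmap and read

     Fixes: <12-character id of the offending commit> ("<its subject line>")
     Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
     Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com
     Reviewed-by: Alan Stern <stern@rowland.harvard.edu>
     Cc: stable@vger.kernel.org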

end of thread, other threads:[~2019-11-25  2:12 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <0000000000006ad6030574fead2e@google.com>
2019-11-20 12:01 ` possible deadlock in mon_bin_vma_fault syzbot
2019-11-20 16:14   ` Alan Stern
2019-11-20 17:12     ` Pete Zaitcev
2019-11-20 18:47       ` Alan Stern
2019-11-21 14:48         ` Pete Zaitcev
2019-11-21 16:20           ` Alan Stern
2019-11-21 16:46             ` Pete Zaitcev
2019-11-21 23:38             ` Pete Zaitcev
2019-11-22  7:18               ` Dmitry Vyukov
2019-11-22 15:27               ` Alan Stern
2019-11-22 20:52                 ` Pete Zaitcev
2019-11-22 22:13                   ` Alan Stern
2019-11-22 22:13                     ` syzbot
2019-11-23 17:18                       ` Alan Stern
2019-11-23 17:18                         ` syzbot
2019-11-24 15:59                           ` Alan Stern
2019-11-24 19:10                             ` syzbot
2019-11-24 20:55                               ` Alan Stern
2019-11-24 23:24                                 ` syzbot
2019-11-25  0:10                                   ` Pete Zaitcev
2019-11-25  2:12                                     ` Alan Stern
2019-11-23 17:18                         ` Re: " syzbot
2019-11-22 22:13                     ` syzbot
2019-11-20 17:33     ` Pete Zaitcev
2019-11-20 18:18       ` Alan Stern
