From: syzbot <syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com>
To: Alan Stern <stern@rowland.harvard.edu>
Cc: arnd@arndb.de, gregkh@linuxfoundation.org, jrdr.linux@gmail.com,
	keescook@chromium.org, kstewart@linuxfoundation.org,
	linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org,
	stern@rowland.harvard.edu, syzkaller-bugs@googlegroups.com,
	tglx@linutronix.de, viro@zeniv.linux.org.uk, zaitcev@redhat.com
Subject: Re: Re: possible deadlock in mon_bin_vma_fault
Date: Fri, 22 Nov 2019 14:13:21 -0800	[thread overview]
Message-ID: <000000000000c789960597f6b88b@google.com> (raw)
In-Reply-To: <Pine.LNX.4.44L0.1911221700420.1511-100000@iolanthe.rowland.org>

> On Fri, 22 Nov 2019, Pete Zaitcev wrote:

>> > It would be more elegant to do the rp->mmap_active test before calling
>> > kcalloc and mon_alloc_buff.  But of course that's a pretty minor thing.

>> Indeed it feels wrong that so much work gets discarded. However, memory
>> allocations can block, right? At the same time, our main objective here is
>> to make sure that when a page fault happens, we fill in the page that the
>> VMA is intended to refer to, and not one that was re-allocated. Therefore,
>> I'm trying to avoid a situation where:

>> 1. thread A checks mmap_active, finds it at zero and proceeds into the
>> reallocation ioctl
>> 2. thread A sleeps in get_free_page()
>> 3. thread B runs mmap() and succeeds
>> 4. thread A obtains its pages and proceeds to substitute the buffer
>> 5. thread B (or any other) page-faults and ends up with the new,
>> unexpected page

>> The code is not pretty, but I don't see an alternative. Heck, I would
>> love you to find more races if you can.

> The alternative is to have the routines for mmap() hold fetch_lock
> instead of b_lock.  mmap() is allowed to sleep, so that would be okay.
> Then you would also hold fetch_lock while checking mmap_active and
> doing the memory allocations.  That would prevent any races -- in your
> example above, thread A would acquire fetch_lock in step 1, so thread B
> would block in step 3 until step 4 was finished.  Hence B would end up
> mapping the correct pages.
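
For illustration only, the arrangement Alan describes might look roughly like
the sketch below (paraphrased against drivers/usb/mon/mon_bin.c, not taken
from any patch posted in this thread):

  /*
   * Sketch: guard mmap_active with fetch_lock instead of b_lock, and do the
   * mmap_active check, the allocations, and the buffer swap all under
   * fetch_lock.  Thread A in step 1 above then already owns fetch_lock, so
   * thread B's mmap() in step 3 blocks until the swap is finished.
   */
  static void mon_bin_vma_open(struct vm_area_struct *vma)
  {
          struct mon_reader_bin *rp = vma->vm_private_data;

          mutex_lock(&rp->fetch_lock);    /* process context, may sleep */
          rp->mmap_active++;
          mutex_unlock(&rp->fetch_lock);
  }

  /* ...and in the MON_IOCT_RING_SIZE handler, roughly: */
          mutex_lock(&rp->fetch_lock);
          if (rp->mmap_active) {
                  mutex_unlock(&rp->fetch_lock);
                  return -EBUSY;
          }
          vec = kcalloc(size / CHUNK_SIZE, sizeof(struct mon_pgmap), GFP_KERNEL);
          /* ... mon_alloc_buff(), then swap rp->b_vec / rp->b_size ... */
          mutex_unlock(&rp->fetch_lock);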

> In practice, I don't see this being a routine problem.  How often do
> multiple threads independently try to mmap the same usbmon buffer?

> Still, let's see how syzbot reacts to your current patch.  The line below
> is how you ask syzbot to test a candidate patch.

> Alan Stern

> #syz test: linux-4.19.y f6e27dbb1afa

"linux-4.19.y" does not look like a valid git repo address.


> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@kotori.zaitcev.us>
> Date:   Thu Nov 21 17:24:00 2019 -0600

>      usb: Fix a deadlock in usbmon between mmap and read

>      Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
>      Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com

> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg

>   		mutex_lock(&rp->fetch_lock);
>   		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec  = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec  = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>   		spin_unlock_irqrestore(&rp->b_lock, flags);
>   		mutex_unlock(&rp->fetch_lock);
>   		}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>   static void mon_bin_vma_open(struct vm_area_struct *vma)
>   {
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   static void mon_bin_vma_close(struct vm_area_struct *vma)
>   {
> +	unsigned long flags;
> +
>   	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>   	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>   }

>   /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>   	unsigned long offset, chunk_idx;
>   	struct page *pageptr;

> -	mutex_lock(&rp->fetch_lock);
>   	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>   		return VM_FAULT_SIGBUS;
> -	}
>   	chunk_idx = offset / CHUNK_SIZE;
>   	pageptr = rp->b_vec[chunk_idx].pg;
>   	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>   	vmf->page = pageptr;
>   	return 0;
>   }
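
The last hunk is the one that actually breaks the reported deadlock: the fault
handler no longer takes fetch_lock while the faulting task already holds
mmap_sem.  A rough sketch of the lock inversion, reconstructed from the patch
rather than quoted from the lockdep report:

  /*
   * read() / MON_IOCX_GET path            page-fault path
   * ----------------------------          ----------------------------
   * mutex_lock(&rp->fetch_lock)           down_read(&mm->mmap_sem)
   * copy_to_user() faults                 mon_bin_vma_fault()
   *   -> down_read(&mm->mmap_sem)           -> mutex_lock(&rp->fetch_lock)
   *                                              (removed by the last hunk)
   *
   * With fetch_lock gone from the fault path, the remaining concern is a
   * fault racing the ioctl's buffer swap, which the mmap_active check in
   * the first hunk now refuses with -EBUSY.
   */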


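A minimal user-space check of the new -EBUSY behaviour in the first hunk could
look like the program below. The device path and the locally defined ioctl
number are assumptions on my part (there is no exported usbmon header as far
as I know, so the constants mirror drivers/usb/mon/mon_bin.c):

  #include <errno.h>
  #include <fcntl.h>
  #include <linux/ioctl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define MON_IOC_MAGIC      0x92
  #define MON_IOCT_RING_SIZE _IO(MON_IOC_MAGIC, 4)   /* mirrors mon_bin.c */

  int main(void)
  {
          /* "usbmon0" is the all-buses node; opening it usually needs root. */
          int fd = open("/dev/usbmon0", O_RDONLY);
          if (fd < 0) {
                  perror("open /dev/usbmon0");
                  return 1;
          }

          /* Map part of the ring buffer, as a binary-API tracer would. */
          void *buf = mmap(NULL, 64 * 1024, PROT_READ, MAP_SHARED, fd, 0);
          if (buf == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          /* With the patch applied, resizing while mapped is refused. */
          if (ioctl(fd, MON_IOCT_RING_SIZE, 128 * 1024) < 0)
                  printf("resize refused: %s (EBUSY expected)\n",
                         strerror(errno));
          else
                  printf("resize succeeded (kernel without the patch?)\n");

          munmap(buf, 64 * 1024);
          close(fd);
          return 0;
  }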



Thread overview: 26+ messages
2018-09-03 22:01 possible deadlock in mon_bin_vma_fault syzbot
2019-11-20 12:01 ` syzbot
2019-11-20 16:14   ` Alan Stern
2019-11-20 17:12     ` Pete Zaitcev
2019-11-20 18:47       ` Alan Stern
2019-11-21 14:48         ` Pete Zaitcev
2019-11-21 16:20           ` Alan Stern
2019-11-21 16:46             ` Pete Zaitcev
2019-11-21 23:38             ` Pete Zaitcev
2019-11-22  7:18               ` Dmitry Vyukov
2019-11-22 15:27               ` Alan Stern
2019-11-22 20:52                 ` Pete Zaitcev
2019-11-22 22:13                   ` Alan Stern
2019-11-22 22:13                     ` syzbot [this message]
2019-11-23 17:18                       ` Alan Stern
2019-11-23 17:18                         ` syzbot
2019-11-24 15:59                           ` Alan Stern
2019-11-24 19:10                             ` syzbot
2019-11-24 20:55                               ` Alan Stern
2019-11-24 23:24                                 ` syzbot
2019-11-25  0:10                                   ` Pete Zaitcev
2019-11-25  2:12                                     ` Alan Stern
2019-11-23 17:18                         ` Re: " syzbot
2019-11-22 22:13                     ` syzbot
2019-11-20 17:33     ` Pete Zaitcev
2019-11-20 18:18       ` Alan Stern
