* [PATCH net] vhost: fix OOB in get_rx_bufs()
@ 2019-01-28  7:05 Jason Wang
  2019-01-29  6:30 ` Stefan Hajnoczi
  2019-01-29  6:54 ` David Miller
  0 siblings, 2 replies; 8+ messages in thread
From: Jason Wang @ 2019-01-28  7:05 UTC (permalink / raw)
  To: mst, jasowang, stefanha; +Cc: kvm, virtualization, netdev, linux-kernel

Since batched used ring updating was introduced in commit e2b3b35eb989
("vhost_net: batch used ring update in rx"), we tend to batch heads in
vq->heads for more than one packet. But the quota passed to
get_rx_bufs() was not correctly limited, which can result in an OOB
write in vq->heads.

        headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
                    vhost_len, &in, vq_log, &log,
                    likely(mergeable) ? UIO_MAXIOV : 1);

UIO_MAXIOV was still used, which is wrong since there may already be
batched used heads in vq->heads. This will cause an OOB write if the
next buffer needs more than 960 (1024 (UIO_MAXIOV) - 64
(VHOST_NET_BATCH)) heads after we've batched 64 (VHOST_NET_BATCH)
heads:

=============================================================================
BUG kmalloc-8k (Tainted: G    B            ): Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0x00000000fd93b7a2-0x00000000f0713384. First byte 0xa9 instead of 0xcc
INFO: Allocated in alloc_pd+0x22/0x60 age=3933677 cpu=2 pid=2674
    kmem_cache_alloc_trace+0xbb/0x140
    alloc_pd+0x22/0x60
    gen8_ppgtt_create+0x11d/0x5f0
    i915_ppgtt_create+0x16/0x80
    i915_gem_create_context+0x248/0x390
    i915_gem_context_create_ioctl+0x4b/0xe0
    drm_ioctl_kernel+0xa5/0xf0
    drm_ioctl+0x2ed/0x3a0
    do_vfs_ioctl+0x9f/0x620
    ksys_ioctl+0x6b/0x80
    __x64_sys_ioctl+0x11/0x20
    do_syscall_64+0x43/0xf0
    entry_SYSCALL_64_after_hwframe+0x44/0xa9
INFO: Slab 0x00000000d13e87af objects=3 used=3 fp=0x          (null) flags=0x200000000010201
INFO: Object 0x0000000003278802 @offset=17064 fp=0x00000000e2e6652b

Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH entries for
vhost-net. This is done by passing the limit through vhost_dev_init(),
so that set_owner can allocate the arrays in a per-device manner.
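
For illustration only (not part of the patch), a minimal standalone
sketch of the worst-case index arithmetic, assuming up to
VHOST_NET_BATCH heads are already batched in vq->heads when a quota of
UIO_MAXIOV is passed to get_rx_bufs():

        /* userspace sketch, not kernel code */
        #include <stdio.h>

        #define UIO_MAXIOV      1024
        #define VHOST_NET_BATCH 64

        int main(void)
        {
                int done_idx   = VHOST_NET_BATCH;  /* heads already batched */
                int headcount  = UIO_MAXIOV;       /* worst-case quota */
                int last_index = done_idx + headcount - 1;

                /* 1087 overflows the old UIO_MAXIOV-sized vq->heads but
                 * fits in UIO_MAXIOV + VHOST_NET_BATCH (1088) entries.
                 */
                printf("last index written: %d\n", last_index);
                return 0;
        }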

This fixes CVE-2018-16880.

Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c   | 3 ++-
 drivers/vhost/scsi.c  | 2 +-
 drivers/vhost/vhost.c | 7 ++++---
 drivers/vhost/vhost.h | 4 +++-
 drivers/vhost/vsock.c | 2 +-
 5 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index bca86bf7189f..df51a35cf537 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1337,7 +1337,8 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		n->vqs[i].rx_ring = NULL;
 		vhost_net_buf_init(&n->vqs[i].rxq);
 	}
-	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX);
+	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX,
+		       UIO_MAXIOV + VHOST_NET_BATCH);
 
 	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev);
 	vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev);
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 344684f3e2e4..23593cb23dd0 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1627,7 +1627,7 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 		vqs[i] = &vs->vqs[i].vq;
 		vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick;
 	}
-	vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ);
+	vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ, UIO_MAXIOV);
 
 	vhost_scsi_init_inflight(vs, NULL);
 
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 15a216cdd507..24a129fcdd61 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -390,9 +390,9 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
 		vq->indirect = kmalloc_array(UIO_MAXIOV,
 					     sizeof(*vq->indirect),
 					     GFP_KERNEL);
-		vq->log = kmalloc_array(UIO_MAXIOV, sizeof(*vq->log),
+		vq->log = kmalloc_array(dev->iov_limit, sizeof(*vq->log),
 					GFP_KERNEL);
-		vq->heads = kmalloc_array(UIO_MAXIOV, sizeof(*vq->heads),
+		vq->heads = kmalloc_array(dev->iov_limit, sizeof(*vq->heads),
 					  GFP_KERNEL);
 		if (!vq->indirect || !vq->log || !vq->heads)
 			goto err_nomem;
@@ -414,7 +414,7 @@ static void vhost_dev_free_iovecs(struct vhost_dev *dev)
 }
 
 void vhost_dev_init(struct vhost_dev *dev,
-		    struct vhost_virtqueue **vqs, int nvqs)
+		    struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
 {
 	struct vhost_virtqueue *vq;
 	int i;
@@ -427,6 +427,7 @@ void vhost_dev_init(struct vhost_dev *dev,
 	dev->iotlb = NULL;
 	dev->mm = NULL;
 	dev->worker = NULL;
+	dev->iov_limit = iov_limit;
 	init_llist_head(&dev->work_list);
 	init_waitqueue_head(&dev->wait);
 	INIT_LIST_HEAD(&dev->read_list);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 1b675dad5e05..9490e7ddb340 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -170,9 +170,11 @@ struct vhost_dev {
 	struct list_head read_list;
 	struct list_head pending_list;
 	wait_queue_head_t wait;
+	int iov_limit;
 };
 
-void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs);
+void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs,
+		    int nvqs, int iov_limit);
 long vhost_dev_set_owner(struct vhost_dev *dev);
 bool vhost_dev_has_owner(struct vhost_dev *dev);
 long vhost_dev_check_owner(struct vhost_dev *);
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 3fbc068eaa9b..bb5fc0e9fbc2 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -531,7 +531,7 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 	vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
 	vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick;
 
-	vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs));
+	vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs), UIO_MAXIOV);
 
 	file->private_data = vsock;
 	spin_lock_init(&vsock->send_pkt_list_lock);
-- 
2.17.1


* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-28  7:05 [PATCH net] vhost: fix OOB in get_rx_bufs() Jason Wang
@ 2019-01-29  6:30 ` Stefan Hajnoczi
  2019-01-29  6:54 ` David Miller
  1 sibling, 0 replies; 8+ messages in thread
From: Stefan Hajnoczi @ 2019-01-29  6:30 UTC (permalink / raw)
  To: Jason Wang; +Cc: mst, kvm, virtualization, netdev, linux-kernel

On Mon, Jan 28, 2019 at 03:05:05PM +0800, Jason Wang wrote:
> Since batched used ring updating was introduced in commit e2b3b35eb989
> ("vhost_net: batch used ring update in rx"), we tend to batch heads in
> vq->heads for more than one packet. But the quota passed to
> get_rx_bufs() was not correctly limited, which can result in an OOB
> write in vq->heads.
> 
>         headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
>                     vhost_len, &in, vq_log, &log,
>                     likely(mergeable) ? UIO_MAXIOV : 1);
> 
> UIO_MAXIOV was still used, which is wrong since there may already be
> batched used heads in vq->heads. This will cause an OOB write if the
> next buffer needs more than 960 (1024 (UIO_MAXIOV) - 64
> (VHOST_NET_BATCH)) heads after we've batched 64 (VHOST_NET_BATCH)
> heads:
> 
> =============================================================================
> BUG kmalloc-8k (Tainted: G    B            ): Redzone overwritten
> -----------------------------------------------------------------------------
> 
> INFO: 0x00000000fd93b7a2-0x00000000f0713384. First byte 0xa9 instead of 0xcc
> INFO: Allocated in alloc_pd+0x22/0x60 age=3933677 cpu=2 pid=2674
>     kmem_cache_alloc_trace+0xbb/0x140
>     alloc_pd+0x22/0x60
>     gen8_ppgtt_create+0x11d/0x5f0
>     i915_ppgtt_create+0x16/0x80
>     i915_gem_create_context+0x248/0x390
>     i915_gem_context_create_ioctl+0x4b/0xe0
>     drm_ioctl_kernel+0xa5/0xf0
>     drm_ioctl+0x2ed/0x3a0
>     do_vfs_ioctl+0x9f/0x620
>     ksys_ioctl+0x6b/0x80
>     __x64_sys_ioctl+0x11/0x20
>     do_syscall_64+0x43/0xf0
>     entry_SYSCALL_64_after_hwframe+0x44/0xa9
> INFO: Slab 0x00000000d13e87af objects=3 used=3 fp=0x          (null) flags=0x200000000010201
> INFO: Object 0x0000000003278802 @offset=17064 fp=0x00000000e2e6652b
> 
> Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH entries for
> vhost-net. This is done by passing the limit through vhost_dev_init(),
> so that set_owner can allocate the arrays in a per-device manner.
> 
> This fixes CVE-2018-16880.
> 
> Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  drivers/vhost/net.c   | 3 ++-
>  drivers/vhost/scsi.c  | 2 +-
>  drivers/vhost/vhost.c | 7 ++++---
>  drivers/vhost/vhost.h | 4 +++-
>  drivers/vhost/vsock.c | 2 +-
>  5 files changed, 11 insertions(+), 7 deletions(-)

No change in the scsi and vsock cases.  I haven't reviewed the net case.

Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-28  7:05 [PATCH net] vhost: fix OOB in get_rx_bufs() Jason Wang
  2019-01-29  6:30 ` Stefan Hajnoczi
@ 2019-01-29  6:54 ` David Miller
  2019-01-29 22:54   ` Michael S. Tsirkin
  1 sibling, 1 reply; 8+ messages in thread
From: David Miller @ 2019-01-29  6:54 UTC (permalink / raw)
  To: jasowang; +Cc: mst, stefanha, kvm, virtualization, netdev, linux-kernel

From: Jason Wang <jasowang@redhat.com>
Date: Mon, 28 Jan 2019 15:05:05 +0800

> Since batched used ring updating was introduced in commit e2b3b35eb989
> ("vhost_net: batch used ring update in rx"), we tend to batch heads in
> vq->heads for more than one packet. But the quota passed to
> get_rx_bufs() was not correctly limited, which can result in an OOB
> write in vq->heads.
> 
>         headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
>                     vhost_len, &in, vq_log, &log,
>                     likely(mergeable) ? UIO_MAXIOV : 1);
> 
> UIO_MAXIOV was still used, which is wrong since there may already be
> batched used heads in vq->heads. This will cause an OOB write if the
> next buffer needs more than 960 (1024 (UIO_MAXIOV) - 64
> (VHOST_NET_BATCH)) heads after we've batched 64 (VHOST_NET_BATCH)
> heads:
 ...
> Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH entries for
> vhost-net. This is done by passing the limit through vhost_dev_init(),
> so that set_owner can allocate the arrays in a per-device manner.
> 
> This fixes CVE-2018-16880.
> 
> Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Applied and queued up for -stable, thanks!

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-29  6:54 ` David Miller
@ 2019-01-29 22:54   ` Michael S. Tsirkin
  2019-01-29 23:10     ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2019-01-29 22:54 UTC (permalink / raw)
  To: David Miller
  Cc: jasowang, stefanha, kvm, virtualization, netdev, linux-kernel

On Mon, Jan 28, 2019 at 10:54:44PM -0800, David Miller wrote:
> From: Jason Wang <jasowang@redhat.com>
> Date: Mon, 28 Jan 2019 15:05:05 +0800
> 
> > Since batched used ring updating was introduced in commit e2b3b35eb989
> > ("vhost_net: batch used ring update in rx"), we tend to batch heads in
> > vq->heads for more than one packet. But the quota passed to
> > get_rx_bufs() was not correctly limited, which can result in an OOB
> > write in vq->heads.
> > 
> >         headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
> >                     vhost_len, &in, vq_log, &log,
> >                     likely(mergeable) ? UIO_MAXIOV : 1);
> > 
> > UIO_MAXIOV was still used, which is wrong since there may already be
> > batched used heads in vq->heads. This will cause an OOB write if the
> > next buffer needs more than 960 (1024 (UIO_MAXIOV) - 64
> > (VHOST_NET_BATCH)) heads after we've batched 64 (VHOST_NET_BATCH)
> > heads:
>  ...
> > Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH entries for
> > vhost-net. This is done by passing the limit through vhost_dev_init(),
> > so that set_owner can allocate the arrays in a per-device manner.
> > 
> > This fixes CVE-2018-16880.
> > 
> > Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> 
> Applied and queued up for -stable, thanks!

Wow, it seems we are down to a turnaround of hours from post to queue.
It would be hard to keep up that rate generally.
However, I am guessing this was already in downstream trees, and it's a
CVE, so I guess it's a no-brainer and review wasn't really necessary -
was that the idea? Just checking.

-- 
MST

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-29 22:54   ` Michael S. Tsirkin
@ 2019-01-29 23:10     ` David Miller
  2019-01-29 23:38       ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2019-01-29 23:10 UTC (permalink / raw)
  To: mst; +Cc: jasowang, stefanha, kvm, virtualization, netdev, linux-kernel

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Tue, 29 Jan 2019 17:54:44 -0500

> On Mon, Jan 28, 2019 at 10:54:44PM -0800, David Miller wrote:
>> From: Jason Wang <jasowang@redhat.com>
>> Date: Mon, 28 Jan 2019 15:05:05 +0800
>> 
>> > Since batched used ring updating was introduced in commit e2b3b35eb989
>> > ("vhost_net: batch used ring update in rx"), we tend to batch heads in
>> > vq->heads for more than one packet. But the quota passed to
>> > get_rx_bufs() was not correctly limited, which can result in an OOB
>> > write in vq->heads.
>> > 
>> >         headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
>> >                     vhost_len, &in, vq_log, &log,
>> >                     likely(mergeable) ? UIO_MAXIOV : 1);
>> > 
>> > UIO_MAXIOV was still used, which is wrong since there may already be
>> > batched used heads in vq->heads. This will cause an OOB write if the
>> > next buffer needs more than 960 (1024 (UIO_MAXIOV) - 64
>> > (VHOST_NET_BATCH)) heads after we've batched 64 (VHOST_NET_BATCH)
>> > heads:
>>  ...
>> > Fix this by allocating UIO_MAXIOV + VHOST_NET_BATCH entries for
>> > vhost-net. This is done by passing the limit through vhost_dev_init(),
>> > so that set_owner can allocate the arrays in a per-device manner.
>> > 
>> > This fixes CVE-2018-16880.
>> > 
>> > Fixes: e2b3b35eb989 ("vhost_net: batch used ring update in rx")
>> > Signed-off-by: Jason Wang <jasowang@redhat.com>
>> 
>> Applied and queued up for -stable, thanks!
> 
> Wow, it seems we are down to a turnaround of hours from post to queue.
> It would be hard to keep up that rate generally.
> However, I am guessing this was already in downstream trees, and it's a
> CVE, so I guess it's a no-brainer and review wasn't really necessary -
> was that the idea? Just checking.

Yeah the CVE pushed my hand a little bit, and I knew I was going to send Linus
a pull request today because David Watson needs some TLS changes in net-next.

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-29 23:10     ` David Miller
@ 2019-01-29 23:38       ` David Miller
  2019-01-30  1:36         ` Michael S. Tsirkin
  0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2019-01-29 23:38 UTC (permalink / raw)
  To: mst; +Cc: jasowang, stefanha, kvm, virtualization, netdev, linux-kernel

From: David Miller <davem@davemloft.net>
Date: Tue, 29 Jan 2019 15:10:26 -0800 (PST)

> Yeah the CVE pushed my hand a little bit, and I knew I was going to
> send Linus a pull request today because David Watson needs some TLS
> changes in net-next.

I also want to make a general comment.... for the record.

If I let patches slip consistently past 24 hours my backlog is
unmanageable.  Even with aggressively applying things quickly I'm
right now at 70-75.  If I do not do what I am doing, then it's in the
100-150 range.

So I am at the point where I often must move forward with patches that
I think I personally can verify and vet on my own.

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-29 23:38       ` David Miller
@ 2019-01-30  1:36         ` Michael S. Tsirkin
  2019-01-30 22:31           ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2019-01-30  1:36 UTC (permalink / raw)
  To: David Miller
  Cc: jasowang, stefanha, kvm, virtualization, netdev, linux-kernel

On Tue, Jan 29, 2019 at 03:38:10PM -0800, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Tue, 29 Jan 2019 15:10:26 -0800 (PST)
> 
> > Yeah the CVE pushed my hand a little bit, and I knew I was going to
> > send Linus a pull request today because David Watson needs some TLS
> > changes in net-next.
> 
> I also want to make a general comment.... for the record.
> 
> If I let patches slip consistently past 24 hours my backlog is
> unmanageable.  Even with aggressively applying things quickly I'm
> right now at 70-75.  If I do not do what I am doing, then it's in the
> 100-150 range.
> 
> So I am at the point where I often must move forward with patches that
> I think I personally can verify and vet on my own.

If it helps I can include most virtio stuff in my pull requests instead.
Or if that can't work since there's too often a dependency on net-next,
maybe Jason wants to create a tree and send pull requests to you.  Let
us know if that will help, and which of the options looks better from
your POV.

-- 
MST

* Re: [PATCH net] vhost: fix OOB in get_rx_bufs()
  2019-01-30  1:36         ` Michael S. Tsirkin
@ 2019-01-30 22:31           ` David Miller
  0 siblings, 0 replies; 8+ messages in thread
From: David Miller @ 2019-01-30 22:31 UTC (permalink / raw)
  To: mst; +Cc: jasowang, stefanha, kvm, virtualization, netdev, linux-kernel

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Tue, 29 Jan 2019 20:36:31 -0500

> If it helps I can include most virtio stuff in my pull requests instead.
> Or if that can't work since there's too often a dependency on net-next,
> maybe Jason wants to create a tree and send pull requests to you.  Let
> us know if that will help, and which of the options looks better from
> your POV.

Thanks for offering Michael, I really appreciate it.

Let me think about the logistics of that and how it may or may not
help me with my backlog.
