* [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping
@ 2022-01-17  4:12 Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Hey Stefan, Marc-Andre, MST, David -

As promised, here is a series cleaning up error handling in the
libvhost-user memory mapping path. Most of these cleanups are
straightforward and have been discussed on the mailing list in threads
[1] and [2].

[1] https://lore.kernel.org/qemu-devel/20211018143319.GA11006@raphael-debian-dev/
[2] https://lore.kernel.org/qemu-devel/9391f500-70be-26cf-bcfc-591d3ee84d4e@redhat.com/

Changes since V1:
 * Checks for a single fd in vu_add_mem_reg and vu_rem_mem_reg return false
   instead of true.
 * Check for over-running max RAM slots in vu_add_mem_reg returns false
   instead of true.
 * vu_rem_mem_reg unmaps all matching regions.
 * Decrement iterator variable when looping through regions in
   vu_rem_mem_reg to ensure matching regions aren’t missed (see the sketch
   after this list).
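
(An illustrative sketch with a hypothetical helper, not from the series:
after an element is removed by shifting the tail down, the entry that was
at index i + 1 now sits at index i, so the same index must be examined
again or a second match can be skipped.)

    #include <string.h>   /* memmove */

    /* Remove every element equal to 'match' from arr[0..*len), in place. */
    static void remove_matching(int *arr, int *len, int match)
    {
        int i;

        for (i = 0; i < *len; i++) {
            if (arr[i] == match) {
                memmove(&arr[i], &arr[i + 1],
                        sizeof(arr[0]) * (*len - i - 1));
                (*len)--;
                i--;   /* re-check the slot that just shifted into index i */
            }
        }
    }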

Changes since V2:
 * Fixed FD leaks on all input validation failures
 * Added comment David suggested to explain removing duplicate regions
 * Added David’s patch to close message FDs on VHOST_USER_REM_MEM_REG
 * Expanded commit message for patches checking FD numbers
 * Fixed vmsg->size <= sizeof(vmsg->payload.memreg) validation check
 * Improved error message when a backend has no free slots
 * Improved error messages when the backend receives invalid vmsg->fd_num
   and/or vmsg->size

Dropped R-b tags due to non-trivial changes.

Thanks,
Raphael

David Hildenbrand (2):
  libvhost-user: Simplify VHOST_USER_REM_MEM_REG
  libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd

Raphael Norwitz (4):
  libvhost-user: Add vu_rem_mem_reg input validation
  libvhost-user: Add vu_add_mem_reg input validation
  libvhost-user: prevent over-running max RAM slots
  libvhost-user: handle removal of identical regions

 subprojects/libvhost-user/libvhost-user.c | 76 ++++++++++++++++++-----
 subprojects/libvhost-user/libvhost-user.h |  2 +
 2 files changed, 61 insertions(+), 17 deletions(-)

-- 
2.20.1


* [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  8:19   ` David Hildenbrand
  2022-01-17  4:12 ` [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Today if multiple FDs are sent from the VMM to the backend in a
VHOST_USER_REM_MEM_REG message, one FD will be unmapped and the remaining
FDs will be leaked. Therefore if multiple FDs are sent we report an
error and fail the operation, closing all FDs in the message.

Likewise in case the VMM sends a message with a size less than that of a
memory region descriptor, we add a check to gracefully report an error
and fail the operation rather than crashing.
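
(For context -- abridged from libvhost-user.h, not part of this patch;
these are the VhostUserMsg fields the new checks validate, with the other
payload variants elided; see the header for the full definition:)

    typedef struct VhostUserMsg {
        int request;
        uint32_t flags;
        uint32_t size;          /* size of the payload that follows */
        union {
            VhostUserMemRegMsg memreg;   /* carries the region descriptor */
            /* ... other payload variants elided ... */
        } payload;
        int fds[VHOST_MEMORY_BASELINE_NREGIONS];
        int fd_num;             /* number of fds received with the message */
        uint8_t *data;
    } VhostUserMsg;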

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 15 +++++++++++++++
 subprojects/libvhost-user/libvhost-user.h |  2 ++
 2 files changed, 17 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 787f4d2d4f..b09b1c269e 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -801,6 +801,21 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
 
+    if (vmsg->fd_num != 1) {
+        vmsg_close_fds(vmsg);
+        vu_panic(dev, "VHOST_USER_REM_MEM_REG received %d fds - only 1 fd "
+                      "should be sent for this message type", vmsg->fd_num);
+        return false;
+    }
+
+    if (vmsg->size < VHOST_USER_MEM_REG_SIZE) {
+        close(vmsg->fds[0]);
+        vu_panic(dev, "VHOST_USER_REM_MEM_REG requires a message size of at "
+                      "least %zu bytes and only %d bytes were received",
+                      VHOST_USER_MEM_REG_SIZE, vmsg->size);
+        return false;
+    }
+
     DPRINT("Removing region:\n");
     DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
            msg_region->guest_phys_addr);
diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
index 3d13dfadde..cde9f07bb3 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -129,6 +129,8 @@ typedef struct VhostUserMemoryRegion {
     uint64_t mmap_offset;
 } VhostUserMemoryRegion;
 
+#define VHOST_USER_MEM_REG_SIZE (sizeof(VhostUserMemoryRegion))
+
 typedef struct VhostUserMemory {
     uint32_t nregions;
     uint32_t padding;
-- 
2.20.1



* [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg input validation
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  8:19   ` David Hildenbrand
  2022-01-17  4:12 ` [PATCH v3 3/6] libvhost-user: Simplify VHOST_USER_REM_MEM_REG Raphael Norwitz
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Today if multiple FDs are sent from the VMM to the backend in a
VHOST_USER_ADD_MEM_REG message, one FD will be mapped and the remaining
FDs will be leaked. Therefore if multiple FDs are sent we report an
error and fail the operation, closing all FDs in the message.

Likewise in case the VMM sends a message with a size less than that
of a memory region descriptor, we add a check to gracefully report an
error and fail the operation rather than crashing.

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index b09b1c269e..1a8fc9d600 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -690,6 +690,21 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VuDevRegion *dev_region = &dev->regions[dev->nregions];
     void *mmap_addr;
 
+    if (vmsg->fd_num != 1) {
+        vmsg_close_fds(vmsg);
+        vu_panic(dev, "VHOST_USER_ADD_MEM_REG received %d fds - only 1 fd "
+                      "should be sent for this message type", vmsg->fd_num);
+        return false;
+    }
+
+    if (vmsg->size < VHOST_USER_MEM_REG_SIZE) {
+        close(vmsg->fds[0]);
+        vu_panic(dev, "VHOST_USER_ADD_MEM_REG requires a message size of at "
+                      "least %zu bytes and only %d bytes were received",
+                      VHOST_USER_MEM_REG_SIZE, vmsg->size);
+        return false;
+    }
+
     /*
      * If we are in postcopy mode and we receive a u64 payload with a 0 value
      * we know all the postcopy client bases have been received, and we
-- 
2.20.1



* [PATCH v3 3/6] libvhost-user: Simplify VHOST_USER_REM_MEM_REG
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 4/6] libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd Raphael Norwitz
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

From: David Hildenbrand <david@redhat.com>

Let's avoid having to manually copy all elements. Copy only the ones
necessary to close the hole and perform the operation in-place without
a second array.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 30 +++++++++++------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 1a8fc9d600..7dd8e918b4 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -811,10 +811,8 @@ static inline bool reg_equal(VuDevRegion *vudev_reg,
 
 static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
-    int i, j;
-    bool found = false;
-    VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
+    int i;
 
     if (vmsg->fd_num != 1) {
         vmsg_close_fds(vmsg);
@@ -841,28 +839,28 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     DPRINT("    mmap_offset      0x%016"PRIx64"\n",
            msg_region->mmap_offset);
 
-    for (i = 0, j = 0; i < dev->nregions; i++) {
-        if (!reg_equal(&dev->regions[i], msg_region)) {
-            shadow_regions[j].gpa = dev->regions[i].gpa;
-            shadow_regions[j].size = dev->regions[i].size;
-            shadow_regions[j].qva = dev->regions[i].qva;
-            shadow_regions[j].mmap_addr = dev->regions[i].mmap_addr;
-            shadow_regions[j].mmap_offset = dev->regions[i].mmap_offset;
-            j++;
-        } else {
-            found = true;
+    for (i = 0; i < dev->nregions; i++) {
+        if (reg_equal(&dev->regions[i], msg_region)) {
             VuDevRegion *r = &dev->regions[i];
             void *m = (void *) (uintptr_t) r->mmap_addr;
 
             if (m) {
                 munmap(m, r->size + r->mmap_offset);
             }
+
+            break;
         }
     }
 
-    if (found) {
-        memcpy(dev->regions, shadow_regions,
-               sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS);
+    if (i < dev->nregions) {
+        /*
+         * Shift all affected entries by 1 to close the hole at index i and
+         * zero out the last entry.
+         */
+        memmove(dev->regions + i, dev->regions + i + 1,
+               sizeof(VuDevRegion) * (dev->nregions - i - 1));
+        memset(dev->regions + dev->nregions - 1, 0,
+               sizeof(VuDevRegion));
         DPRINT("Successfully removed a region\n");
         dev->nregions--;
         vmsg_set_reply_u64(vmsg, 0);
-- 
2.20.1



* [PATCH v3 4/6] libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (2 preceding siblings ...)
  2022-01-17  4:12 ` [PATCH v3 3/6] libvhost-user: Simplify VHOST_USER_REM_MEM_REG Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
  2022-01-17  4:12 ` [PATCH v3 6/6] libvhost-user: handle removal of identical regions Raphael Norwitz
  5 siblings, 0 replies; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: Paolo Bonzini, raphael.s.norwitz, qemu-devel, Coiby Xu, Raphael Norwitz

From: David Hildenbrand <david@redhat.com>

We end up not closing the file descriptor, resulting in leaking one
file descriptor for each VHOST_USER_REM_MEM_REG message.

Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Coiby Xu <coiby.xu@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 7dd8e918b4..3f4d7221ca 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -868,6 +868,8 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
         vu_panic(dev, "Specified region not found\n");
     }
 
+    close(vmsg->fds[0]);
+
     return true;
 }
 
-- 
2.20.1


* [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (3 preceding siblings ...)
  2022-01-17  4:12 ` [PATCH v3 4/6] libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  8:20   ` David Hildenbrand
  2022-01-17  4:12 ` [PATCH v3 6/6] libvhost-user: handle removal of identical regions Raphael Norwitz
  5 siblings, 1 reply; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
libvhost-user, no guardrails were added to protect against QEMU
attempting to hot-add too many RAM slots to a VM with a libvhost-user
based backend attached.

This change adds the missing error handling by introducing a check on
the number of RAM slots the device has available before proceeding to
process the VHOST_USER_ADD_MEM_REG message.
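
(For reference -- the limit the new check enforces, as defined in
libvhost-user.h; value as of the tree this series is based on:)

    #define VHOST_USER_MAX_RAM_SLOTS 32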

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 3f4d7221ca..2a1fa00a44 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -705,6 +705,14 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
         return false;
     }
 
+    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
+        close(vmsg->fds[0]);
+        vu_panic(dev, "failing attempt to hot add memory via "
+                      "VHOST_USER_ADD_MEM_REG message because the backend has "
+                      "no free ram slots available");
+        return false;
+    }
+
     /*
      * If we are in postcopy mode and we receive a u64 payload with a 0 value
      * we know all the postcopy client bases have been received, and we
-- 
2.20.1



* [PATCH v3 6/6] libvhost-user: handle removal of identical regions
  2022-01-17  4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (4 preceding siblings ...)
  2022-01-17  4:12 ` [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
@ 2022-01-17  4:12 ` Raphael Norwitz
  2022-01-17  8:21   ` David Hildenbrand
  5 siblings, 1 reply; 12+ messages in thread
From: Raphael Norwitz @ 2022-01-17  4:12 UTC
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Today if QEMU (or any other VMM) has sent multiple copies of the same
region to a libvhost-user based backend and then attempts to remove the
region, only one instance of the region will be removed, leaving stale
copies of the region in dev->regions[].

This change resolves the issue by having vu_rem_mem_reg() iterate through all
regions in dev->regions[] and delete all matching regions.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 28 +++++++++++++----------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 2a1fa00a44..0ee43b8e93 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -821,6 +821,7 @@ static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
     int i;
+    bool found = false;
 
     if (vmsg->fd_num != 1) {
         vmsg_close_fds(vmsg);
@@ -856,21 +857,24 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
                 munmap(m, r->size + r->mmap_offset);
             }
 
-            break;
+            /*
+             * Shift all affected entries by 1 to close the hole at index i and
+             * zero out the last entry.
+             */
+            memmove(dev->regions + i, dev->regions + i + 1,
+                    sizeof(VuDevRegion) * (dev->nregions - i - 1));
+            memset(dev->regions + dev->nregions - 1, 0, sizeof(VuDevRegion));
+            DPRINT("Successfully removed a region\n");
+            dev->nregions--;
+            i--;
+
+            found = true;
+
+            /* Continue the search for possible duplicates. */
         }
     }
 
-    if (i < dev->nregions) {
-        /*
-         * Shift all affected entries by 1 to close the hole at index i and
-         * zero out the last entry.
-         */
-        memmove(dev->regions + i, dev->regions + i + 1,
-               sizeof(VuDevRegion) * (dev->nregions - i - 1));
-        memset(dev->regions + dev->nregions - 1, 0,
-               sizeof(VuDevRegion));
-        DPRINT("Successfully removed a region\n");
-        dev->nregions--;
+    if (found) {
         vmsg_set_reply_u64(vmsg, 0);
     } else {
         vu_panic(dev, "Specified region not found\n");
-- 
2.20.1



* Re: [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-17  4:12 ` [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
@ 2022-01-17  8:19   ` David Hildenbrand
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2022-01-17  8:19 UTC
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 17.01.22 05:12, Raphael Norwitz wrote:
> Today if multiple FDs are sent from the VMM to the backend in a
> VHOST_USER_REM_MEM_REG message, one FD will be unmapped and the remaining
> FDs will be leaked. Therefore if multiple FDs are sent we report an
> error and fail the operation, closing all FDs in the message.
> 
> Likewise in case the VMM sends a message with a size less than that of a
> memory region descriptor, we add a check to gracefully report an error
> and fail the operation rather than crashing.
> 
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 15 +++++++++++++++
>  subprojects/libvhost-user/libvhost-user.h |  2 ++
>  2 files changed, 17 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 787f4d2d4f..b09b1c269e 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -801,6 +801,21 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
>      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
>  
> +    if (vmsg->fd_num != 1) {
> +        vmsg_close_fds(vmsg);
> +        vu_panic(dev, "VHOST_USER_REM_MEM_REG received %d fds - only 1 fd "
> +                      "should be sent for this message type", vmsg->fd_num);
> +        return false;
> +    }
> +
> +    if (vmsg->size < VHOST_USER_MEM_REG_SIZE) {
> +        close(vmsg->fds[0]);

I wonder if using vmsg_close_fds(vmsg); makes the code easier to read.
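
(For reference, vmsg_close_fds() from the libvhost-user sources -- it
closes every fd attached to a message, so it also covers the single-fd
case above:)

    static void
    vmsg_close_fds(VhostUserMsg *vmsg)
    {
        int i;

        for (i = 0; i < vmsg->fd_num; i++) {
            close(vmsg->fds[i]);
        }
    }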

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg input validation
  2022-01-17  4:12 ` [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
@ 2022-01-17  8:19   ` David Hildenbrand
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2022-01-17  8:19 UTC
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 17.01.22 05:12, Raphael Norwitz wrote:
> Today if multiple FDs are sent from the VMM to the backend in a
> VHOST_USER_ADD_MEM_REG message, one FD will be mapped and the remaining
> FDs will be leaked. Therefore if multiple FDs are sent we report an
> error and fail the operation, closing all FDs in the message.
> 
> Likewise in case the VMM sends a message with a size less than that
> of a memory region descriptor, we add a check to gracefully report an
> error and fail the operation rather than crashing.
> 
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index b09b1c269e..1a8fc9d600 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -690,6 +690,21 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion *dev_region = &dev->regions[dev->nregions];
>      void *mmap_addr;
>  
> +    if (vmsg->fd_num != 1) {
> +        vmsg_close_fds(vmsg);
> +        vu_panic(dev, "VHOST_USER_ADD_MEM_REG received %d fds - only 1 fd "
> +                      "should be sent for this message type", vmsg->fd_num);
> +        return false;
> +    }
> +
> +    if (vmsg->size < VHOST_USER_MEM_REG_SIZE) {
> +        close(vmsg->fds[0]);

Same comment as for patch #1

Reviewed-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots
  2022-01-17  4:12 ` [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
@ 2022-01-17  8:20   ` David Hildenbrand
  2022-01-17 12:32     ` Philippe Mathieu-Daudé via
  0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2022-01-17  8:20 UTC
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 17.01.22 05:12, Raphael Norwitz wrote:
> When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
> libvhost-user, no guardrails were added to protect against QEMU
> attempting to hot-add too many RAM slots to a VM with a libvhost-user
> based backend attached.
> 
> This change adds the missing error handling by introducing a check on
> the number of RAM slots the device has available before proceeding to
> process the VHOST_USER_ADD_MEM_REG message.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 3f4d7221ca..2a1fa00a44 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -705,6 +705,14 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>          return false;
>      }
>  
> +    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
> +        close(vmsg->fds[0]);
> +        vu_panic(dev, "failing attempt to hot add memory via "
> +                      "VHOST_USER_ADD_MEM_REG message because the backend has "
> +                      "no free ram slots available");

Maybe simply "VHOST_USER_ADD_MEM_REG failed because there are no free
ram slots"

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 6/6] libvhost-user: handle removal of identical regions
  2022-01-17  4:12 ` [PATCH v3 6/6] libvhost-user: handle removal of identical regions Raphael Norwitz
@ 2022-01-17  8:21   ` David Hildenbrand
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2022-01-17  8:21 UTC
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 17.01.22 05:12, Raphael Norwitz wrote:
> Today if QEMU (or any other VMM) has sent multiple copies of the same
> region to a libvhost-user based backend and then attempts to remove the
> region, only one instance of the region will be removed, leaving stale
> copies of the region in dev->regions[].
> 
> This change resolves the issue by having vu_rem_mem_reg() iterate through all
> regions in dev->regions[] and delete all matching regions.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>

Reviewed-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots
  2022-01-17  8:20   ` David Hildenbrand
@ 2022-01-17 12:32     ` Philippe Mathieu-Daudé via
  0 siblings, 0 replies; 12+ messages in thread
From: Philippe Mathieu-Daudé via @ 2022-01-17 12:32 UTC
  To: David Hildenbrand, Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 1/17/22 09:20, David Hildenbrand wrote:
> On 17.01.22 05:12, Raphael Norwitz wrote:
>> When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
>> libvhost-user, no guardrails were added to protect against QEMU
>> attempting to hot-add too many RAM slots to a VM with a libvhost-user
>> based backend attached.
>>
>> This change adds the missing error handling by introducing a check on
>> the number of RAM slots the device has available before proceeding to
>> process the VHOST_USER_ADD_MEM_REG message.
>>
>> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
>> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
>> ---
>>  subprojects/libvhost-user/libvhost-user.c | 8 ++++++++
>>  1 file changed, 8 insertions(+)

>> +    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
>> +        close(vmsg->fds[0]);
>> +        vu_panic(dev, "failing attempt to hot add memory via "
>> +                      "VHOST_USER_ADD_MEM_REG message because the backend has "
>> +                      "no free ram slots available");
> 
> Maybe simply "VHOST_USER_ADD_MEM_REG failed because there are no free
> ram slots"
> 
> Reviewed-by: David Hildenbrand <david@redhat.com>
> 

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>


