* [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping
@ 2022-01-06  6:47 Raphael Norwitz
  2022-01-06  6:47 ` [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Hey Stefan, Marc-Andre, MST, David -

As promised here is a series cleaning up error handling in the
libvhost-user memory mapping path. Most of these cleanups are
straightforward and have been discussed on the mailing list in threads
[1] and [2].

[1] https://lore.kernel.org/qemu-devel/20211018143319.GA11006@raphael-debian-dev/
[2] https://lore.kernel.org/qemu-devel/9391f500-70be-26cf-bcfc-591d3ee84d4e@redhat.com/

Changes since V1:
 * Checks for a single fd in vu_add_mem_reg and vu_rem_mem_reg return false
   instead of true.
 * Check for over-running max ram slots in vu_add_mem_reg returns false
   instead of true.
 * vu_rem_mem_reg unmaps all matching regions.
 * Decrement the iterator variable when looping through regions in
   vu_rem_mem_reg to ensure matching regions aren’t missed.

Thanks,
Raphael

David Hildenbrand (1):
  libvhost-user: Simplify VHOST_USER_REM_MEM_REG

Raphael Norwitz (4):
  libvhost-user: Add vu_rem_mem_reg input validation
  libvhost-user: Add vu_add_mem_reg input validation
  libvhost-user: prevent over-running max RAM slots
  libvhost-user: handle removal of identical regions

 subprojects/libvhost-user/libvhost-user.c | 51 +++++++++++++++--------
 1 file changed, 34 insertions(+), 17 deletions(-)

-- 
2.20.1


* [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
@ 2022-01-06  6:47 ` Raphael Norwitz
  2022-01-10  8:56   ` David Hildenbrand
  2022-01-10  9:36   ` Michael S. Tsirkin
  2022-01-06  6:47 ` [PATCH v2 2/5] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 787f4d2d4f..a6dadeb637 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
 
+    if (vmsg->fd_num != 1 ||
+        vmsg->size != sizeof(vmsg->payload.memreg)) {
+        vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");
+        return false;
+    }
+
     DPRINT("Removing region:\n");
     DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
            msg_region->guest_phys_addr);
-- 
2.20.1



* [PATCH v2 2/5] libvhost-user: Add vu_add_mem_reg input validation
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
  2022-01-06  6:47 ` [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
@ 2022-01-06  6:47 ` Raphael Norwitz
  2022-01-10  8:56   ` David Hildenbrand
  2022-01-06  6:47 ` [PATCH v2 3/5] libvhost-user: Simplify VHOST_USER_REM_MEM_REG Raphael Norwitz
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index a6dadeb637..d61285e991 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -690,6 +690,12 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VuDevRegion *dev_region = &dev->regions[dev->nregions];
     void *mmap_addr;
 
+    if (vmsg->fd_num != 1 ||
+        vmsg->size != sizeof(vmsg->payload.memreg)) {
+        vu_panic(dev, "VHOST_USER_ADD_MEM_REG received multiple regions");
+        return false;
+    }
+
     /*
      * If we are in postcopy mode and we receive a u64 payload with a 0 value
      * we know all the postcopy client bases have been received, and we
-- 
2.20.1



* [PATCH v2 3/5] libvhost-user: Simplify VHOST_USER_REM_MEM_REG
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
  2022-01-06  6:47 ` [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
  2022-01-06  6:47 ` [PATCH v2 2/5] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
@ 2022-01-06  6:47 ` Raphael Norwitz
  2022-01-06  6:47 ` [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

From: David Hildenbrand <david@redhat.com>

Let's avoid having to manually copy all elements. Copy only the ones
necessary to close the hole and perform the operation in-place without
a second array.
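
In essence, the in-place removal of the entry at index i comes down to
the following (a simplified sketch of the change below, omitting the
lookup and munmap of the matching region):

    /* Close the hole at index i without a shadow array. */
    memmove(dev->regions + i, dev->regions + i + 1,
            sizeof(VuDevRegion) * (dev->nregions - i - 1));
    /* Clear the now-unused last slot and shrink the array. */
    memset(dev->regions + dev->nregions - 1, 0, sizeof(VuDevRegion));
    dev->nregions--;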

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 subprojects/libvhost-user/libvhost-user.c | 30 +++++++++++------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index d61285e991..77ddc96ddf 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -802,10 +802,8 @@ static inline bool reg_equal(VuDevRegion *vudev_reg,
 
 static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
-    int i, j;
-    bool found = false;
-    VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
+    int i;
 
     if (vmsg->fd_num != 1 ||
         vmsg->size != sizeof(vmsg->payload.memreg)) {
@@ -823,28 +821,28 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     DPRINT("    mmap_offset      0x%016"PRIx64"\n",
            msg_region->mmap_offset);
 
-    for (i = 0, j = 0; i < dev->nregions; i++) {
-        if (!reg_equal(&dev->regions[i], msg_region)) {
-            shadow_regions[j].gpa = dev->regions[i].gpa;
-            shadow_regions[j].size = dev->regions[i].size;
-            shadow_regions[j].qva = dev->regions[i].qva;
-            shadow_regions[j].mmap_addr = dev->regions[i].mmap_addr;
-            shadow_regions[j].mmap_offset = dev->regions[i].mmap_offset;
-            j++;
-        } else {
-            found = true;
+    for (i = 0; i < dev->nregions; i++) {
+        if (reg_equal(&dev->regions[i], msg_region)) {
             VuDevRegion *r = &dev->regions[i];
             void *m = (void *) (uintptr_t) r->mmap_addr;
 
             if (m) {
                 munmap(m, r->size + r->mmap_offset);
             }
+
+            break;
         }
     }
 
-    if (found) {
-        memcpy(dev->regions, shadow_regions,
-               sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS);
+    if (i < dev->nregions) {
+        /*
+         * Shift all affected entries by 1 to close the hole at index i and
+         * zero out the last entry.
+         */
+        memmove(dev->regions + i, dev->regions + i + 1,
+               sizeof(VuDevRegion) * (dev->nregions - i - 1));
+        memset(dev->regions + dev->nregions - 1, 0,
+               sizeof(VuDevRegion));
         DPRINT("Successfully removed a region\n");
         dev->nregions--;
         vmsg_set_reply_u64(vmsg, 0);
-- 
2.20.1



* [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (2 preceding siblings ...)
  2022-01-06  6:47 ` [PATCH v2 3/5] libvhost-user: Simplify VHOST_USER_REM_MEM_REG Raphael Norwitz
@ 2022-01-06  6:47 ` Raphael Norwitz
  2022-01-10  8:57   ` David Hildenbrand
                     ` (2 more replies)
  2022-01-06  6:47 ` [PATCH v2 5/5] libvhost-user: handle removal of identical regions Raphael Norwitz
                   ` (2 subsequent siblings)
  6 siblings, 3 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
libvhost-user, no guardrails were added to protect against QEMU
attempting to hot-add too many RAM slots to a VM with a libvhost-user
based backend attached.

This change adds the missing error handling by introducing a check on
the number of RAM slots the device has available before proceeding to
process the VHOST_USER_ADD_MEM_REG message.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 77ddc96ddf..0fe3aa155b 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -690,6 +690,11 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VuDevRegion *dev_region = &dev->regions[dev->nregions];
     void *mmap_addr;
 
+    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
+        vu_panic(dev, "No free ram slots available");
+        return false;
+    }
+
     if (vmsg->fd_num != 1 ||
         vmsg->size != sizeof(vmsg->payload.memreg)) {
         vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");
-- 
2.20.1



* [PATCH v2 5/5] libvhost-user: handle removal of identical regions
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (3 preceding siblings ...)
  2022-01-06  6:47 ` [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
@ 2022-01-06  6:47 ` Raphael Norwitz
  2022-01-10  8:58   ` David Hildenbrand
  2022-01-10 11:25   ` Stefan Hajnoczi
  2022-01-10  9:01 ` [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping David Hildenbrand
  2022-01-10 11:25 ` Stefan Hajnoczi
  6 siblings, 2 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-06  6:47 UTC (permalink / raw)
  To: stefanha, marcandre.lureau, mst, david
  Cc: raphael.s.norwitz, qemu-devel, Raphael Norwitz

Today if QEMU (or any other VMM) has sent multiple copies of the same
region to a libvhost-user based backend and then attempts to remove the
region, only one instance of the region will be removed, leaving stale
copies of the region in dev->regions[].

This change resolves this by having vu_rem_mem_reg() iterate through all
regions in dev->regions[] and delete all matching regions.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 subprojects/libvhost-user/libvhost-user.c | 26 ++++++++++++-----------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 0fe3aa155b..14482484d3 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -809,6 +809,7 @@ static bool
 vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
     VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
     int i;
+    bool found = false;
 
     if (vmsg->fd_num != 1 ||
         vmsg->size != sizeof(vmsg->payload.memreg)) {
@@ -835,21 +836,22 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
                 munmap(m, r->size + r->mmap_offset);
             }
 
-            break;
+            /*
+             * Shift all affected entries by 1 to close the hole at index i and
+             * zero out the last entry.
+             */
+            memmove(dev->regions + i, dev->regions + i + 1,
+                    sizeof(VuDevRegion) * (dev->nregions - i - 1));
+            memset(dev->regions + dev->nregions - 1, 0, sizeof(VuDevRegion));
+            DPRINT("Successfully removed a region\n");
+            dev->nregions--;
+            i--;
+
+            found = true;
         }
     }
 
-    if (i < dev->nregions) {
-        /*
-         * Shift all affected entries by 1 to close the hole at index i and
-         * zero out the last entry.
-         */
-        memmove(dev->regions + i, dev->regions + i + 1,
-               sizeof(VuDevRegion) * (dev->nregions - i - 1));
-        memset(dev->regions + dev->nregions - 1, 0,
-               sizeof(VuDevRegion));
-        DPRINT("Successfully removed a region\n");
-        dev->nregions--;
+    if (found) {
         vmsg_set_reply_u64(vmsg, 0);
     } else {
         vu_panic(dev, "Specified region not found\n");
-- 
2.20.1



* Re: [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-06  6:47 ` [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
@ 2022-01-10  8:56   ` David Hildenbrand
  2022-01-10  9:36   ` Michael S. Tsirkin
  1 sibling, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2022-01-10  8:56 UTC (permalink / raw)
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 06.01.22 07:47, Raphael Norwitz wrote:
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 787f4d2d4f..a6dadeb637 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
>      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
>  
> +    if (vmsg->fd_num != 1 ||
> +        vmsg->size != sizeof(vmsg->payload.memreg)) {
> +        vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");
> +        return false;
> +    }
> +
>      DPRINT("Removing region:\n");
>      DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
>             msg_region->guest_phys_addr);

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 2/5] libvhost-user: Add vu_add_mem_reg input validation
  2022-01-06  6:47 ` [PATCH v2 2/5] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
@ 2022-01-10  8:56   ` David Hildenbrand
  0 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2022-01-10  8:56 UTC (permalink / raw)
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 06.01.22 07:47, Raphael Norwitz wrote:
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index a6dadeb637..d61285e991 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -690,6 +690,12 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion *dev_region = &dev->regions[dev->nregions];
>      void *mmap_addr;
>  
> +    if (vmsg->fd_num != 1 ||
> +        vmsg->size != sizeof(vmsg->payload.memreg)) {
> +        vu_panic(dev, "VHOST_USER_ADD_MEM_REG received multiple regions");
> +        return false;
> +    }
> +
>      /*
>       * If we are in postcopy mode and we receive a u64 payload with a 0 value
>       * we know all the postcopy client bases have been received, and we

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots
  2022-01-06  6:47 ` [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
@ 2022-01-10  8:57   ` David Hildenbrand
  2022-01-10  9:40   ` Michael S. Tsirkin
  2022-01-10 11:24   ` Stefan Hajnoczi
  2 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2022-01-10  8:57 UTC (permalink / raw)
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 06.01.22 07:47, Raphael Norwitz wrote:
> When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
> libvhost-user, no guardrails were added to protect against QEMU
> attempting to hot-add too many RAM slots to a VM with a libvhost-user
> based backend attached.
> 
> This change adds the missing error handling by introducing a check on
> the number of RAM slots the device has available before proceeding to
> process the VHOST_USER_ADD_MEM_REG message.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 77ddc96ddf..0fe3aa155b 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -690,6 +690,11 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion *dev_region = &dev->regions[dev->nregions];
>      void *mmap_addr;
>  
> +    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
> +        vu_panic(dev, "No free ram slots available");
> +        return false;
> +    }
> +
>      if (vmsg->fd_num != 1 ||
>          vmsg->size != sizeof(vmsg->payload.memreg)) {
>          vu_panic(dev, "VHOST_USER_ADD_MEM_REG received multiple regions");

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 5/5] libvhost-user: handle removal of identical regions
  2022-01-06  6:47 ` [PATCH v2 5/5] libvhost-user: handle removal of identical regions Raphael Norwitz
@ 2022-01-10  8:58   ` David Hildenbrand
  2022-01-10 22:38     ` Raphael Norwitz
  2022-01-10 11:25   ` Stefan Hajnoczi
  1 sibling, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2022-01-10  8:58 UTC (permalink / raw)
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 06.01.22 07:47, Raphael Norwitz wrote:
> Today if QEMU (or any other VMM) has sent multiple copies of the same
> region to a libvhost-user based backend and then attempts to remove the
> region, only one instance of the region will be removed, leaving stale
> copies of the region in dev->regions[].
> 
> This change resolves this by having vu_rem_mem_reg() iterate through all
> regions in dev->regions[] and delete all matching regions.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 26 ++++++++++++-----------
>  1 file changed, 14 insertions(+), 12 deletions(-)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 0fe3aa155b..14482484d3 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -809,6 +809,7 @@ static bool
>  vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
>      int i;
> +    bool found = false;
>  
>      if (vmsg->fd_num != 1 ||
>          vmsg->size != sizeof(vmsg->payload.memreg)) {
> @@ -835,21 +836,22 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>                  munmap(m, r->size + r->mmap_offset);
>              }
>  
> -            break;
> +            /*
> +             * Shift all affected entries by 1 to close the hole at index i and
> +             * zero out the last entry.
> +             */
> +            memmove(dev->regions + i, dev->regions + i + 1,
> +                    sizeof(VuDevRegion) * (dev->nregions - i - 1));
> +            memset(dev->regions + dev->nregions - 1, 0, sizeof(VuDevRegion));
> +            DPRINT("Successfully removed a region\n");
> +            dev->nregions--;
> +            i--;
> +
> +            found = true;

Maybe add a comment like

/* Continue the search for eventual duplicates. */


-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (4 preceding siblings ...)
  2022-01-06  6:47 ` [PATCH v2 5/5] libvhost-user: handle removal of identical regions Raphael Norwitz
@ 2022-01-10  9:01 ` David Hildenbrand
  2022-01-10 22:36   ` Raphael Norwitz
  2022-01-10 11:25 ` Stefan Hajnoczi
  6 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2022-01-10  9:01 UTC (permalink / raw)
  To: Raphael Norwitz, stefanha, marcandre.lureau, mst
  Cc: raphael.s.norwitz, qemu-devel

On 06.01.22 07:47, Raphael Norwitz wrote:
> Hey Stefan, Marc-Andre, MST, David -
> 
> As promised here is a series cleaning up error handling in the
> libvhost-user memory mapping path. Most of these cleanups are
> straightforward and have been discussed on the mailing list in threads
> [1] and [2].
> 

A note that we still want the fix in [3] upstream:

[3] https://lkml.kernel.org/r/20211012183832.62603-1-david@redhat.com


-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-06  6:47 ` [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
  2022-01-10  8:56   ` David Hildenbrand
@ 2022-01-10  9:36   ` Michael S. Tsirkin
  2022-01-10 19:43     ` Raphael Norwitz
  1 sibling, 1 reply; 22+ messages in thread
From: Michael S. Tsirkin @ 2022-01-10  9:36 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, raphael.s.norwitz, qemu-devel, stefanha, david

On Thu, Jan 06, 2022 at 06:47:26AM +0000, Raphael Norwitz wrote:
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>


Raphael any chance you can add a bit more to commit logs?
E.g. what happens right now if you pass more?

> ---
>  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 787f4d2d4f..a6dadeb637 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
>      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
>  
> +    if (vmsg->fd_num != 1 ||
> +        vmsg->size != sizeof(vmsg->payload.memreg)) {

Is there a chance someone is sending larger messages and relying
on libvhost-user to ignore padding?

> +        vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");

Maybe print the parameters that caused the issue?

> +        return false;
> +    }
> +
>      DPRINT("Removing region:\n");
>      DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
>             msg_region->guest_phys_addr);
> -- 
> 2.20.1




* Re: [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots
  2022-01-06  6:47 ` [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
  2022-01-10  8:57   ` David Hildenbrand
@ 2022-01-10  9:40   ` Michael S. Tsirkin
  2022-01-10 22:38     ` Raphael Norwitz
  2022-01-10 11:24   ` Stefan Hajnoczi
  2 siblings, 1 reply; 22+ messages in thread
From: Michael S. Tsirkin @ 2022-01-10  9:40 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, raphael.s.norwitz, qemu-devel, stefanha, david

On Thu, Jan 06, 2022 at 06:47:35AM +0000, Raphael Norwitz wrote:
> When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
> libvhost-user, no guardrails were added to protect against QEMU
> attempting to hot-add too many RAM slots to a VM with a libvhost-user
> based backend attached.
> 
> This change adds the missing error handling by introducing a check on
> the number of RAM slots the device has available before proceeding to
> process the VHOST_USER_ADD_MEM_REG message.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> index 77ddc96ddf..0fe3aa155b 100644
> --- a/subprojects/libvhost-user/libvhost-user.c
> +++ b/subprojects/libvhost-user/libvhost-user.c
> @@ -690,6 +690,11 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
>      VuDevRegion *dev_region = &dev->regions[dev->nregions];
>      void *mmap_addr;
>  
> +    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
> +        vu_panic(dev, "No free ram slots available");

A bit more verbose maybe? Describe what happened to trigger this?
e.g. adding a region with no free ram slots?

> +        return false;
> +    }
> +
>      if (vmsg->fd_num != 1 ||
>          vmsg->size != sizeof(vmsg->payload.memreg)) {
>          vu_panic(dev, "VHOST_USER_ADD_MEM_REG received multiple regions");
> -- 
> 2.20.1




* Re: [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots
  2022-01-06  6:47 ` [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots Raphael Norwitz
  2022-01-10  8:57   ` David Hildenbrand
  2022-01-10  9:40   ` Michael S. Tsirkin
@ 2022-01-10 11:24   ` Stefan Hajnoczi
  2 siblings, 0 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2022-01-10 11:24 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, david, qemu-devel, raphael.s.norwitz, mst


On Thu, Jan 06, 2022 at 06:47:35AM +0000, Raphael Norwitz wrote:
> When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
> libvhost-user, no guardrails were added to protect against QEMU
> attempting to hot-add too many RAM slots to a VM with a libvhost-user
> based backend attached.
> 
> This change adds the missing error handling by introducing a check on
> the number of RAM slots the device has available before proceeding to
> process the VHOST_USER_ADD_MEM_REG message.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 5 +++++
>  1 file changed, 5 insertions(+)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [PATCH v2 5/5] libvhost-user: handle removal of identical regions
  2022-01-06  6:47 ` [PATCH v2 5/5] libvhost-user: handle removal of identical regions Raphael Norwitz
  2022-01-10  8:58   ` David Hildenbrand
@ 2022-01-10 11:25   ` Stefan Hajnoczi
  1 sibling, 0 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2022-01-10 11:25 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, david, qemu-devel, raphael.s.norwitz, mst


On Thu, Jan 06, 2022 at 06:47:36AM +0000, Raphael Norwitz wrote:
> Today if QEMU (or any other VMM) has sent multiple copies of the same
> region to a libvhost-user based backend and then attempts to remove the
> region, only one instance of the region will be removed, leaving stale
> copies of the region in dev->regions[].
> 
> This change resolves this by having vu_rem_mem_reg() iterate through all
> regions in dev->regions[] and delete all matching regions.
> 
> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> ---
>  subprojects/libvhost-user/libvhost-user.c | 26 ++++++++++++-----------
>  1 file changed, 14 insertions(+), 12 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping
  2022-01-06  6:47 [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
                   ` (5 preceding siblings ...)
  2022-01-10  9:01 ` [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping David Hildenbrand
@ 2022-01-10 11:25 ` Stefan Hajnoczi
  6 siblings, 0 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2022-01-10 11:25 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, david, qemu-devel, raphael.s.norwitz, mst


On Thu, Jan 06, 2022 at 06:47:24AM +0000, Raphael Norwitz wrote:
> Hey Stefan, Marc-Andre, MST, David -
> 
> As promised here is a series cleaning up error handling in the
> libvhost-user memory mapping path. Most of these cleanups are
> straightforward and have been discussed on the mailing list in threads
> [1] and [2].
> 
> [1] https://lore.kernel.org/qemu-devel/20211018143319.GA11006@raphael-debian-dev/
> [2] https://lore.kernel.org/qemu-devel/9391f500-70be-26cf-bcfc-591d3ee84d4e@redhat.com/
> 
> Changes since V1:
>  * Checks for a single fd in vu_add_mem_reg and vu_rem_mem_reg return false
>    instead of true.
>  * Check for over-running max ram slots in vu_add_mem_reg returns false
>    instead of true.
>  * vu_rem_mem_reg unmaps all matching regions.
>  * Decrement the iterator variable when looping through regions in
>    vu_rem_mem_reg to ensure matching regions aren’t missed.
> 
> Thanks,
> Raphael
> 
> David Hildenbrand (1):
>   libvhost-user: Simplify VHOST_USER_REM_MEM_REG
> 
> Raphael Norwitz (4):
>   libvhost-user: Add vu_rem_mem_reg input validation
>   libvhost-user: Add vu_add_mem_reg input validation
>   libvhost-user: prevent over-running max RAM slots
>   libvhost-user: handle removal of identical regions
> 
>  subprojects/libvhost-user/libvhost-user.c | 51 +++++++++++++++--------
>  1 file changed, 34 insertions(+), 17 deletions(-)
> 
> -- 
> 2.20.1

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>



* Re: [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-10  9:36   ` Michael S. Tsirkin
@ 2022-01-10 19:43     ` Raphael Norwitz
  2022-01-10 21:11       ` Michael S. Tsirkin
  2022-01-11  9:13       ` Stefan Hajnoczi
  0 siblings, 2 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-10 19:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: raphael.s.norwitz, david, qemu-devel, Raphael Norwitz, stefanha,
	marcandre.lureau

On Mon, Jan 10, 2022 at 04:36:34AM -0500, Michael S. Tsirkin wrote:
> On Thu, Jan 06, 2022 at 06:47:26AM +0000, Raphael Norwitz wrote:
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> 
> 
> Raphael any chance you can add a bit more to commit logs?
> E.g. what happens right now if you pass more?
>

Sure - I'll add those details.

> > ---
> >  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > index 787f4d2d4f..a6dadeb637 100644
> > --- a/subprojects/libvhost-user/libvhost-user.c
> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > @@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> >      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
> >      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
> >  
> > +    if (vmsg->fd_num != 1 ||
> > +        vmsg->size != sizeof(vmsg->payload.memreg)) {
> 
> Is there a chance someone is sending larger messages and relying
> on libvhost-user to ignore padding?
> 

Great point -  I didn't consider padding. I'll drop the vmsg->size check
here.

It looks like we are inconsistent with size checking. For example, in
vu_set_log_base_exec() we also check the size.

Should we make it consistent across the library or am I missing some
details about why the padding is not an issue in that case?
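
If we do want one policy, a small shared helper could enforce it in every
handler. The following is only a sketch to illustrate the idea; the helper
name and exact message are made up, not part of this series:

    /* Hypothetical: require at least the expected payload size. */
    static bool
    vu_check_payload_size(VuDev *dev, VhostUserMsg *vmsg, size_t expected,
                          const char *req_name)
    {
        if (vmsg->size < expected) {
            vu_panic(dev, "%s: payload too small (%u < %zu)",
                     req_name, vmsg->size, expected);
            return false;
        }
        return true;
    }

Each handler would then call it with the sizeof() of its expected payload
and its request name before touching the payload.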

> > +        vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");
> 
> Maybe print the parameters that caused the issue?
>

Ack.

> > +        return false;
> > +    }
> > +
> >      DPRINT("Removing region:\n");
> >      DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
> >             msg_region->guest_phys_addr);
> > -- 
> > 2.20.1
> 


* Re: [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-10 19:43     ` Raphael Norwitz
@ 2022-01-10 21:11       ` Michael S. Tsirkin
  2022-01-11  9:13       ` Stefan Hajnoczi
  1 sibling, 0 replies; 22+ messages in thread
From: Michael S. Tsirkin @ 2022-01-10 21:11 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, raphael.s.norwitz, qemu-devel, stefanha, david

On Mon, Jan 10, 2022 at 07:43:06PM +0000, Raphael Norwitz wrote:
> On Mon, Jan 10, 2022 at 04:36:34AM -0500, Michael S. Tsirkin wrote:
> > On Thu, Jan 06, 2022 at 06:47:26AM +0000, Raphael Norwitz wrote:
> > > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > 
> > 
> > Raphael any chance you can add a bit more to commit logs?
> > E.g. what happens right now if you pass more?
> >
> 
> Sure - I'll add those details.
> 
> > > ---
> > >  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > > 
> > > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > > index 787f4d2d4f..a6dadeb637 100644
> > > --- a/subprojects/libvhost-user/libvhost-user.c
> > > +++ b/subprojects/libvhost-user/libvhost-user.c
> > > @@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> > >      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
> > >      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
> > >  
> > > +    if (vmsg->fd_num != 1 ||
> > > +        vmsg->size != sizeof(vmsg->payload.memreg)) {
> > 
> > Is there a chance someone is sending larger messages and relying
> > on libvhost-user to ignore padding?
> > 
> 
> Great point -  I didn't consider padding. I'll drop the vmsg->size check
> here.
> 
> It looks like we are inconsistent with size checking. For example, in
> vu_set_log_base_exec() we also check the size.
> 
> Should we make it consistent across the library or am I missing some
> details about why the padding is not an issue in that case?

Not sure. We don't allow arbitrary padding, do we? That would require
extra processing to skip padding in case it's very large.
Let's try to come up with a policy and document it as part of the spec?

> > > +        vu_panic(dev, "VHOST_USER_REM_MEM_REG received multiple regions");
> > 
> > Maybe print the parameters that caused the issue?
> >
> 
> Ack.
> 
> > > +        return false;
> > > +    }
> > > +
> > >      DPRINT("Removing region:\n");
> > >      DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
> > >             msg_region->guest_phys_addr);
> > > -- 
> > > 2.20.1
> > 




* Re: [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping
  2022-01-10  9:01 ` [PATCH v2 0/5] Clean up error handling in libvhost-user memory mapping David Hildenbrand
@ 2022-01-10 22:36   ` Raphael Norwitz
  0 siblings, 0 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-10 22:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: raphael.s.norwitz, mst, qemu-devel, Raphael Norwitz, stefanha,
	marcandre.lureau

On Mon, Jan 10, 2022 at 10:01:36AM +0100, David Hildenbrand wrote:
> On 06.01.22 07:47, Raphael Norwitz wrote:
> > Hey Stefan, Marc-Andre, MST, David -
> > 
> > As promised here is a series cleaning up error handling in the
> > libvhost-user memory mapping path. Most of these cleanups are
> > straightforward and have been discussed on the mailing list in threads
> > [1] and [2].
> > 
> 
> A note that we still want the fix in [3] upstream:
>
> [3] https://lore.kernel.org/all/20211012183832.62603-1-david@redhat.com/T/#u 

Ah I thought it was merged.

I'll add it to the series to ensure it doesn't get lost.

> 
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 


* Re: [PATCH v2 4/5] libvhost-user: prevent over-running max RAM slots
  2022-01-10  9:40   ` Michael S. Tsirkin
@ 2022-01-10 22:38     ` Raphael Norwitz
  0 siblings, 0 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-10 22:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: raphael.s.norwitz, david, qemu-devel, Raphael Norwitz, stefanha,
	marcandre.lureau

On Mon, Jan 10, 2022 at 04:40:08AM -0500, Michael S. Tsirkin wrote:
> On Thu, Jan 06, 2022 at 06:47:35AM +0000, Raphael Norwitz wrote:
> > When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
> > libvhost-user, no guardrails were added to protect against QEMU
> > attempting to hot-add too many RAM slots to a VM with a libvhost-user
> > based backend attached.
> > 
> > This change adds the missing error handling by introducing a check on
> > the number of RAM slots the device has available before proceeding to
> > process the VHOST_USER_ADD_MEM_REG message.
> > 
> > Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > ---
> >  subprojects/libvhost-user/libvhost-user.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > index 77ddc96ddf..0fe3aa155b 100644
> > --- a/subprojects/libvhost-user/libvhost-user.c
> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > @@ -690,6 +690,11 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> >      VuDevRegion *dev_region = &dev->regions[dev->nregions];
> >      void *mmap_addr;
> >  
> > +    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
> > +        vu_panic(dev, "No free ram slots available");
> 
> A bit more verbose maybe? Describe what happened to trigger this?
> e.g. adding a region with no free ram slots?
>

Ack

> > +        return false;
> > +    }
> > +
> >      if (vmsg->fd_num != 1 ||
> >          vmsg->size != sizeof(vmsg->payload.memreg)) {
> >          vu_panic(dev, "VHOST_USER_ADD_MEM_REG received multiple regions");
> > -- 
> > 2.20.1
> 


* Re: [PATCH v2 5/5] libvhost-user: handle removal of identical regions
  2022-01-10  8:58   ` David Hildenbrand
@ 2022-01-10 22:38     ` Raphael Norwitz
  0 siblings, 0 replies; 22+ messages in thread
From: Raphael Norwitz @ 2022-01-10 22:38 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: raphael.s.norwitz, mst, qemu-devel, Raphael Norwitz, stefanha,
	marcandre.lureau

On Mon, Jan 10, 2022 at 09:58:01AM +0100, David Hildenbrand wrote:
> On 06.01.22 07:47, Raphael Norwitz wrote:
> > Today if QEMU (or any other VMM) has sent multiple copies of the same
> > region to a libvhost-user based backend and then attempts to remove the
> > region, only one instance of the region will be removed, leaving stale
> > copies of the region in dev->regions[].
> > 
> > This change resolves this by having vu_rem_mem_reg() iterate through all
> > regions in dev->regions[] and delete all matching regions.
> > 
> > Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > ---
> >  subprojects/libvhost-user/libvhost-user.c | 26 ++++++++++++-----------
> >  1 file changed, 14 insertions(+), 12 deletions(-)
> > 
> > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > index 0fe3aa155b..14482484d3 100644
> > --- a/subprojects/libvhost-user/libvhost-user.c
> > +++ b/subprojects/libvhost-user/libvhost-user.c
> > @@ -809,6 +809,7 @@ static bool
> >  vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> >      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
> >      int i;
> > +    bool found = false;
> >  
> >      if (vmsg->fd_num != 1 ||
> >          vmsg->size != sizeof(vmsg->payload.memreg)) {
> > @@ -835,21 +836,22 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> >                  munmap(m, r->size + r->mmap_offset);
> >              }
> >  
> > -            break;
> > +            /*
> > +             * Shift all affected entries by 1 to close the hole at index i and
> > +             * zero out the last entry.
> > +             */
> > +            memmove(dev->regions + i, dev->regions + i + 1,
> > +                    sizeof(VuDevRegion) * (dev->nregions - i - 1));
> > +            memset(dev->regions + dev->nregions - 1, 0, sizeof(VuDevRegion));
> > +            DPRINT("Successfully removed a region\n");
> > +            dev->nregions--;
> > +            i--;
> > +
> > +            found = true;
> 
> Maybe add a comment like
> 
> /* Continue the search for eventual duplicates. */

Ack - I'll add it.

> 
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 


* Re: [PATCH v2 1/5] libvhost-user: Add vu_rem_mem_reg input validation
  2022-01-10 19:43     ` Raphael Norwitz
  2022-01-10 21:11       ` Michael S. Tsirkin
@ 2022-01-11  9:13       ` Stefan Hajnoczi
  1 sibling, 0 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2022-01-11  9:13 UTC (permalink / raw)
  To: Raphael Norwitz
  Cc: marcandre.lureau, david, qemu-devel, raphael.s.norwitz,
	Michael S. Tsirkin


On Mon, Jan 10, 2022 at 07:43:06PM +0000, Raphael Norwitz wrote:
> On Mon, Jan 10, 2022 at 04:36:34AM -0500, Michael S. Tsirkin wrote:
> > On Thu, Jan 06, 2022 at 06:47:26AM +0000, Raphael Norwitz wrote:
> > > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > 
> > 
> > Raphael any chance you can add a bit more to commit logs?
> > E.g. what happens right now if you pass more?
> >
> 
> Sure - I'll add those details.
> 
> > > ---
> > >  subprojects/libvhost-user/libvhost-user.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > > 
> > > diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
> > > index 787f4d2d4f..a6dadeb637 100644
> > > --- a/subprojects/libvhost-user/libvhost-user.c
> > > +++ b/subprojects/libvhost-user/libvhost-user.c
> > > @@ -801,6 +801,12 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
> > >      VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {};
> > >      VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
> > >  
> > > +    if (vmsg->fd_num != 1 ||
> > > +        vmsg->size != sizeof(vmsg->payload.memreg)) {
> > 
> > Is there a chance someone is sending larger messages and relying
> > on libvhost-user to ignore padding?
> > 
> 
> Great point -  I didn't consider padding. I'll drop the vmsg->size check
> here.

A vmsg->size > sizeof(vmsg->payload.memreg) check is still reasonable.

Without a check we'll treat the 0-initialized vmsg bytes as input,
which should be okay as long as there is input validation later on. In
some cases 0s may really be valid input and we'll perform some
operation. So in the end, I think checking vmsg->size is safest, just
ignore extra bytes.
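
Roughly this, as a sketch (exact panic message aside): reject anything
smaller than the expected payload and silently ignore trailing bytes.

    if (vmsg->fd_num != 1 ||
        vmsg->size < sizeof(vmsg->payload.memreg)) {
        vu_panic(dev, "VHOST_USER_REM_MEM_REG: bad fd count or payload size");
        return false;
    }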

Stefan


