* [PATCH v4 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem
From: David Hildenbrand @ 2021-07-27  8:25 UTC
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Stefan Berger, Eduardo Habkost,
	David Hildenbrand, Michael S. Tsirkin, Dr. David Alan Gilbert,
	Peter Xu, Alex Williamson, Paolo Bonzini, Claudio Fontana,
	Marc-André Lureau, Igor Mammedov, Alex Bennée,
	Stefan Berger

Minor fixes and cleanups, followed by an optimization for virtio-mem
regarding guest dumps and tpm.

virtio-mem logically plugs/unplugs memory within a sparse memory region
and notifies via the RamDiscardManager interface when parts become
plugged (populated) or unplugged (discarded).

Currently, guest_phys_blocks_append() appends the whole (sparse)
virtio-mem managed region, and therefore tpm code might zero the whole
region and dump code will dump the whole region. Let's only add logically
plugged (populated) parts of that region, skipping over logically
unplugged (discarded) parts, by reusing the RamDiscardManager
infrastructure introduced to handle virtio-mem + VFIO properly.
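
In short, the approach (a condensed sketch of the code added in patch 4
below) is to replay only the populated parts of such a region via a
callback, instead of adding the whole section:

    /* invoked once for each populated part of the section */
    static int guest_phys_ram_populate_cb(MemoryRegionSection *section,
                                          void *opaque)
    {
        GuestPhysListener *g = opaque;

        guest_phys_block_add_section(g, section);
        return 0;
    }

    /* in guest_phys_blocks_region_add(): */
    if (memory_region_has_ram_discard_manager(section->mr)) {
        RamDiscardManager *rdm =
            memory_region_get_ram_discard_manager(section->mr);

        ram_discard_manager_replay_populated(rdm, section,
                                             guest_phys_ram_populate_cb, g);
        return;
    }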

v3 -> v4:
- "tpm: mark correct memory region range dirty when clearing RAM"
-- Finally get it right :) I tried triggering that code path without
   luck, so I ended up forcing the call path and verifying that the
   offset into memory regions is now correct.

v2 -> v3:
- "tpm: mark correct memory region range dirty when clearing RAM"
-- Fix calculation of offset into memory region (thanks Peter!)
- "softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping()"
-- Dropped

v1 -> v2:
- "softmmu/memory_mapping: factor out adding physical memory ranges"
-- Simplify based on RamDiscardManager changes: add using a
   MemoryRegionSection
- "softmmu/memory_mapping: optimize for RamDiscardManager sections"
-- Simplify based on RamDiscardManager changes

David Hildenbrand (4):
  tpm: mark correct memory region range dirty when clearing RAM
  softmmu/memory_mapping: never merge ranges across memory regions
  softmmu/memory_mapping: factor out adding physical memory ranges
  softmmu/memory_mapping: optimize for RamDiscardManager sections

 hw/tpm/tpm_ppi.c         |  5 +++-
 softmmu/memory_mapping.c | 64 ++++++++++++++++++++++++++--------------
 2 files changed, 46 insertions(+), 23 deletions(-)

-- 
2.31.1

* [PATCH v4 1/4] tpm: mark correct memory region range dirty when clearing RAM
From: David Hildenbrand @ 2021-07-27  8:25 UTC
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Stefan Berger, Eduardo Habkost,
	Michael S. Tsirkin, David Hildenbrand, Dr. David Alan Gilbert,
	Peter Xu, Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov,
	Stefan Berger

We might not start at the beginning of the memory region. Let's
calculate the offset into the memory region via the difference in the
host addresses.
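
As an illustration (hypothetical addresses), the offset is simply the
distance between the block's host mapping and the start of the memory
region's RAM mapping:

    /* hypothetical values, for illustration only */
    uint8_t *ram  = memory_region_get_ram_ptr(block->mr); /* 0x7f0000000000 */
    uint8_t *host = block->host_addr;                     /* 0x7f0000200000 */
    hwaddr mr_offs = host - ram;   /* 0x200000; previously 0 was passed */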

Acked-by: Stefan Berger <stefanb@linux.ibm.com>
Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/tpm/tpm_ppi.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
index 362edcc5c9..274e9aa4b0 100644
--- a/hw/tpm/tpm_ppi.c
+++ b/hw/tpm/tpm_ppi.c
@@ -30,11 +30,14 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
         guest_phys_blocks_init(&guest_phys_blocks);
         guest_phys_blocks_append(&guest_phys_blocks);
         QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
+            hwaddr mr_offs = block->host_addr -
+                             (uint8_t *)memory_region_get_ram_ptr(block->mr);
+
             trace_tpm_ppi_memset(block->host_addr,
                                  block->target_end - block->target_start);
             memset(block->host_addr, 0,
                    block->target_end - block->target_start);
-            memory_region_set_dirty(block->mr, 0,
+            memory_region_set_dirty(block->mr, mr_offs,
                                     block->target_end - block->target_start);
         }
         guest_phys_blocks_free(&guest_phys_blocks);
-- 
2.31.1

* [PATCH v4 2/4] softmmu/memory_mapping: never merge ranges across memory regions
From: David Hildenbrand @ 2021-07-27  8:25 UTC
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov,
	Stefan Berger

Let's make sure not to merge ranges when different memory regions are
involved. This is unlikely, but theoretically possible: for example, two
distinct RAM memory regions could happen to be contiguous in both
guest-physical and host-virtual address space.

Acked-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index e7af276546..d401ca7e31 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -229,7 +229,8 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
 
         /* we want continuity in both guest-physical and host-virtual memory */
         if (predecessor->target_end < target_start ||
-            predecessor->host_addr + predecessor_size != host_addr) {
+            predecessor->host_addr + predecessor_size != host_addr ||
+            predecessor->mr != section->mr) {
             predecessor = NULL;
         }
     }
-- 
2.31.1

* [PATCH v4 3/4] softmmu/memory_mapping: factor out adding physical memory ranges
From: David Hildenbrand @ 2021-07-27  8:25 UTC
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov,
	Stefan Berger

Let's factor out adding a MemoryRegionSection to the list, so it can be
reused in the RamDiscardManager context next.

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 41 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index d401ca7e31..a2af02c41c 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -193,29 +193,14 @@ typedef struct GuestPhysListener {
     MemoryListener listener;
 } GuestPhysListener;
 
-static void guest_phys_blocks_region_add(MemoryListener *listener,
+static void guest_phys_block_add_section(GuestPhysListener *g,
                                          MemoryRegionSection *section)
 {
-    GuestPhysListener *g;
-    uint64_t section_size;
-    hwaddr target_start, target_end;
-    uint8_t *host_addr;
-    GuestPhysBlock *predecessor;
-
-    /* we only care about RAM */
-    if (!memory_region_is_ram(section->mr) ||
-        memory_region_is_ram_device(section->mr) ||
-        memory_region_is_nonvolatile(section->mr)) {
-        return;
-    }
-
-    g            = container_of(listener, GuestPhysListener, listener);
-    section_size = int128_get64(section->size);
-    target_start = section->offset_within_address_space;
-    target_end   = target_start + section_size;
-    host_addr    = memory_region_get_ram_ptr(section->mr) +
-                   section->offset_within_region;
-    predecessor  = NULL;
+    const hwaddr target_start = section->offset_within_address_space;
+    const hwaddr target_end = target_start + int128_get64(section->size);
+    uint8_t *host_addr = memory_region_get_ram_ptr(section->mr) +
+                         section->offset_within_region;
+    GuestPhysBlock *predecessor = NULL;
 
     /* find continuity in guest physical address space */
     if (!QTAILQ_EMPTY(&g->list->head)) {
@@ -261,6 +246,20 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
 #endif
 }
 
+static void guest_phys_blocks_region_add(MemoryListener *listener,
+                                         MemoryRegionSection *section)
+{
+    GuestPhysListener *g = container_of(listener, GuestPhysListener, listener);
+
+    /* we only care about RAM */
+    if (!memory_region_is_ram(section->mr) ||
+        memory_region_is_ram_device(section->mr) ||
+        memory_region_is_nonvolatile(section->mr)) {
+        return;
+    }
+    guest_phys_block_add_section(g, section);
+}
+
 void guest_phys_blocks_append(GuestPhysBlockList *list)
 {
     GuestPhysListener g = { 0 };
-- 
2.31.1

* [PATCH v4 4/4] softmmu/memory_mapping: optimize for RamDiscardManager sections
From: David Hildenbrand @ 2021-07-27  8:25 UTC
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov,
	Stefan Berger

virtio-mem logically plugs/unplugs memory within a sparse memory region
and notifies via the RamDiscardManager interface when parts become
plugged (populated) or unplugged (discarded).

Currently, via the two users, we end up
1) zeroing all logically unplugged/discarded memory during TPM resets.
2) reading all logically unplugged/discarded memory when dumping, only
   to find that the content is zero.

1) is always bad, because we assume unplugged memory stays discarded
   (and is already implicitly zero).
2) isn't that bad with anonymous memory: we end up reading the zero
   page (slow and unnecessary, though). However, once we use some
   file-backed memory (a future use case), even reading will populate
   memory.

Let's cut out all parts marked as not-populated (discarded) via the
RamDiscardManager. As virtio-mem is the single user, this now means that
logically unplugged memory ranges will no longer be included in the
dump, which results in smaller dump files and faster dumping.

virtio-mem has a minimum granularity of 1 MiB (and the default is usually
2 MiB). Theoretically, we can see quite some fragmentation; in practice,
we won't have it completely fragmented into 1 MiB pieces. Still, we might
end up with many physical ranges.

Both the ELF format and kdump seem ready to support many individual
ranges (e.g., for ELF the limit seems to be UINT32_MAX; kdump uses a
linear bitmap).
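
As a rough, hypothetical worst-case estimate: a 256 GiB virtio-mem
region managed at 2 MiB granularity consists of 256 GiB / 2 MiB =
131072 blocks, and even a fully alternating plugged/unplugged pattern
would only yield 65536 separate physical ranges.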

Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index a2af02c41c..a62eaa49cc 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -246,6 +246,15 @@ static void guest_phys_block_add_section(GuestPhysListener *g,
 #endif
 }
 
+static int guest_phys_ram_populate_cb(MemoryRegionSection *section,
+                                      void *opaque)
+{
+    GuestPhysListener *g = opaque;
+
+    guest_phys_block_add_section(g, section);
+    return 0;
+}
+
 static void guest_phys_blocks_region_add(MemoryListener *listener,
                                          MemoryRegionSection *section)
 {
@@ -257,6 +266,17 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
         memory_region_is_nonvolatile(section->mr)) {
         return;
     }
+
+    /* for special sparse regions, only add populated parts */
+    if (memory_region_has_ram_discard_manager(section->mr)) {
+        RamDiscardManager *rdm;
+
+        rdm = memory_region_get_ram_discard_manager(section->mr);
+        ram_discard_manager_replay_populated(rdm, section,
+                                             guest_phys_ram_populate_cb, g);
+        return;
+    }
+
     guest_phys_block_add_section(g, section);
 }
 
-- 
2.31.1

* Re: [PATCH v4 1/4] tpm: mark correct memory region range dirty when clearing RAM
From: Peter Xu @ 2021-07-27 15:46 UTC
  To: David Hildenbrand
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov,
	Stefan Berger

On Tue, Jul 27, 2021 at 10:25:42AM +0200, David Hildenbrand wrote:
> We might not start at the beginning of the memory region. Let's
> calculate the offset into the memory region via the difference in the
> host addresses.
> 
> Acked-by: Stefan Berger <stefanb@linux.ibm.com>
> Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Claudio Fontana <cfontana@suse.de>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: "Alex Bennée" <alex.bennee@linaro.org>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Laurent Vivier <lvivier@redhat.com>
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu

* Re: [PATCH v4 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem
From: Paolo Bonzini @ 2021-10-02  9:55 UTC
  To: David Hildenbrand, qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	Stefan Berger, Dr. David Alan Gilbert, Peter Xu, Alex Williamson,
	Claudio Fontana, Igor Mammedov, Marc-André Lureau,
	Alex Bennée, Stefan Berger

On 27/07/21 10:25, David Hildenbrand wrote:
> Minor fixes and cleanups, followed by an optimization for virtio-mem
> regarding guest dumps and tpm.
> 
> virtio-mem logically plugs/unplugs memory within a sparse memory region
> and notifies via the RamDiscardManager interface when parts become
> plugged (populated) or unplugged (discarded).
> 
> Currently, guest_phys_blocks_append() appends the whole (sparse)
> virtio-mem managed region, and therefore tpm code might zero the whole
> region and dump code will dump the whole region. Let's only add logically
> plugged (populated) parts of that region, skipping over logically
> unplugged (discarded) parts, by reusing the RamDiscardManager
> infrastructure introduced to handle virtio-mem + VFIO properly.

Queued, thanks.

Paolo

> v3 -> v4:
> - "tpm: mark correct memory region range dirty when clearing RAM"
> -- Finally get it right :) I tried triggering that code path without
>     luck, so I ended up forcing the call path and verifying that the
>     offset into memory regions is now correct.
> 
> v2 -> v3:
> - "tpm: mark correct memory region range dirty when clearing RAM"
> -- Fix calculation of offset into memory region (thanks Peter!)
> - "softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping()"
> -- Dropped
> 
> v1 -> v2:
> - "softmmu/memory_mapping: factor out adding physical memory ranges"
> -- Simplify based on RamDiscardManager changes: add using a
>     MemoryRegionSection
> - "softmmu/memory_mapping: optimize for RamDiscardManager sections"
> -- Simplify based on RamDiscardManager changes
> 
> David Hildenbrand (4):
>    tpm: mark correct memory region range dirty when clearing RAM
>    softmmu/memory_mapping: never merge ranges across memory regions
>    softmmu/memory_mapping: factor out adding physical memory ranges
>    softmmu/memory_mapping: optimize for RamDiscardManager sections
> 
>   hw/tpm/tpm_ppi.c         |  5 +++-
>   softmmu/memory_mapping.c | 64 ++++++++++++++++++++++++++--------------
>   2 files changed, 46 insertions(+), 23 deletions(-)
> 

* Re: [PATCH v4 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem
From: David Hildenbrand @ 2021-10-04  7:39 UTC
  To: Paolo Bonzini, qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
	Stefan Berger, Dr. David Alan Gilbert, Peter Xu, Alex Williamson,
	Claudio Fontana, Igor Mammedov, Marc-André Lureau,
	Alex Bennée, Stefan Berger

On 02.10.21 11:55, Paolo Bonzini wrote:
> On 27/07/21 10:25, David Hildenbrand wrote:
>> Minor fixes and cleanups, followed by an optimization for virtio-mem
>> regarding guest dumps and tpm.
>>
>> virtio-mem logically plugs/unplugs memory within a sparse memory region
>> and notifies via the RamDiscardManager interface when parts become
>> plugged (populated) or unplugged (discarded).
>>
>> Currently, guest_phys_blocks_append() appends the whole (sparse)
>> virtio-mem managed region, and therefore tpm code might zero the whole
>> region and dump code will dump the whole region. Let's only add logically
>> plugged (populated) parts of that region, skipping over logically
>> unplugged (discarded) parts, by reusing the RamDiscardManager
>> infrastructure introduced to handle virtio-mem + VFIO properly.
> 
> Queued, thanks.
> 

Thanks Paolo!

-- 
Thanks,

David / dhildenb