* [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem
@ 2021-02-10 17:15 David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Stefan Berger, David Hildenbrand,
	Michael S. Tsirkin, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Paolo Bonzini, Claudio Fontana,
	Marc-André Lureau, Igor Mammedov, Alex Bennée

Minor fixes and cleanups, followed by an optimization for virtio-mem
regarding guest dumps and tpm.

virtio-mem logically plugs/unplugs memory within a sparse memory region
and notifies via the RamDiscardMgr interface when parts become
plugged (populated) or unplugged (discarded).

Currently, guest_phys_blocks_append() appends the whole (sparse)
virtio-mem managed region; therefore, TPM code might zero the whole
region and dump code will dump the whole region. Let's only add logically
plugged (populated) parts of that region, skipping over logically
unplugged (discarded) parts by reusing the RamDiscardMgr infrastructure
introduced to handle virtio-mem + VFIO properly.

Based-on: https://lkml.kernel.org/r/20210121110540.33704-1-david@redhat.com

David Hildenbrand (5):
  tpm: mark correct memory region range dirty when clearing RAM
  softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping()
  softmmu/memory_mapping: never merge ranges across memory regions
  softmmu/memory_mapping: factor out adding physical memory ranges
  softmmu/memory_mapping: optimize for RamDiscardMgr sections

 hw/tpm/tpm_ppi.c         |   4 +-
 softmmu/memory_mapping.c | 118 +++++++++++++++++++++++++++------------
 2 files changed, 84 insertions(+), 38 deletions(-)

-- 
2.29.2




* [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM
  2021-02-10 17:15 [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
@ 2021-02-10 17:15 ` David Hildenbrand
  2021-02-14  0:38   ` Stefan Berger
  2021-02-10 17:15 ` [PATCH v1 2/5] softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping() David Hildenbrand
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Stefan Berger, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov

We might not start at the beginning of the memory region. We could also
calculate the offset via the difference in the host addresses; however,
memory_region_set_dirty() relies on memory_region_get_ram_addr()
internally as well, so let's just use that.

Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/tpm/tpm_ppi.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
index 72d7a3d926..e0e2d2c8e1 100644
--- a/hw/tpm/tpm_ppi.c
+++ b/hw/tpm/tpm_ppi.c
@@ -30,11 +30,13 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
         guest_phys_blocks_init(&guest_phys_blocks);
         guest_phys_blocks_append(&guest_phys_blocks);
         QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
+            ram_addr_t mr_start = memory_region_get_ram_addr(block->mr);
+
             trace_tpm_ppi_memset(block->host_addr,
                                  block->target_end - block->target_start);
             memset(block->host_addr, 0,
                    block->target_end - block->target_start);
-            memory_region_set_dirty(block->mr, 0,
+            memory_region_set_dirty(block->mr, block->target_start - mr_start,
                                     block->target_end - block->target_start);
         }
         guest_phys_blocks_free(&guest_phys_blocks);
-- 
2.29.2




* [PATCH v1 2/5] softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping()
  2021-02-10 17:15 [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
@ 2021-02-10 17:15 ` David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 3/5] softmmu/memory_mapping: never merge ranges across memory regions David Hildenbrand
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov

Let's reuse qemu_get_guest_simple_memory_mapping(), which does exactly
what we want.

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index 18d0b8067c..2677392de7 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -289,8 +289,6 @@ void qemu_get_guest_memory_mapping(MemoryMappingList *list,
                                    Error **errp)
 {
     CPUState *cpu, *first_paging_enabled_cpu;
-    GuestPhysBlock *block;
-    ram_addr_t offset, length;
 
     first_paging_enabled_cpu = find_paging_enabled_cpu(first_cpu);
     if (first_paging_enabled_cpu) {
@@ -310,11 +308,7 @@ void qemu_get_guest_memory_mapping(MemoryMappingList *list,
      * If the guest doesn't use paging, the virtual address is equal to physical
      * address.
      */
-    QTAILQ_FOREACH(block, &guest_phys_blocks->head, next) {
-        offset = block->target_start;
-        length = block->target_end - block->target_start;
-        create_new_memory_mapping(list, offset, offset, length);
-    }
+    qemu_get_guest_simple_memory_mapping(list, guest_phys_blocks);
 }
 
 void qemu_get_guest_simple_memory_mapping(MemoryMappingList *list,
-- 
2.29.2




* [PATCH v1 3/5] softmmu/memory_mapping: never merge ranges across memory regions
  2021-02-10 17:15 [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 2/5] softmmu/memory_mapping: reuse qemu_get_guest_simple_memory_mapping() David Hildenbrand
@ 2021-02-10 17:15 ` David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 4/5] softmmu/memory_mapping: factor out adding physical memory ranges David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 5/5] softmmu/memory_mapping: optimize for RamDiscardMgr sections David Hildenbrand
  4 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov

Let's make sure not to merge ranges when different memory regions are
involved. This is unlikely, but theoretically possible.

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index 2677392de7..ad4911427a 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -230,7 +230,8 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
 
         /* we want continuity in both guest-physical and host-virtual memory */
         if (predecessor->target_end < target_start ||
-            predecessor->host_addr + predecessor_size != host_addr) {
+            predecessor->host_addr + predecessor_size != host_addr ||
+            predecessor->mr != section->mr) {
             predecessor = NULL;
         }
     }
-- 
2.29.2




* [PATCH v1 4/5] softmmu/memory_mapping: factor out adding physical memory ranges
  2021-02-10 17:15 [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
                   ` (2 preceding siblings ...)
  2021-02-10 17:15 ` [PATCH v1 3/5] softmmu/memory_mapping: never merge ranges across memory regions David Hildenbrand
@ 2021-02-10 17:15 ` David Hildenbrand
  2021-02-10 17:15 ` [PATCH v1 5/5] softmmu/memory_mapping: optimize for RamDiscardMgr sections David Hildenbrand
  4 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov

Let's factor out adding an item to the list, to be reused in
RamDiscardMgr context next.

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 64 +++++++++++++++++++++-------------------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index ad4911427a..05e8270edc 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -194,35 +194,17 @@ typedef struct GuestPhysListener {
     MemoryListener listener;
 } GuestPhysListener;
 
-static void guest_phys_blocks_region_add(MemoryListener *listener,
-                                         MemoryRegionSection *section)
+static void guest_phys_block_add(GuestPhysBlockList *list, MemoryRegion *mr,
+                                 hwaddr target_start, hwaddr target_end,
+                                 uint8_t *host_addr)
 {
-    GuestPhysListener *g;
-    uint64_t section_size;
-    hwaddr target_start, target_end;
-    uint8_t *host_addr;
-    GuestPhysBlock *predecessor;
-
-    /* we only care about RAM */
-    if (!memory_region_is_ram(section->mr) ||
-        memory_region_is_ram_device(section->mr) ||
-        memory_region_is_nonvolatile(section->mr)) {
-        return;
-    }
-
-    g            = container_of(listener, GuestPhysListener, listener);
-    section_size = int128_get64(section->size);
-    target_start = section->offset_within_address_space;
-    target_end   = target_start + section_size;
-    host_addr    = memory_region_get_ram_ptr(section->mr) +
-                   section->offset_within_region;
-    predecessor  = NULL;
+    GuestPhysBlock *predecessor = NULL;
 
     /* find continuity in guest physical address space */
-    if (!QTAILQ_EMPTY(&g->list->head)) {
+    if (!QTAILQ_EMPTY(&list->head)) {
         hwaddr predecessor_size;
 
-        predecessor = QTAILQ_LAST(&g->list->head);
+        predecessor = QTAILQ_LAST(&list->head);
         predecessor_size = predecessor->target_end - predecessor->target_start;
 
         /* the memory API guarantees monotonically increasing traversal */
@@ -231,7 +213,7 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
         /* we want continuity in both guest-physical and host-virtual memory */
         if (predecessor->target_end < target_start ||
             predecessor->host_addr + predecessor_size != host_addr ||
-            predecessor->mr != section->mr) {
+            predecessor->mr != mr) {
             predecessor = NULL;
         }
     }
@@ -243,11 +225,11 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
         block->target_start = target_start;
         block->target_end   = target_end;
         block->host_addr    = host_addr;
-        block->mr           = section->mr;
-        memory_region_ref(section->mr);
+        block->mr           = mr;
+        memory_region_ref(mr);
 
-        QTAILQ_INSERT_TAIL(&g->list->head, block, next);
-        ++g->list->num;
+        QTAILQ_INSERT_TAIL(&list->head, block, next);
+        ++list->num;
     } else {
         /* expand predecessor until @target_end; predecessor's start doesn't
          * change
@@ -258,10 +240,32 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
 #ifdef DEBUG_GUEST_PHYS_REGION_ADD
     fprintf(stderr, "%s: target_start=" TARGET_FMT_plx " target_end="
             TARGET_FMT_plx ": %s (count: %u)\n", __func__, target_start,
-            target_end, predecessor ? "joined" : "added", g->list->num);
+            target_end, predecessor ? "joined" : "added", list->num);
 #endif
 }
 
+static void guest_phys_blocks_region_add(MemoryListener *listener,
+                                         MemoryRegionSection *section)
+{
+    GuestPhysListener *g = container_of(listener, GuestPhysListener, listener);
+    hwaddr target_start, target_end;
+    uint8_t *host_addr;
+
+    /* we only care about RAM */
+    if (!memory_region_is_ram(section->mr) ||
+        memory_region_is_ram_device(section->mr) ||
+        memory_region_is_nonvolatile(section->mr)) {
+        return;
+    }
+
+    target_start = section->offset_within_address_space;
+    target_end = target_start + int128_get64(section->size);
+    host_addr = memory_region_get_ram_ptr(section->mr) +
+                section->offset_within_region;
+    guest_phys_block_add(g->list, section->mr, target_start, target_end,
+                         host_addr);
+}
+
 void guest_phys_blocks_append(GuestPhysBlockList *list)
 {
     GuestPhysListener g = { 0 };
-- 
2.29.2




* [PATCH v1 5/5] softmmu/memory_mapping: optimize for RamDiscardMgr sections
  2021-02-10 17:15 [PATCH v1 0/5] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
                   ` (3 preceding siblings ...)
  2021-02-10 17:15 ` [PATCH v1 4/5] softmmu/memory_mapping: factor out adding physical memory ranges David Hildenbrand
@ 2021-02-10 17:15 ` David Hildenbrand
  4 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2021-02-10 17:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Michael S. Tsirkin,
	David Hildenbrand, Dr. David Alan Gilbert, Peter Xu,
	Alex Williamson, Claudio Fontana, Paolo Bonzini,
	Marc-André Lureau, Alex Bennée, Igor Mammedov

virtio-mem logically plugs/unplugs memory within a sparse memory region
and notifies via the RamDiscardMgr interface when parts become
plugged (populated) or unplugged (discarded).

Currently, we end up (via the two users)
1) zeroing all logically unplugged/discarded memory during TPM resets.
2) reading all logically unplugged/discarded memory when dumping, to
   figure out the content is zero.

1) is always bad, because we assume unplugged memory stays discarded
   (and is already implicitly zero).
2) isn't that bad with anonymous memory: we end up reading the zero
   page (which is slow and unnecessary). However, once we use
   file-backed memory (a future use case), even reading will populate memory.

Let's cut out all parts marked as not-populated (discarded) via the
RamDiscardMgr. As virtio-mem is the single user, this now means that
logically unplugged memory ranges will no longer be included in the
dump, which results in smaller dump files and faster dumping.

virtio-mem has a minimum granularity of 1 MiB (and the default is usually
2 MiB). Theoretically, we can see quite some fragmentation; in practice,
we won't have the region completely fragmented into 1 MiB pieces. Still,
we might end up with many physical ranges.

Both the ELF format and kdump seem ready to support many individual
ranges (e.g., for ELF the limit seems to be UINT32_MAX; kdump uses a
linear bitmap).

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/memory_mapping.c | 45 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index 05e8270edc..c9a3da6b54 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -244,6 +244,35 @@ static void guest_phys_block_add(GuestPhysBlockList *list, MemoryRegion *mr,
 #endif
 }
 
+typedef struct GuestPhysRDL {
+    GuestPhysBlockList *list;
+    RamDiscardListener listener;
+    MemoryRegionSection *section;
+} GuestPhysRDL;
+
+static int guest_phys_ram_discard_notify_populate(RamDiscardListener *listener,
+                                                  const MemoryRegion *const_mr,
+                                                  ram_addr_t offset,
+                                                  ram_addr_t size)
+{
+    GuestPhysRDL *rdl = container_of(listener, GuestPhysRDL, listener);
+    MemoryRegionSection *s = rdl->section;
+    const hwaddr mr_start = MAX(offset, s->offset_within_region);
+    const hwaddr mr_end = MIN(offset + size,
+                              s->offset_within_region + int128_get64(s->size));
+    uint8_t *host_addr;
+
+    if (mr_start >= mr_end) {
+        return 0;
+    }
+
+    host_addr = memory_region_get_ram_ptr(s->mr) + mr_start;
+    guest_phys_block_add(rdl->list, s->mr,
+                         mr_start + s->offset_within_address_space,
+                         mr_end + s->offset_within_address_space, host_addr);
+    return 0;
+}
+
 static void guest_phys_blocks_region_add(MemoryListener *listener,
                                          MemoryRegionSection *section)
 {
@@ -258,6 +287,22 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
         return;
     }
 
+    /* for special sparse regions, only add populated parts */
+    if (memory_region_has_ram_discard_mgr(section->mr)) {
+        RamDiscardMgr *rdm = memory_region_get_ram_discard_mgr(section->mr);
+        RamDiscardMgrClass *rdmc = RAM_DISCARD_MGR_GET_CLASS(rdm);
+        GuestPhysRDL rdl = {
+            .list = g->list,
+            .section = section,
+        };
+
+        ram_discard_listener_init(&rdl.listener,
+                                  guest_phys_ram_discard_notify_populate, NULL,
+                                  NULL);
+        rdmc->replay_populated(rdm, section->mr, &rdl.listener);
+        return;
+    }
+
     target_start = section->offset_within_address_space;
     target_end = target_start + int128_get64(section->size);
     host_addr = memory_region_get_ram_ptr(section->mr) +
-- 
2.29.2




* Re: [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM
  2021-02-10 17:15 ` [PATCH v1 1/5] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
@ 2021-02-14  0:38   ` Stefan Berger
  0 siblings, 0 replies; 7+ messages in thread
From: Stefan Berger @ 2021-02-14  0:38 UTC (permalink / raw)
  To: David Hildenbrand, qemu-devel
  Cc: Laurent Vivier, Thomas Huth, Michael S. Tsirkin, Stefan Berger,
	Dr. David Alan Gilbert, Peter Xu, Alex Williamson,
	Claudio Fontana, Marc-André Lureau, Paolo Bonzini,
	Alex Bennée, Igor Mammedov

On 2/10/21 12:15 PM, David Hildenbrand wrote:
> We might not start at the beginning of the memory region. We could also
> calculate via the difference in the host address; however,
> memory_region_set_dirty() also relies on memory_region_get_ram_addr()
> internally, so let's just use that.
>
> Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Claudio Fontana <cfontana@suse.de>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: "Alex Bennée" <alex.bennee@linaro.org>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Laurent Vivier <lvivier@redhat.com>
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   hw/tpm/tpm_ppi.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
> index 72d7a3d926..e0e2d2c8e1 100644
> --- a/hw/tpm/tpm_ppi.c
> +++ b/hw/tpm/tpm_ppi.c
> @@ -30,11 +30,13 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
>           guest_phys_blocks_init(&guest_phys_blocks);
>           guest_phys_blocks_append(&guest_phys_blocks);
>           QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
> +            ram_addr_t mr_start = memory_region_get_ram_addr(block->mr);
> +
>               trace_tpm_ppi_memset(block->host_addr,
>                                    block->target_end - block->target_start);
>               memset(block->host_addr, 0,
>                      block->target_end - block->target_start);
> -            memory_region_set_dirty(block->mr, 0,
> +            memory_region_set_dirty(block->mr, block->target_start - mr_start,
>                                       block->target_end - block->target_start);
>           }
>           guest_phys_blocks_free(&guest_phys_blocks);

Acked-by: Stefan Berger <stefanb@linux.ibm.com>




