* [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM
2021-07-26 16:03 [PATCH v3 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
@ 2021-07-26 16:03 ` David Hildenbrand
2021-07-26 16:57 ` Peter Xu
2021-07-26 16:03 ` [PATCH v3 2/4] softmmu/memory_mapping: never merge ranges across memory regions David Hildenbrand
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2021-07-26 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Thomas Huth, Stefan Berger, Eduardo Habkost,
Michael S. Tsirkin, David Hildenbrand, Dr . David Alan Gilbert,
Peter Xu, Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
We might not start at the beginning of the memory region. Let's
calculate the offset into the memory region via the difference in the
host addresses.
Acked-by: Stefan Berger <stefanb@linux.ibm.com>
Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
hw/tpm/tpm_ppi.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
index 362edcc5c9..f243d9d0f6 100644
--- a/hw/tpm/tpm_ppi.c
+++ b/hw/tpm/tpm_ppi.c
@@ -30,11 +30,14 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
guest_phys_blocks_init(&guest_phys_blocks);
guest_phys_blocks_append(&guest_phys_blocks);
QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
+ hwaddr mr_offs = (uint8_t *)memory_region_get_ram_ptr(block->mr) -
+ block->host_addr;
+
trace_tpm_ppi_memset(block->host_addr,
block->target_end - block->target_start);
memset(block->host_addr, 0,
block->target_end - block->target_start);
- memory_region_set_dirty(block->mr, 0,
+ memory_region_set_dirty(block->mr, mr_offs,
block->target_end - block->target_start);
}
guest_phys_blocks_free(&guest_phys_blocks);
--
2.31.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM
2021-07-26 16:03 ` [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
@ 2021-07-26 16:57 ` Peter Xu
2021-07-26 16:58 ` David Hildenbrand
0 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-07-26 16:57 UTC (permalink / raw)
To: David Hildenbrand
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
Stefan Berger, qemu-devel, Dr . David Alan Gilbert,
Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
On Mon, Jul 26, 2021 at 06:03:43PM +0200, David Hildenbrand wrote:
> We might not start at the beginning of the memory region. Let's
> calculate the offset into the memory region via the difference in the
> host addresses.
>
> Acked-by: Stefan Berger <stefanb@linux.ibm.com>
> Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Claudio Fontana <cfontana@suse.de>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: "Alex Bennée" <alex.bennee@linaro.org>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Laurent Vivier <lvivier@redhat.com>
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> hw/tpm/tpm_ppi.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
> index 362edcc5c9..f243d9d0f6 100644
> --- a/hw/tpm/tpm_ppi.c
> +++ b/hw/tpm/tpm_ppi.c
> @@ -30,11 +30,14 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
> guest_phys_blocks_init(&guest_phys_blocks);
> guest_phys_blocks_append(&guest_phys_blocks);
> QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
> + hwaddr mr_offs = (uint8_t *)memory_region_get_ram_ptr(block->mr) -
> + block->host_addr;
Didn't look closely previously - should it be reversed instead?
block->host_addr - memory_region_get_ram_ptr(block->mr)
Thanks,
> +
> trace_tpm_ppi_memset(block->host_addr,
> block->target_end - block->target_start);
> memset(block->host_addr, 0,
> block->target_end - block->target_start);
> - memory_region_set_dirty(block->mr, 0,
> + memory_region_set_dirty(block->mr, mr_offs,
> block->target_end - block->target_start);
> }
> guest_phys_blocks_free(&guest_phys_blocks);
> --
> 2.31.1
>
--
Peter Xu
* Re: [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM
2021-07-26 16:57 ` Peter Xu
@ 2021-07-26 16:58 ` David Hildenbrand
0 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2021-07-26 16:58 UTC (permalink / raw)
To: Peter Xu
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
Stefan Berger, qemu-devel, Dr . David Alan Gilbert,
Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
On 26.07.21 18:57, Peter Xu wrote:
> On Mon, Jul 26, 2021 at 06:03:43PM +0200, David Hildenbrand wrote:
>> We might not start at the beginning of the memory region. Let's
>> calculate the offset into the memory region via the difference in the
>> host addresses.
>>
>> Acked-by: Stefan Berger <stefanb@linux.ibm.com>
>> Fixes: ffab1be70692 ("tpm: clear RAM when "memory overwrite" requested")
>> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>> Cc: Alex Williamson <alex.williamson@redhat.com>
>> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Cc: Igor Mammedov <imammedo@redhat.com>
>> Cc: Claudio Fontana <cfontana@suse.de>
>> Cc: Thomas Huth <thuth@redhat.com>
>> Cc: "Alex Bennée" <alex.bennee@linaro.org>
>> Cc: Peter Xu <peterx@redhat.com>
>> Cc: Laurent Vivier <lvivier@redhat.com>
>> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> hw/tpm/tpm_ppi.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
>> index 362edcc5c9..f243d9d0f6 100644
>> --- a/hw/tpm/tpm_ppi.c
>> +++ b/hw/tpm/tpm_ppi.c
>> @@ -30,11 +30,14 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
>> guest_phys_blocks_init(&guest_phys_blocks);
>> guest_phys_blocks_append(&guest_phys_blocks);
>> QTAILQ_FOREACH(block, &guest_phys_blocks.head, next) {
>> + hwaddr mr_offs = (uint8_t *)memory_region_get_ram_ptr(block->mr) -
>> + block->host_addr;
>
> Didn't look closely previously - should it be reversed instead?
>
> block->host_addr - memory_region_get_ram_ptr(block->mr)
Of course it should :(
Thanks! :)
--
Thanks,
David / dhildenb
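Peter's point can be illustrated with a minimal, self-contained sketch. The types and helper below are simplified stand-ins for QEMU's real structures, not the actual API: the offset of a block within its memory region is the block's host address minus the region's RAM start, i.e. the reverse of the subtraction in the posted patch.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t hwaddr;

/* Hypothetical stand-ins for QEMU's MemoryRegion and GuestPhysBlock. */
typedef struct { uint8_t *ram_ptr; } MemoryRegion;
typedef struct { MemoryRegion *mr; uint8_t *host_addr; } GuestPhysBlock;

static uint8_t *memory_region_get_ram_ptr(MemoryRegion *mr)
{
    return mr->ram_ptr;
}

/* Offset of a guest-phys block within its backing memory region:
 * the block's host mapping minus the start of the region's RAM.
 * Reversing the operands would underflow to a huge unsigned value
 * whenever host_addr > ram_ptr. */
static hwaddr block_mr_offset(GuestPhysBlock *block)
{
    return block->host_addr - memory_region_get_ram_ptr(block->mr);
}
```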
* [PATCH v3 2/4] softmmu/memory_mapping: never merge ranges across memory regions
2021-07-26 16:03 [PATCH v3 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
@ 2021-07-26 16:03 ` David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 3/4] softmmu/memory_mapping: factor out adding physical memory ranges David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 4/4] softmmu/memory_mapping: optimize for RamDiscardManager sections David Hildenbrand
3 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2021-07-26 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
David Hildenbrand, Dr . David Alan Gilbert, Peter Xu,
Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
Let's make sure not to merge when different memory regions are involved.
Unlikely, but theoretically possible.
Acked-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
softmmu/memory_mapping.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index e7af276546..d401ca7e31 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -229,7 +229,8 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
/* we want continuity in both guest-physical and host-virtual memory */
if (predecessor->target_end < target_start ||
- predecessor->host_addr + predecessor_size != host_addr) {
+ predecessor->host_addr + predecessor_size != host_addr ||
+ predecessor->mr != section->mr) {
predecessor = NULL;
}
}
--
2.31.1
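The extended continuity check can be sketched in isolation. The types and function below are simplified, hypothetical stand-ins; in QEMU the logic lives inside guest_phys_blocks_region_add(). A section can only be merged into its predecessor when guest-physical addresses are contiguous, host-virtual addresses are contiguous, and, with this patch, the memory region is the same.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;
/* Simplified stand-in; only pointer identity matters here. */
typedef struct { int unused; } MemoryRegion;

typedef struct {
    hwaddr target_start, target_end;
    uint8_t *host_addr;
    MemoryRegion *mr;
} GuestPhysBlock;

/* Mirror of the (inverted) predecessor-reset condition in the patch:
 * merge requires guest-physical continuity, host-virtual continuity,
 * and the same backing memory region. */
static bool can_merge(const GuestPhysBlock *pred, hwaddr target_start,
                      const uint8_t *host_addr, MemoryRegion *mr)
{
    hwaddr pred_size = pred->target_end - pred->target_start;

    return pred->target_end >= target_start &&
           pred->host_addr + pred_size == host_addr &&
           pred->mr == mr;
}
```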
* [PATCH v3 3/4] softmmu/memory_mapping: factor out adding physical memory ranges
2021-07-26 16:03 [PATCH v3 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 1/4] tpm: mark correct memory region range dirty when clearing RAM David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 2/4] softmmu/memory_mapping: never merge ranges across memory regions David Hildenbrand
@ 2021-07-26 16:03 ` David Hildenbrand
2021-07-26 16:03 ` [PATCH v3 4/4] softmmu/memory_mapping: optimize for RamDiscardManager sections David Hildenbrand
3 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2021-07-26 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
David Hildenbrand, Dr . David Alan Gilbert, Peter Xu,
Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
Let's factor out adding a MemoryRegionSection to the list, to be reused in
RamDiscardManager context next.
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
softmmu/memory_mapping.c | 41 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index d401ca7e31..a2af02c41c 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -193,29 +193,14 @@ typedef struct GuestPhysListener {
MemoryListener listener;
} GuestPhysListener;
-static void guest_phys_blocks_region_add(MemoryListener *listener,
+static void guest_phys_block_add_section(GuestPhysListener *g,
MemoryRegionSection *section)
{
- GuestPhysListener *g;
- uint64_t section_size;
- hwaddr target_start, target_end;
- uint8_t *host_addr;
- GuestPhysBlock *predecessor;
-
- /* we only care about RAM */
- if (!memory_region_is_ram(section->mr) ||
- memory_region_is_ram_device(section->mr) ||
- memory_region_is_nonvolatile(section->mr)) {
- return;
- }
-
- g = container_of(listener, GuestPhysListener, listener);
- section_size = int128_get64(section->size);
- target_start = section->offset_within_address_space;
- target_end = target_start + section_size;
- host_addr = memory_region_get_ram_ptr(section->mr) +
- section->offset_within_region;
- predecessor = NULL;
+ const hwaddr target_start = section->offset_within_address_space;
+ const hwaddr target_end = target_start + int128_get64(section->size);
+ uint8_t *host_addr = memory_region_get_ram_ptr(section->mr) +
+ section->offset_within_region;
+ GuestPhysBlock *predecessor = NULL;
/* find continuity in guest physical address space */
if (!QTAILQ_EMPTY(&g->list->head)) {
@@ -261,6 +246,20 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
#endif
}
+static void guest_phys_blocks_region_add(MemoryListener *listener,
+ MemoryRegionSection *section)
+{
+ GuestPhysListener *g = container_of(listener, GuestPhysListener, listener);
+
+ /* we only care about RAM */
+ if (!memory_region_is_ram(section->mr) ||
+ memory_region_is_ram_device(section->mr) ||
+ memory_region_is_nonvolatile(section->mr)) {
+ return;
+ }
+ guest_phys_block_add_section(g, section);
+}
+
void guest_phys_blocks_append(GuestPhysBlockList *list)
{
GuestPhysListener g = { 0 };
--
2.31.1
* [PATCH v3 4/4] softmmu/memory_mapping: optimize for RamDiscardManager sections
2021-07-26 16:03 [PATCH v3 0/4] softmmu/memory_mapping: optimize dump/tpm for virtio-mem David Hildenbrand
` (2 preceding siblings ...)
2021-07-26 16:03 ` [PATCH v3 3/4] softmmu/memory_mapping: factor out adding physical memory ranges David Hildenbrand
@ 2021-07-26 16:03 ` David Hildenbrand
3 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2021-07-26 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, Michael S. Tsirkin,
David Hildenbrand, Dr . David Alan Gilbert, Peter Xu,
Alex Williamson, Claudio Fontana, Paolo Bonzini,
Marc-André Lureau, Alex Bennée, Igor Mammedov,
Stefan Berger
virtio-mem logically plugs/unplugs memory within a sparse memory region
and notifies via the RamDiscardManager interface when parts become
plugged (populated) or unplugged (discarded).
Currently, we end up (via the two users):
1) zeroing all logically unplugged/discarded memory during TPM resets.
2) reading all logically unplugged/discarded memory when dumping, to
figure out that the content is zero.
1) is always bad, because we assume unplugged memory stays discarded
(and is already implicitly zero).
2) isn't that bad with anonymous memory: we just end up reading the zero
page (slow and unnecessary, though). However, once we use
file-backed memory (a future use case), even reading will populate memory.
Let's cut out all parts marked as not-populated (discarded) via the
RamDiscardManager. As virtio-mem is the single user, this now means that
logically unplugged memory ranges will no longer be included in the
dump, which results in smaller dump files and faster dumping.
virtio-mem has a minimum granularity of 1 MiB (and the default is usually
2 MiB). Theoretically, we could see quite some fragmentation; in practice,
memory won't be completely fragmented into 1 MiB pieces. Still, we might
end up with many physical ranges.
Both the ELF format and kdump seem ready to support many individual
ranges (e.g., for ELF the limit seems to be UINT32_MAX; kdump uses a
linear bitmap).
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Thomas Huth <thuth@redhat.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
softmmu/memory_mapping.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index a2af02c41c..a62eaa49cc 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -246,6 +246,15 @@ static void guest_phys_block_add_section(GuestPhysListener *g,
#endif
}
+static int guest_phys_ram_populate_cb(MemoryRegionSection *section,
+ void *opaque)
+{
+ GuestPhysListener *g = opaque;
+
+ guest_phys_block_add_section(g, section);
+ return 0;
+}
+
static void guest_phys_blocks_region_add(MemoryListener *listener,
MemoryRegionSection *section)
{
@@ -257,6 +266,17 @@ static void guest_phys_blocks_region_add(MemoryListener *listener,
memory_region_is_nonvolatile(section->mr)) {
return;
}
+
+ /* for special sparse regions, only add populated parts */
+ if (memory_region_has_ram_discard_manager(section->mr)) {
+ RamDiscardManager *rdm;
+
+ rdm = memory_region_get_ram_discard_manager(section->mr);
+ ram_discard_manager_replay_populated(rdm, section,
+ guest_phys_ram_populate_cb, g);
+ return;
+ }
+
guest_phys_block_add_section(g, section);
}
--
2.31.1
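The replay-populated pattern used above can be sketched with a toy model. This is not QEMU's actual RamDiscardManager implementation; it is a hypothetical stand-in that tracks plugged granules in a flat array and invokes a callback once per maximal populated run, mirroring the shape of ram_discard_manager_replay_populated() with its opaque argument.

```c
#include <stdint.h>

typedef uint64_t hwaddr;

/* Hypothetical sparse section: one plugged/unplugged flag per granule. */
typedef struct {
    hwaddr start, size;     /* the section to scan */
    hwaddr granularity;     /* e.g. 1 MiB for virtio-mem */
    const uint8_t *plugged; /* one flag per granule */
} SparseSection;

typedef int (*ReplayFn)(hwaddr offset, hwaddr size, void *opaque);

/* Invoke `fn` once per maximal run of plugged (populated) granules;
 * stop early if the callback returns non-zero. */
static int replay_populated(const SparseSection *s, ReplayFn fn, void *opaque)
{
    hwaddr n = s->size / s->granularity;

    for (hwaddr i = 0; i < n; ) {
        if (!s->plugged[i]) {
            i++;
            continue;
        }
        hwaddr first = i;
        while (i < n && s->plugged[i]) {
            i++;
        }
        int ret = fn(s->start + first * s->granularity,
                     (i - first) * s->granularity, opaque);
        if (ret) {
            return ret;
        }
    }
    return 0;
}

/* Example callback: accumulate the total populated size. */
static int count_cb(hwaddr offset, hwaddr size, void *opaque)
{
    hwaddr *total = opaque;
    (void)offset;
    *total += size;
    return 0;
}
```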