* [PATCH v2 0/5] target/arm: Reduced-IPA space and highmem=off fixes
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

Here's another stab at enabling QEMU on systems with pathologically
reduced IPA ranges such as the Apple M1 (original version at [1]).
With this series, we're finally able to run a KVM guest with more than
just 3GB of RAM on a system with a 36-bit IPA space, and with up to
123 vCPUs.

This series does a few things:
- decouple the enabling of the highmem PCIe region from the highmem
  attribute
- introduce a new attribute to control the enabling of the highmem
  GICv3 redistributors
- correctly cap the PA range when highmem is off
- generalise the highmem behaviour to any PA range
- disable both highmem PCIe and GICv3 RDs when they are outside of the
  PA range

This has been tested on an M1-based Mac mini running Linux v5.15-rc3.
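
As a rough sketch of where the 3GB and 123 vCPU figures come from, here
is a small standalone program. The constants below follow one reading of
the virt memory map and of the GICv3 redistributor layout (1GiB RAM
base, 0xf60000 bytes for the low redistributor region, two 64kB frames
per redistributor), so treat them as illustrative assumptions rather
than authoritative values:

#include <stdint.h>
#include <stdio.h>

#define GiB (1ULL << 30)

int main(void)
{
    /* Assumed virt constants: RAM starts at 1GiB, the low GICv3
     * redistributor region is 0xf60000 bytes, and one redistributor
     * occupies two 64kB frames. */
    uint64_t ram_base      = 1 * GiB;
    uint64_t redist0_size  = 0xf60000;
    uint64_t redist_stride = 0x20000;

    /* A 36-bit IPA space only covers 64GiB... */
    printf("36-bit IPA space   : %llu GiB\n",
           (unsigned long long)((1ULL << 36) / GiB));

    /* ...and with highmem=off everything must sit below 4GiB, which
     * caps the RAM at 3GiB. */
    printf("max RAM below 4GiB : %llu GiB\n",
           (unsigned long long)((4 * GiB - ram_base) / GiB));

    /* Without the high redistributor region, the low region alone
     * bounds the number of vCPUs. */
    printf("vCPUs in low redist: %llu\n",
           (unsigned long long)(redist0_size / redist_stride));

    return 0;
}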

[1] https://lore.kernel.org/r/20210822144441.1290891-1-maz@kernel.org

Marc Zyngier (5):
  hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  hw/arm/virt: Add a control for the highmem redistributors
  hw/arm/virt: Honor highmem setting when computing the memory map
  hw/arm/virt: Use the PA range to compute the memory map
  hw/arm/virt: Disable highmem devices that don't fit in the PA range

 hw/arm/virt-acpi-build.c | 12 ++++-----
 hw/arm/virt.c            | 53 ++++++++++++++++++++++++++++++++++------
 include/hw/arm/virt.h    |  4 ++-
 3 files changed, 54 insertions(+), 15 deletions(-)

-- 
2.30.2


* [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

Currently, the highmem PCIe region is oddly keyed on the highmem
attribute instead of highmem_ecam. Move the enablement of this PCIe
region over to highmem_ecam.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 hw/arm/virt-acpi-build.c | 10 ++++------
 hw/arm/virt.c            |  4 ++--
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 037cc1fd82..d7bef0e627 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
 }
 
 static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
-                              uint32_t irq, bool use_highmem, bool highmem_ecam,
-                              VirtMachineState *vms)
+                              uint32_t irq, VirtMachineState *vms)
 {
-    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
+    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
     struct GPEXConfig cfg = {
         .mmio32 = memmap[VIRT_PCIE_MMIO],
         .pio    = memmap[VIRT_PCIE_PIO],
@@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
         .bus    = vms->bus,
     };
 
-    if (use_highmem) {
+    if (vms->highmem_ecam) {
         cfg.mmio64 = memmap[VIRT_HIGH_PCIE_MMIO];
     }
 
@@ -712,8 +711,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     acpi_dsdt_add_fw_cfg(scope, &memmap[VIRT_FW_CFG]);
     acpi_dsdt_add_virtio(scope, &memmap[VIRT_MMIO],
                     (irqmap[VIRT_MMIO] + ARM_SPI_BASE), NUM_VIRTIO_TRANSPORTS);
-    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE),
-                      vms->highmem, vms->highmem_ecam, vms);
+    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE), vms);
     if (vms->acpi_dev) {
         build_ged_aml(scope, "\\_SB."GED_DEVICE,
                       HOTPLUG_HANDLER(vms->acpi_dev),
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 7170aaacd5..8021d545c3 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1362,7 +1362,7 @@ static void create_pcie(VirtMachineState *vms)
                              mmio_reg, base_mmio, size_mmio);
     memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
 
-    if (vms->highmem) {
+    if (vms->highmem_ecam) {
         /* Map high MMIO space */
         MemoryRegion *high_mmio_alias = g_new0(MemoryRegion, 1);
 
@@ -1416,7 +1416,7 @@ static void create_pcie(VirtMachineState *vms)
     qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
                                  2, base_ecam, 2, size_ecam);
 
-    if (vms->highmem) {
+    if (vms->highmem_ecam) {
         qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "ranges",
                                      1, FDT_PCI_RANGE_IOPORT, 2, 0,
                                      2, base_pio, 2, size_pio,
-- 
2.30.2


* [PATCH v2 2/5] hw/arm/virt: Add a control for the highmem redistributors
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

Just like we can control the enablement of the highmem PCIe region
using highmem_ecam, let's add a control for the highmem GICv3
redistributor region.

Similarly to highmem_ecam, these redistributors are disabled when
highmem is off.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 hw/arm/virt-acpi-build.c | 2 ++
 hw/arm/virt.c            | 3 +++
 include/hw/arm/virt.h    | 4 +++-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index d7bef0e627..f0d0b662b7 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -792,6 +792,8 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
     acpi_add_table(table_offsets, tables_blob);
     build_fadt_rev5(tables_blob, tables->linker, vms, dsdt);
 
+    vms->highmem_redists &= vms->highmem;
+
     acpi_add_table(table_offsets, tables_blob);
     build_madt(tables_blob, tables->linker, vms);
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 8021d545c3..bcf58f677d 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2053,6 +2053,8 @@ static void machvirt_init(MachineState *machine)
 
     virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
 
+    vms->highmem_redists &= vms->highmem;
+
     create_gic(vms, sysmem);
 
     virt_cpu_post_init(vms, sysmem);
@@ -2750,6 +2752,7 @@ static void virt_instance_init(Object *obj)
     vms->gic_version = VIRT_GIC_VERSION_NOSEL;
 
     vms->highmem_ecam = !vmc->no_highmem_ecam;
+    vms->highmem_redists = true;
 
     if (vmc->no_its) {
         vms->its = false;
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index b461b8d261..787cc8a27d 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -141,6 +141,7 @@ struct VirtMachineState {
     bool secure;
     bool highmem;
     bool highmem_ecam;
+    bool highmem_redists;
     bool its;
     bool tcg_its;
     bool virt;
@@ -187,7 +188,8 @@ static inline int virt_gicv3_redist_region_count(VirtMachineState *vms)
 
     assert(vms->gic_version == VIRT_GIC_VERSION_3);
 
-    return MACHINE(vms)->smp.cpus > redist0_capacity ? 2 : 1;
+    return (MACHINE(vms)->smp.cpus > redist0_capacity &&
+            vms->highmem_redists) ? 2 : 1;
 }
 
 #endif /* QEMU_ARM_VIRT_H */
-- 
2.30.2


* [PATCH v2 3/5] hw/arm/virt: Honor highmem setting when computing the memory map
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

Even when the VM is configured with highmem=off, the highest_gpa
field includes devices that are above the 4GiB limit.
Similarly, nothing seems to check that the memory fits within
the limit set by the highmem=off option.

This leads to failures in virt_kvm_type() on systems that have
a crippled IPA range, as the reported IPA space is larger than
it should be.

Instead, honor the user-specified limit by only using the devices
at the low end of the address space, and fail if the memory
crosses the 4GiB limit.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 hw/arm/virt.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index bcf58f677d..9d2abdbd5f 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1628,6 +1628,11 @@ static void virt_set_memmap(VirtMachineState *vms)
         exit(EXIT_FAILURE);
     }
 
+    if (!vms->highmem &&
+        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
+        error_report("highmem=off, but memory crosses the 4GiB limit\n");
+        exit(EXIT_FAILURE);
+    }
     /*
      * We compute the base of the high IO region depending on the
      * amount of initial and device memory. The device memory start/size
@@ -1657,7 +1662,9 @@ static void virt_set_memmap(VirtMachineState *vms)
         vms->memmap[i].size = size;
         base += size;
     }
-    vms->highest_gpa = base - 1;
+    vms->highest_gpa = (vms->highmem ?
+                        base :
+                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
     if (device_memory_size > 0) {
         ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
         ms->device_memory->base = device_memory_base;
-- 
2.30.2
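
As a quick numerical illustration of the check and of the capped
highest_gpa above, here is a self-contained sketch. The 1GiB RAM base
matches the virt board, while the RAM size and the 256GiB stand-in for
the end of the high IO regions are made-up example values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GiB (1ULL << 30)

int main(void)
{
    bool highmem              = false;
    uint64_t virt_mem_base    = 1 * GiB;    /* assumed VIRT_MEM base */
    uint64_t maxram_size      = 2 * GiB;    /* e.g. -m 2G */
    uint64_t high_regions_end = 256 * GiB;  /* stand-in for 'base' */

    if (!highmem && virt_mem_base + maxram_size > 4 * GiB) {
        /* e.g. 1GiB base + 4GiB of RAM would trip this and exit */
        fprintf(stderr, "highmem=off, but memory crosses the 4GiB limit\n");
        return 1;
    }

    /* With highmem=off, the map now stops at the end of RAM instead
     * of including the unreachable high IO regions. */
    uint64_t highest_gpa = (highmem ? high_regions_end
                                    : virt_mem_base + maxram_size) - 1;

    printf("highest_gpa = 0x%llx\n",
           (unsigned long long)highest_gpa);  /* 0xbfffffff */
    return 0;
}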


* [PATCH v2 4/5] hw/arm/virt: Use the PA range to compute the memory map
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

The highmem attribute is nothing but another way to express the
PA range of a VM. To support HW that has a smaller PA range than
what QEMU assumes, pass this PA range to the virt_set_memmap()
function, allowing it to correctly exclude highmem devices
if they are outside of the PA range.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 hw/arm/virt.c | 46 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 9d2abdbd5f..a572e0c9d9 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1610,10 +1610,10 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
     return arm_cpu_mp_affinity(idx, clustersz);
 }
 
-static void virt_set_memmap(VirtMachineState *vms)
+static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
 {
     MachineState *ms = MACHINE(vms);
-    hwaddr base, device_memory_base, device_memory_size;
+    hwaddr base, device_memory_base, device_memory_size, memtop;
     int i;
 
     vms->memmap = extended_memmap;
@@ -1628,9 +1628,12 @@ static void virt_set_memmap(VirtMachineState *vms)
         exit(EXIT_FAILURE);
     }
 
-    if (!vms->highmem &&
-        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
-        error_report("highmem=off, but memory crosses the 4GiB limit\n");
+    if (!vms->highmem)
+	    pa_bits = 32;
+
+    if (vms->memmap[VIRT_MEM].base + ms->maxram_size > BIT_ULL(pa_bits)) {
+	    error_report("Addressing limited to %d bits, but memory exceeds it by %llu bytes\n",
+			 pa_bits, vms->memmap[VIRT_MEM].base + ms->maxram_size - BIT_ULL(pa_bits));
         exit(EXIT_FAILURE);
     }
     /*
@@ -1645,7 +1648,7 @@ static void virt_set_memmap(VirtMachineState *vms)
     device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
 
     /* Base address of the high IO region */
-    base = device_memory_base + ROUND_UP(device_memory_size, GiB);
+    memtop = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
     if (base < device_memory_base) {
         error_report("maxmem/slots too huge");
         exit(EXIT_FAILURE);
@@ -1662,9 +1665,17 @@ static void virt_set_memmap(VirtMachineState *vms)
         vms->memmap[i].size = size;
         base += size;
     }
-    vms->highest_gpa = (vms->highmem ?
-                        base :
-                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
+
+    /*
+     * If base fits within pa_bits, all good. If it doesn't, limit it
+     * to the end of RAM, which is guaranteed to fit within pa_bits.
+     */
+    if (base <= BIT_ULL(pa_bits)) {
+        vms->highest_gpa = base -1;
+    } else {
+        vms->highest_gpa = memtop - 1;
+    }
+
     if (device_memory_size > 0) {
         ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
         ms->device_memory->base = device_memory_base;
@@ -1860,7 +1871,20 @@ static void machvirt_init(MachineState *machine)
      * to create a VM with the right number of IPA bits.
      */
     if (!vms->memmap) {
-        virt_set_memmap(vms);
+        ARMCPU *armcpu = ARM_CPU(first_cpu);
+        int pa_bits;
+
+        if (object_property_get_bool(OBJECT(first_cpu), "aarch64", NULL)) {
+            pa_bits = arm_pamax(armcpu);
+        } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)) {
+            /* v7 with LPAE */
+            pa_bits = 40;
+        } else {
+            /* Anything else */
+            pa_bits = 32;
+        }
+
+        virt_set_memmap(vms, pa_bits);
     }
 
     /* We can probe only here because during property set
@@ -2596,7 +2620,7 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
     max_vm_pa_size = kvm_arm_get_max_vm_ipa_size(ms, &fixed_ipa);
 
     /* we freeze the memory map to compute the highest gpa */
-    virt_set_memmap(vms);
+    virt_set_memmap(vms, max_vm_pa_size);
 
     requested_pa_size = 64 - clz64(vms->highest_gpa);
 
-- 
2.30.2
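
To see how a limited PA range propagates to the IPA size that ends up
being requested from KVM, here is a small sketch mirroring the
64 - clz64(highest_gpa) computation in virt_kvm_type(). The 36-bit
limit and the RAM/high-region figures are assumed example values (a
36-bit host being the Apple M1 case from the cover letter):

#include <stdint.h>
#include <stdio.h>

#define GiB (1ULL << 30)

static int clz64(uint64_t v)
{
    return v ? __builtin_clzll(v) : 64;
}

int main(void)
{
    int pa_bits     = 36;                 /* assumed host limit */
    uint64_t limit  = 1ULL << pa_bits;    /* 64GiB of IPA space */
    uint64_t memtop = 1 * GiB + 8 * GiB;  /* assumed end of RAM */
    uint64_t base   = 256 * GiB;          /* assumed end of high IO */

    /* The high IO regions don't fit below the limit, so the memory
     * map is capped at the end of RAM... */
    uint64_t highest_gpa = (base <= limit ? base : memtop) - 1;

    /* ...and the VM only asks KVM for the IPA bits it really needs. */
    int requested_pa_size = 64 - clz64(highest_gpa);

    printf("highest_gpa = 0x%llx, requested IPA bits = %d\n",
           (unsigned long long)highest_gpa, requested_pa_size);
    /* prints: highest_gpa = 0x23fffffff, requested IPA bits = 34 */
    return 0;
}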


* [PATCH v2 5/5] hw/arm/virt: Disable highmem devices that don't fit in the PA range
@ 2021-10-03 16:46 ` Marc Zyngier
From: Marc Zyngier @ 2021-10-03 16:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Andrew Jones, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

Make sure both the highmem PCIe and GICv3 redistributor regions are
disabled when they don't fully fit in the PA range.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 hw/arm/virt.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index a572e0c9d9..756f67b6c8 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1673,6 +1673,9 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
     if (base <= BIT_ULL(pa_bits)) {
         vms->highest_gpa = base -1;
     } else {
+        /* Advertise that we have disabled the highmem devices */
+        vms->highmem_ecam = false;
+        vms->highmem_redists = false;
         vms->highest_gpa = memtop - 1;
     }
 
-- 
2.30.2


* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
@ 2021-10-04  9:41     ` Andrew Jones
From: Andrew Jones @ 2021-10-04  9:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:01PM +0100, Marc Zyngier wrote:
> Currently, the highmem PCIe region is oddly keyed on the highmem
> attribute instead of highmem_ecam. Move the enablement of this PCIe
> region over to highmem_ecam.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt-acpi-build.c | 10 ++++------
>  hw/arm/virt.c            |  4 ++--
>  2 files changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index 037cc1fd82..d7bef0e627 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
>  }
>  
>  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
> -                              VirtMachineState *vms)
> +                              uint32_t irq, VirtMachineState *vms)
>  {
> -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
> +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
>      struct GPEXConfig cfg = {
>          .mmio32 = memmap[VIRT_PCIE_MMIO],
>          .pio    = memmap[VIRT_PCIE_PIO],
> @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>          .bus    = vms->bus,
>      };
>  
> -    if (use_highmem) {
> +    if (vms->highmem_ecam) {
>          cfg.mmio64 = memmap[VIRT_HIGH_PCIE_MMIO];
>      }
>  
> @@ -712,8 +711,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>      acpi_dsdt_add_fw_cfg(scope, &memmap[VIRT_FW_CFG]);
>      acpi_dsdt_add_virtio(scope, &memmap[VIRT_MMIO],
>                      (irqmap[VIRT_MMIO] + ARM_SPI_BASE), NUM_VIRTIO_TRANSPORTS);
> -    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE),
> -                      vms->highmem, vms->highmem_ecam, vms);
> +    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE), vms);

While tidying this interface, could also remove the superfluous ().

>      if (vms->acpi_dev) {
>          build_ged_aml(scope, "\\_SB."GED_DEVICE,
>                        HOTPLUG_HANDLER(vms->acpi_dev),
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 7170aaacd5..8021d545c3 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1362,7 +1362,7 @@ static void create_pcie(VirtMachineState *vms)
>                               mmio_reg, base_mmio, size_mmio);
>      memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>  
> -    if (vms->highmem) {
> +    if (vms->highmem_ecam) {
>          /* Map high MMIO space */
>          MemoryRegion *high_mmio_alias = g_new0(MemoryRegion, 1);
>  
> @@ -1416,7 +1416,7 @@ static void create_pcie(VirtMachineState *vms)
>      qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
>                                   2, base_ecam, 2, size_ecam);
>  
> -    if (vms->highmem) {
> +    if (vms->highmem_ecam) {
>          qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "ranges",
>                                       1, FDT_PCI_RANGE_IOPORT, 2, 0,
>                                       2, base_pio, 2, size_pio,
> -- 
> 2.30.2
> 


Reviewed-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew


* Re: [PATCH v2 2/5] hw/arm/virt: Add a control for the the highmem redistributors
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04  9:44     ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04  9:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:02PM +0100, Marc Zyngier wrote:
> Just like we can control the enablement of the highmem PCIe region
> using highmem_ecam, let's add a control for the highmem GICv3
> redistributor region.
> 
> Similarly to highmem_ecam, these redistributors are disabled when
> highmem is off.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt-acpi-build.c | 2 ++
>  hw/arm/virt.c            | 3 +++
>  include/hw/arm/virt.h    | 4 +++-
>  3 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index d7bef0e627..f0d0b662b7 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -792,6 +792,8 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
>      acpi_add_table(table_offsets, tables_blob);
>      build_fadt_rev5(tables_blob, tables->linker, vms, dsdt);
>  
> +    vms->highmem_redists &= vms->highmem;
> +
>      acpi_add_table(table_offsets, tables_blob);
>      build_madt(tables_blob, tables->linker, vms);
>  
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 8021d545c3..bcf58f677d 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -2053,6 +2053,8 @@ static void machvirt_init(MachineState *machine)
>  
>      virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
>  
> +    vms->highmem_redists &= vms->highmem;
> +
>      create_gic(vms, sysmem);
>  
>      virt_cpu_post_init(vms, sysmem);
> @@ -2750,6 +2752,7 @@ static void virt_instance_init(Object *obj)
>      vms->gic_version = VIRT_GIC_VERSION_NOSEL;
>  
>      vms->highmem_ecam = !vmc->no_highmem_ecam;
> +    vms->highmem_redists = true;
>  
>      if (vmc->no_its) {
>          vms->its = false;
> diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
> index b461b8d261..787cc8a27d 100644
> --- a/include/hw/arm/virt.h
> +++ b/include/hw/arm/virt.h
> @@ -141,6 +141,7 @@ struct VirtMachineState {
>      bool secure;
>      bool highmem;
>      bool highmem_ecam;
> +    bool highmem_redists;
>      bool its;
>      bool tcg_its;
>      bool virt;
> @@ -187,7 +188,8 @@ static inline int virt_gicv3_redist_region_count(VirtMachineState *vms)
>  
>      assert(vms->gic_version == VIRT_GIC_VERSION_3);
>  
> -    return MACHINE(vms)->smp.cpus > redist0_capacity ? 2 : 1;
> +    return (MACHINE(vms)->smp.cpus > redist0_capacity &&
> +            vms->highmem_redists) ? 2 : 1;

Wouldn't it be equivalent to just use vms->highmem here?
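I.e., something like this (sketch only, to illustrate the question):

    return (MACHINE(vms)->smp.cpus > redist0_capacity &&
            vms->highmem) ? 2 : 1;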

>  }
>  
>  #endif /* QEMU_ARM_VIRT_H */
> -- 
> 2.30.2
> 

Thanks,
drew


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 3/5] hw/arm/virt: Honor highmem setting when computing the memory map
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04  9:44     ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04  9:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:03PM +0100, Marc Zyngier wrote:
> Even when the VM is configured with highmem=off, the highest_gpa
> field includes devices that are above the 4GiB limit.
> Similarly, nothing seems to check that the memory is within
> the limit set by the highmem=off option.
> 
> This leads to failures in virt_kvm_type() on systems that have
> a crippled IPA range, as the reported IPA space is larger than
> what it should be.
> 
> Instead, honor the user-specified limit to only use the devices
> at the lowest end of the spectrum, and fail if we have memory
> crossing the 4GiB limit.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index bcf58f677d..9d2abdbd5f 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1628,6 +1628,11 @@ static void virt_set_memmap(VirtMachineState *vms)
>          exit(EXIT_FAILURE);
>      }
>  
> +    if (!vms->highmem &&
> +        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> +        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> +        exit(EXIT_FAILURE);
> +    }
>      /*
>       * We compute the base of the high IO region depending on the
>       * amount of initial and device memory. The device memory start/size
> @@ -1657,7 +1662,9 @@ static void virt_set_memmap(VirtMachineState *vms)
>          vms->memmap[i].size = size;
>          base += size;
>      }
> -    vms->highest_gpa = base - 1;
> +    vms->highest_gpa = (vms->highmem ?
> +                        base :
> +                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
>      if (device_memory_size > 0) {
>          ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
>          ms->device_memory->base = device_memory_base;
> -- 
> 2.30.2
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 4/5] hw/arm/virt: Use the PA range to compute the memory map
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04 10:11     ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04 10:11 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:04PM +0100, Marc Zyngier wrote:
> The highmem attribute is nothing but another way to express the
> PA range of a VM. To support HW that has a smaller PA range than
> what QEMU assumes, pass this PA range to the virt_set_memmap()
> function, allowing it to correctly exclude highmem devices
> if they are outside of the PA range.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt.c | 46 +++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 35 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 9d2abdbd5f..a572e0c9d9 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1610,10 +1610,10 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
>      return arm_cpu_mp_affinity(idx, clustersz);
>  }
>  
> -static void virt_set_memmap(VirtMachineState *vms)
> +static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
>  {
>      MachineState *ms = MACHINE(vms);
> -    hwaddr base, device_memory_base, device_memory_size;
> +    hwaddr base, device_memory_base, device_memory_size, memtop;
>      int i;
>  
>      vms->memmap = extended_memmap;
> @@ -1628,9 +1628,12 @@ static void virt_set_memmap(VirtMachineState *vms)
>          exit(EXIT_FAILURE);
>      }
>  
> -    if (!vms->highmem &&
> -        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> -        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> +    if (!vms->highmem)
> +	    pa_bits = 32;
> +
> +    if (vms->memmap[VIRT_MEM].base + ms->maxram_size > BIT_ULL(pa_bits)) {
> +	    error_report("Addressing limited to %d bits, but memory exceeds it by %llu bytes\n",
> +			 pa_bits, vms->memmap[VIRT_MEM].base + ms->maxram_size - BIT_ULL(pa_bits));
>          exit(EXIT_FAILURE);
>      }
>      /*
> @@ -1645,7 +1648,7 @@ static void virt_set_memmap(VirtMachineState *vms)
>      device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
>  
>      /* Base address of the high IO region */
> -    base = device_memory_base + ROUND_UP(device_memory_size, GiB);
> +    memtop = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
>      if (base < device_memory_base) {
>          error_report("maxmem/slots too huge");
>          exit(EXIT_FAILURE);
> @@ -1662,9 +1665,17 @@ static void virt_set_memmap(VirtMachineState *vms)
>          vms->memmap[i].size = size;
>          base += size;
>      }
> -    vms->highest_gpa = (vms->highmem ?
> -                        base :
> -                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
> +
> +    /*
> +     * If base fits within pa_bits, all good. If it doesn't, limit it
> +     * to the end of RAM, which is guaranteed to fit within pa_bits.

We tested that

  vms->memmap[VIRT_MEM].base + ms->maxram_size

fits within pa_bits, but here we're setting highest_gpa to

  ROUND_UP(vms->memmap[VIRT_MEM].base + ms->ram_size, GiB) +
  ROUND_UP(ms->maxram_size - ms->ram_size + ms->ram_slots * GiB, GiB)

which will be larger. Shouldn't we test memtop instead to make this
guarantee?
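Something along these lines, maybe (untested sketch, reusing the names
introduced by this patch):

    memtop = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
    if (memtop > BIT_ULL(pa_bits)) {
        error_report("Addressing limited to %d bits, but memory exceeds it by %llu bytes",
                     pa_bits, memtop - BIT_ULL(pa_bits));
        exit(EXIT_FAILURE);
    }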


> +     */
> +    if (base <= BIT_ULL(pa_bits)) {
> +        vms->highest_gpa = base -1;
> +    } else {
> +        vms->highest_gpa = memtop - 1;
> +    }
> +
>      if (device_memory_size > 0) {
>          ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
>          ms->device_memory->base = device_memory_base;
> @@ -1860,7 +1871,20 @@ static void machvirt_init(MachineState *machine)
>       * to create a VM with the right number of IPA bits.
>       */
>      if (!vms->memmap) {
> -        virt_set_memmap(vms);
> +        ARMCPU *armcpu = ARM_CPU(first_cpu);


I think it's too early to use first_cpu here (although, I'll admit I'm
always confused as to what gets initialized when...) Assuming we need to
realize the cpus first, then we don't do that until a bit further down
in this function. I wonder if it's possible to delay this memmap setup
until after cpu realization. I see the memmap getting used prior when
calculating virt_max_cpus, but that looks like it needs to be updated
anyway to take highmem into account as to whether or not we should
consider the high gicv3 redist region in the calculation.

> +        int pa_bits;
> +
> +        if (object_property_get_bool(OBJECT(first_cpu), "aarch64", NULL)) {
> +            pa_bits = arm_pamax(armcpu);
> +        } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)) {
> +            /* v7 with LPAE */
> +            pa_bits = 40;
> +        } else {
> +            /* Anything else */
> +            pa_bits = 32;
> +        }
> +
> +        virt_set_memmap(vms, pa_bits);
>      }
>  
>      /* We can probe only here because during property set
> @@ -2596,7 +2620,7 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
>      max_vm_pa_size = kvm_arm_get_max_vm_ipa_size(ms, &fixed_ipa);
>  
>      /* we freeze the memory map to compute the highest gpa */
> -    virt_set_memmap(vms);
> +    virt_set_memmap(vms, max_vm_pa_size);
>  
>      requested_pa_size = 64 - clz64(vms->highest_gpa);
>  
> -- 
> 2.30.2
> 

Thanks,
drew


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 5/5] hw/arm/virt: Disable highmem devices that don't fit in the PA range
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04 10:12     ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04 10:12 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:05PM +0100, Marc Zyngier wrote:
> Make sure both the highmem PCIe and GICv3 regions are disabled when
> they don't fully fit in the PA range.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index a572e0c9d9..756f67b6c8 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1673,6 +1673,9 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
>      if (base <= BIT_ULL(pa_bits)) {
>          vms->highest_gpa = base -1;
>      } else {
> +        /* Advertise that we have disabled the highmem devices */
> +        vms->highmem_ecam = false;
> +        vms->highmem_redists = false;
>          vms->highest_gpa = memtop - 1;
>      }
>  
> -- 
> 2.30.2
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 2/5] hw/arm/virt: Add a control for the the highmem redistributors
  2021-10-04  9:44     ` Andrew Jones
@ 2021-10-04 10:14       ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04 10:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Mon, Oct 04, 2021 at 11:44:08AM +0200, Andrew Jones wrote:
> On Sun, Oct 03, 2021 at 05:46:02PM +0100, Marc Zyngier wrote:
...
> >  
> > -    return MACHINE(vms)->smp.cpus > redist0_capacity ? 2 : 1;
> > +    return (MACHINE(vms)->smp.cpus > redist0_capacity &&
> > +            vms->highmem_redists) ? 2 : 1;
> 
> Wouldn't it be equivalent to just use vms->highmem here?

OK, I see in the last patch that we may disable highmem_redists
but not highmem.
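I.e., this hunk from patch 5, which clears the new flag while leaving
highmem itself untouched:

    /* Advertise that we have disabled the highmem devices */
    vms->highmem_ecam = false;
    vms->highmem_redists = false;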

In that case,

Reviewed-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 4/5] hw/arm/virt: Use the PA range to compute the memory map
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04 10:15     ` Andrew Jones
  -1 siblings, 0 replies; 69+ messages in thread
From: Andrew Jones @ 2021-10-04 10:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Eric Auger, Peter Maydell, kvmarm, kvm, kernel-team

On Sun, Oct 03, 2021 at 05:46:04PM +0100, Marc Zyngier wrote:
...
> @@ -1662,9 +1665,17 @@ static void virt_set_memmap(VirtMachineState *vms)
>          vms->memmap[i].size = size;
>          base += size;
>      }
> -    vms->highest_gpa = (vms->highmem ?
> -                        base :
> -                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
> +
> +    /*
> +     * If base fits within pa_bits, all good. If it doesn't, limit it
> +     * to the end of RAM, which is guaranteed to fit within pa_bits.
> +     */
> +    if (base <= BIT_ULL(pa_bits)) {
> +        vms->highest_gpa = base -1;
                                    ^ missing space here
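  (i.e. it should read: vms->highest_gpa = base - 1;)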

> +    } else {
> +        vms->highest_gpa = memtop - 1;
> +    }
> +
>      if (device_memory_size > 0) {
>          ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
>          ms->device_memory->base = device_memory_base;

Thanks,
drew


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04 12:00     ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2021-10-04 12:00 UTC (permalink / raw)
  To: Marc Zyngier, qemu-devel
  Cc: Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Marc,

On 10/3/21 6:46 PM, Marc Zyngier wrote:
> Currently, the highmem PCIe region is oddly keyed on the highmem
> attribute instead of highmem_ecam. Move the enablement of this PCIe
> region over to highmem_ecam.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt-acpi-build.c | 10 ++++------
>  hw/arm/virt.c            |  4 ++--
>  2 files changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index 037cc1fd82..d7bef0e627 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
>  }
>  
>  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
> -                              VirtMachineState *vms)
> +                              uint32_t irq, VirtMachineState *vms)
>  {
> -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
> +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
>      struct GPEXConfig cfg = {
>          .mmio32 = memmap[VIRT_PCIE_MMIO],
>          .pio    = memmap[VIRT_PCIE_PIO],
> @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>          .bus    = vms->bus,
>      };
>  
> -    if (use_highmem) {
> +    if (vms->highmem_ecam) {
highmem_ecam is more restrictive than use_highmem:
vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);

If I remember correctly there was a problem using highmem ECAM with 32b
AAVMF FW.

However, 5125f9cd2532 ("hw/arm/virt: Add high MMIO PCI region, 512G in
size") introduced the high MMIO PCI region without this constraint.

So to me we should keep vms->highmem here
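I.e. keep this hunk keyed on vms->highmem, e.g. (just a sketch):

    if (vms->highmem) {
        cfg.mmio64 = memmap[VIRT_HIGH_PCIE_MMIO];
    }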

Eric

>          cfg.mmio64 = memmap[VIRT_HIGH_PCIE_MMIO];
>      }
>  
> @@ -712,8 +711,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>      acpi_dsdt_add_fw_cfg(scope, &memmap[VIRT_FW_CFG]);
>      acpi_dsdt_add_virtio(scope, &memmap[VIRT_MMIO],
>                      (irqmap[VIRT_MMIO] + ARM_SPI_BASE), NUM_VIRTIO_TRANSPORTS);
> -    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE),
> -                      vms->highmem, vms->highmem_ecam, vms);
> +    acpi_dsdt_add_pci(scope, memmap, (irqmap[VIRT_PCIE] + ARM_SPI_BASE), vms);
>      if (vms->acpi_dev) {
>          build_ged_aml(scope, "\\_SB."GED_DEVICE,
>                        HOTPLUG_HANDLER(vms->acpi_dev),
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 7170aaacd5..8021d545c3 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1362,7 +1362,7 @@ static void create_pcie(VirtMachineState *vms)
>                               mmio_reg, base_mmio, size_mmio);
>      memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>  
> -    if (vms->highmem) {
> +    if (vms->highmem_ecam) {
>          /* Map high MMIO space */
>          MemoryRegion *high_mmio_alias = g_new0(MemoryRegion, 1);
>  
> @@ -1416,7 +1416,7 @@ static void create_pcie(VirtMachineState *vms)
>      qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
>                                   2, base_ecam, 2, size_ecam);
>  
> -    if (vms->highmem) {
> +    if (vms->highmem_ecam) {
>          qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "ranges",
>                                       1, FDT_PCI_RANGE_IOPORT, 2, 0,
>                                       2, base_pio, 2, size_pio,


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 3/5] hw/arm/virt: Honor highmem setting when computing the memory map
  2021-10-03 16:46   ` Marc Zyngier
@ 2021-10-04 12:23     ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2021-10-04 12:23 UTC (permalink / raw)
  To: Marc Zyngier, qemu-devel
  Cc: Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Marc,

On 10/3/21 6:46 PM, Marc Zyngier wrote:
> Even when the VM is configured with highmem=off, the highest_gpa
> field includes devices that are above the 4GiB limit.
> Similarly, nothing seems to check that the memory is within
> the limit set by the highmem=off option.
>
> This leads to failures in virt_kvm_type() on systems that have
> a crippled IPA range, as the reported IPA space is larger than
> what it should be.
>
> Instead, honor the user-specified limit to only use the devices
> at the lowest end of the spectrum, and fail if we have memory
> crossing the 4GiB limit.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  hw/arm/virt.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index bcf58f677d..9d2abdbd5f 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1628,6 +1628,11 @@ static void virt_set_memmap(VirtMachineState *vms)
>          exit(EXIT_FAILURE);
>      }
>  
> +    if (!vms->highmem &&
> +        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> +        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> +        exit(EXIT_FAILURE);
> +    }
>      /*
>       * We compute the base of the high IO region depending on the
>       * amount of initial and device memory. The device memory start/size
> @@ -1657,7 +1662,9 @@ static void virt_set_memmap(VirtMachineState *vms)
>          vms->memmap[i].size = size;
>          base += size;
>      }
> -    vms->highest_gpa = base - 1;
> +    vms->highest_gpa = (vms->highmem ?
> +                        base :
> +                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
I think I would have preferred to have

if (vms->highmem) {
   for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
        hwaddr size = extended_memmap[i].size;

        base = ROUND_UP(base, size);
        vms->memmap[i].base = base;
        vms->memmap[i].size = size;
        base += size;
    }
}
as it is useless to execute that code and create new memmap entries in
case of !highmem.

But nevertheless, this looks correct

Eric
>      if (device_memory_size > 0) {
>          ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
>          ms->device_memory->base = device_memory_base;


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2021-10-04 12:00     ` Eric Auger
  (?)
@ 2021-12-27 15:53       ` Marc Zyngier
  -1 siblings, 0 replies; 69+ messages in thread
From: Marc Zyngier @ 2021-12-27 15:53 UTC (permalink / raw)
  To: eric.auger; +Cc: kvm, qemu-devel, kernel-team, kvmarm

Hi Eric,

Picking this up again after a stupidly long time...

On Mon, 04 Oct 2021 13:00:21 +0100,
Eric Auger <eric.auger@redhat.com> wrote:
> 
> Hi Marc,
> 
> On 10/3/21 6:46 PM, Marc Zyngier wrote:
> > Currently, the highmem PCIe region is oddly keyed on the highmem
> > attribute instead of highmem_ecam. Move the enablement of this PCIe
> > region over to highmem_ecam.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  hw/arm/virt-acpi-build.c | 10 ++++------
> >  hw/arm/virt.c            |  4 ++--
> >  2 files changed, 6 insertions(+), 8 deletions(-)
> >
> > diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> > index 037cc1fd82..d7bef0e627 100644
> > --- a/hw/arm/virt-acpi-build.c
> > +++ b/hw/arm/virt-acpi-build.c
> > @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
> >  }
> >  
> >  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> > -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
> > -                              VirtMachineState *vms)
> > +                              uint32_t irq, VirtMachineState *vms)
> >  {
> > -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
> > +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
> >      struct GPEXConfig cfg = {
> >          .mmio32 = memmap[VIRT_PCIE_MMIO],
> >          .pio    = memmap[VIRT_PCIE_PIO],
> > @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> >          .bus    = vms->bus,
> >      };
> >  
> > -    if (use_highmem) {
> > +    if (vms->highmem_ecam) {
> highmem_ecam is more restrictive than use_highmem:
> vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);
> 
> If I remember correctly there was a problem using highmem ECAM with 32b
> AAVMF FW.
>
> However 5125f9cd2532 ("hw/arm/virt: Add high MMIO PCI region, 512G in
> size") introduced high MMIO PCI region without this constraint.

Then I really don't understand the point of this highmem_ecam. We only
register the highmem version if highmem_ecam is set (see the use of
VIRT_ECAM_ID() to pick the right ECAM window).
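
To spell out what I mean, this is roughly how the window gets picked
today (sketch from memory, assuming the usual selector in
include/hw/arm/virt.h):

  #define VIRT_ECAM_ID(high) ((high) ? VIRT_HIGH_PCIE_ECAM : VIRT_PCIE_ECAM)

  /* both create_pcie() and acpi_dsdt_add_pci() only ever look at this window */
  int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
  hwaddr base_ecam = vms->memmap[ecam_id].base;
  hwaddr size_ecam = vms->memmap[ecam_id].size;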

So keying this on highmem makes it expose a device that may not be
there in the first place since, as you pointed out, highmem_ecam can
be false in cases where highmem is true.

> So to me we should keep vms->highmem here

I really must be missing how this is supposed to work.

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 3/5] hw/arm/virt: Honor highmem setting when computing the memory map
  2021-10-04 12:23     ` Eric Auger
  (?)
@ 2021-12-27 16:39       ` Marc Zyngier
  -1 siblings, 0 replies; 69+ messages in thread
From: Marc Zyngier @ 2021-12-27 16:39 UTC (permalink / raw)
  To: eric.auger
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

On Mon, 04 Oct 2021 13:23:41 +0100,
Eric Auger <eric.auger@redhat.com> wrote:
> 
> Hi Marc,
> 
> On 10/3/21 6:46 PM, Marc Zyngier wrote:
> > Even when the VM is configured with highmem=off, the highest_gpa
> > field includes devices that are above the 4GiB limit.
> > Similarly, nothing seems to check that the memory is within
> > the limit set by the highmem=off option.
> >
> > This leads to failures in virt_kvm_type() on systems that have
> > a crippled IPA range, as the reported IPA space is larger than
> > what it should be.
> >
> > Instead, honor the user-specified limit to only use the devices
> > at the lowest end of the spectrum, and fail if we have memory
> > crossing the 4GiB limit.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  hw/arm/virt.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index bcf58f677d..9d2abdbd5f 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -1628,6 +1628,11 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          exit(EXIT_FAILURE);
> >      }
> >  
> > +    if (!vms->highmem &&
> > +        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> > +        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> > +        exit(EXIT_FAILURE);
> > +    }
> >      /*
> >       * We compute the base of the high IO region depending on the
> >       * amount of initial and device memory. The device memory start/size
> > @@ -1657,7 +1662,9 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          vms->memmap[i].size = size;
> >          base += size;
> >      }
> > -    vms->highest_gpa = base - 1;
> > +    vms->highest_gpa = (vms->highmem ?
> > +                        base :
> > +                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
> I think I would have preferred to have
> 
> if (vms->highmem) {
>    for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
>         hwaddr size = extended_memmap[i].size;
> 
>         base = ROUND_UP(base, size);
>         vms->memmap[i].base = base;
>         vms->memmap[i].size = size;
>         base += size;
>     }
> }
> as it is useless to execute that code and create new memmap entries in
> case of !highmem.

I agree that it is a bit useless when we only have highmem. But we
really want to deal with arbitrary IPA spaces (see how this changes in
the follow-up patches), and we need to check that everything fits in
the IPA space (and fix things up if they don't).
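
Concretely, something along these lines (untested sketch, reusing the
pa_bits/memtop variables that patch 4 introduces):

  if (base <= BIT_ULL(pa_bits)) {
      vms->highest_gpa = base - 1;
  } else {
      /* clamp to the end of RAM; patch 5 disables whatever sits above it */
      vms->highest_gpa = memtop - 1;
  }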

> 
> But nevertheless, this looks correct

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 4/5] hw/arm/virt: Use the PA range to compute the memory map
  2021-10-04 10:11     ` Andrew Jones
  (?)
@ 2021-12-27 20:13       ` Marc Zyngier
  -1 siblings, 0 replies; 69+ messages in thread
From: Marc Zyngier @ 2021-12-27 20:13 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, qemu-devel, kernel-team, kvmarm

On Mon, 04 Oct 2021 11:11:10 +0100,
Andrew Jones <drjones@redhat.com> wrote:
> 
> On Sun, Oct 03, 2021 at 05:46:04PM +0100, Marc Zyngier wrote:
> > The highmem attribute is nothing but another way to express the
> > PA range of a VM. To support HW that has a smaller PA range than
> > what QEMU assumes, pass this PA range to the virt_set_memmap()
> > function, allowing it to correctly exclude highmem devices
> > if they are outside of the PA range.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  hw/arm/virt.c | 46 +++++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 35 insertions(+), 11 deletions(-)
> > 
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index 9d2abdbd5f..a572e0c9d9 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -1610,10 +1610,10 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
> >      return arm_cpu_mp_affinity(idx, clustersz);
> >  }
> >  
> > -static void virt_set_memmap(VirtMachineState *vms)
> > +static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
> >  {
> >      MachineState *ms = MACHINE(vms);
> > -    hwaddr base, device_memory_base, device_memory_size;
> > +    hwaddr base, device_memory_base, device_memory_size, memtop;
> >      int i;
> >  
> >      vms->memmap = extended_memmap;
> > @@ -1628,9 +1628,12 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          exit(EXIT_FAILURE);
> >      }
> >  
> > -    if (!vms->highmem &&
> > -        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> > -        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> > +    if (!vms->highmem)
> > +	    pa_bits = 32;
> > +
> > +    if (vms->memmap[VIRT_MEM].base + ms->maxram_size > BIT_ULL(pa_bits)) {
> > +	    error_report("Addressing limited to %d bits, but memory exceeds it by %llu bytes\n",
> > +			 pa_bits, vms->memmap[VIRT_MEM].base + ms->maxram_size - BIT_ULL(pa_bits));
> >          exit(EXIT_FAILURE);
> >      }
> >      /*
> > @@ -1645,7 +1648,7 @@ static void virt_set_memmap(VirtMachineState *vms)
> >      device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
> >  
> >      /* Base address of the high IO region */
> > -    base = device_memory_base + ROUND_UP(device_memory_size, GiB);
> > +    memtop = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
> >      if (base < device_memory_base) {
> >          error_report("maxmem/slots too huge");
> >          exit(EXIT_FAILURE);
> > @@ -1662,9 +1665,17 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          vms->memmap[i].size = size;
> >          base += size;
> >      }
> > -    vms->highest_gpa = (vms->highmem ?
> > -                        base :
> > -                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
> > +
> > +    /*
> > +     * If base fits within pa_bits, all good. If it doesn't, limit it
> > +     * to the end of RAM, which is guaranteed to fit within pa_bits.
> 
> We tested that
> 
>   vms->memmap[VIRT_MEM].base + ms->maxram_size
> 
> fits within pa_bits, but here we're setting highest_gpa to
> 
>   ROUND_UP(vms->memmap[VIRT_MEM].base + ms->ram_size, GiB) +
>   ROUND_UP(ms->maxram_size - ms->ram_size + ms->ram_slots * GiB, GiB)
> 
> which will be larger. Shouldn't we test memtop instead to make this
> guarantee?

Yes, well spotted.
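
Something like this then (untested, moving the check to the point where
memtop is computed):

  memtop = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
  if (memtop > BIT_ULL(pa_bits)) {
      error_report("Addressing limited to %d bits, but memory exceeds it by %llu bytes",
                   pa_bits, memtop - BIT_ULL(pa_bits));
      exit(EXIT_FAILURE);
  }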

> 
> 
> > +     */
> > +    if (base <= BIT_ULL(pa_bits)) {
> > +        vms->highest_gpa = base -1;
> > +    } else {
> > +        vms->highest_gpa = memtop - 1;
> > +    }
> > +
> >      if (device_memory_size > 0) {
> >          ms->device_memory = g_malloc0(sizeof(*ms->device_memory));
> >          ms->device_memory->base = device_memory_base;
> > @@ -1860,7 +1871,20 @@ static void machvirt_init(MachineState *machine)
> >       * to create a VM with the right number of IPA bits.
> >       */
> >      if (!vms->memmap) {
> > -        virt_set_memmap(vms);
> > +        ARMCPU *armcpu = ARM_CPU(first_cpu);
> 
> 
> I think it's too early to use first_cpu here (although, I'll admit I'm
> always confused as to what gets initialized when...) Assuming we need to
> realize the cpus first, then we don't do that until a bit further down
> in this function. I wonder if it's possible to delay this memmap setup
> until after cpu realization. I see the memmap getting used prior when
> calculating virt_max_cpus, but that looks like it needs to be updated
> anyway to take highmem into account as to whether or not we should
> consider the high gicv3 redist region in the calculation.

OK, this is nothing short of total hell. You can't create the memory
map later, as MTE and the secure world both get in the way (they
really want a valid memory map). And as you pointed out, using
first_cpu is not appropriate here (obviously, I didn't test this
nearly enough). I could split the creation of the CPUs in two
sequences with the memory map creation in between, but this quickly
becomes quite invasive.

My current approach is to keep the current flow, but to create a
temporary CPU, find whatever I need to know about it, and free
it. Yes, this is a bit overkill, but it solves the chicken and egg
issue simply enough.
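
In other words, something like this in machvirt_init() (sketch only,
helper names from memory):

  if (!vms->memmap) {
      /*
       * Create a throwaway CPU object just to find out what PA range
       * it gives us, and discard it before the real vCPUs are created
       * further down.
       */
      Object *cpuobj = object_new(machine->cpu_type);
      ARMCPU *armcpu = ARM_CPU(cpuobj);
      int pa_bits = arm_pamax(armcpu);

      object_unref(cpuobj);

      virt_set_memmap(vms, pa_bits);
  }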

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2021-12-27 15:53       ` Marc Zyngier
  (?)
@ 2022-01-04 15:31         ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2022-01-04 15:31 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Marc,

On 12/27/21 4:53 PM, Marc Zyngier wrote:
> Hi Eric,
>
> Picking this up again after a stupidly long time...
>
> On Mon, 04 Oct 2021 13:00:21 +0100,
> Eric Auger <eric.auger@redhat.com> wrote:
>> Hi Marc,
>>
>> On 10/3/21 6:46 PM, Marc Zyngier wrote:
>>> Currently, the highmem PCIe region is oddly keyed on the highmem
>>> attribute instead of highmem_ecam. Move the enablement of this PCIe
>>> region over to highmem_ecam.
>>>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> ---
>>>  hw/arm/virt-acpi-build.c | 10 ++++------
>>>  hw/arm/virt.c            |  4 ++--
>>>  2 files changed, 6 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
>>> index 037cc1fd82..d7bef0e627 100644
>>> --- a/hw/arm/virt-acpi-build.c
>>> +++ b/hw/arm/virt-acpi-build.c
>>> @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
>>>  }
>>>  
>>>  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>>> -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
>>> -                              VirtMachineState *vms)
>>> +                              uint32_t irq, VirtMachineState *vms)
>>>  {
>>> -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
>>> +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
>>>      struct GPEXConfig cfg = {
>>>          .mmio32 = memmap[VIRT_PCIE_MMIO],
>>>          .pio    = memmap[VIRT_PCIE_PIO],
>>> @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>>>          .bus    = vms->bus,
>>>      };
>>>  
>>> -    if (use_highmem) {
>>> +    if (vms->highmem_ecam) {
>> highmem_ecam is more restrictive than use_highmem:
>> vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);
>>
>> If I remember correctly there was a problem using highmem ECAM with 32b
>> AAVMF FW.
>>
>> However 5125f9cd2532 ("hw/arm/virt: Add high MMIO PCI region, 512G in
>> size") introduced high MMIO PCI region without this constraint.
> Then I really don't understand the point of this highmem_ecam. We only
> register the highmem version if highmem_ecam is set (see the use of
> VIRT_ECAM_ID() to pick the right ECAM window).

but aren't we talking about different regions? On one hand the [high]
MMIO region (512GB wide) and the [high] ECAM region (256MB large).
To me you can enable either independently. The high MMIO region is used by
some devices like ivshmem/video cards, while high ECAM was introduced to
extend the number of supported buses: 601d626d148a (hw/arm/virt: Add a
new 256MB ECAM region).

with the above change the high MMIO region won't be set with 32b
FW+kernel and LPAE whereas it is currently.

high ECAM was not supported by 32b FW, hence the highmem_ecam.

but maybe I'm missing your point?

Eric
>
> So keying this on highmem makes it expose a device that may not be
> there in the first place since, as you pointed out, highmem_ecam can
> be false in cases where highmem is true.
>
>> So to me we should keep vms->highmem here
> I really must be missing how this is supposed to work.
>
> 	M.
>


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2022-01-04 15:31         ` Eric Auger
  (?)
@ 2022-01-04 22:15           ` Marc Zyngier
  -1 siblings, 0 replies; 69+ messages in thread
From: Marc Zyngier @ 2022-01-04 22:15 UTC (permalink / raw)
  To: eric.auger
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Eric,

On Tue, 04 Jan 2022 15:31:33 +0000,
Eric Auger <eric.auger@redhat.com> wrote:
> 
> Hi Marc,
> 
> On 12/27/21 4:53 PM, Marc Zyngier wrote:
> > Hi Eric,
> >
> > Picking this up again after a stupidly long time...
> >
> > On Mon, 04 Oct 2021 13:00:21 +0100,
> > Eric Auger <eric.auger@redhat.com> wrote:
> >> Hi Marc,
> >>
> >> On 10/3/21 6:46 PM, Marc Zyngier wrote:
> >>> Currently, the highmem PCIe region is oddly keyed on the highmem
> >>> attribute instead of highmem_ecam. Move the enablement of this PCIe
> >>> region over to highmem_ecam.
> >>>
> >>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> >>> ---
> >>>  hw/arm/virt-acpi-build.c | 10 ++++------
> >>>  hw/arm/virt.c            |  4 ++--
> >>>  2 files changed, 6 insertions(+), 8 deletions(-)
> >>>
> >>> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> >>> index 037cc1fd82..d7bef0e627 100644
> >>> --- a/hw/arm/virt-acpi-build.c
> >>> +++ b/hw/arm/virt-acpi-build.c
> >>> @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
> >>>  }
> >>>  
> >>>  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> >>> -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
> >>> -                              VirtMachineState *vms)
> >>> +                              uint32_t irq, VirtMachineState *vms)
> >>>  {
> >>> -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
> >>> +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
> >>>      struct GPEXConfig cfg = {
> >>>          .mmio32 = memmap[VIRT_PCIE_MMIO],
> >>>          .pio    = memmap[VIRT_PCIE_PIO],
> >>> @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
> >>>          .bus    = vms->bus,
> >>>      };
> >>>  
> >>> -    if (use_highmem) {
> >>> +    if (vms->highmem_ecam) {
> >> highmem_ecam is more restrictive than use_highmem:
> >> vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);
> >>
> >> If I remember correctly there was a problem using highmem ECAM with 32b
> >> AAVMF FW.
> >>
> >> However 5125f9cd2532 ("hw/arm/virt: Add high MMIO PCI region, 512G in
> >> size") introduced high MMIO PCI region without this constraint.
> > Then I really don't understand the point of this highmem_ecam. We only
> > register the highmem version if highmem_ecam is set (see the use of
> > VIRT_ECAM_ID() to pick the right ECAM window).
> 
> but aren't we talking about different regions? On one hand the [high]
> MMIO region (512GB wide) and the [high] ECAM region (256MB large).
> To me you can enable either independently. The high MMIO region is used by
> some devices like ivshmem/video cards, while high ECAM was introduced to
> extend the number of supported buses: 601d626d148a (hw/arm/virt: Add a
> new 256MB ECAM region).
> 
> with the above change the high MMIO region won't be set with 32b
> FW+kernel and LPAE whereas it is currently.
> 
> high ECAM was not supported by 32b FW, hence the highmem_ecam.
> 
> but maybe I'm missing your point?

There are two issues.

First, I have been conflating the ECAM and MMIO ranges, and you only
made me realise that they were supposed to be independent.  I still
think the keying on highmem is wrong, but the main issue is that the
highmem* flags don't quite describe the shape of the platform.

All these booleans indicate is whether the features they describe (the
high MMIO range, the high ECAM range, and in one of my patches the
high RD range) are *allowed* to live above 4GB, but they do not express
whether they are actually usable (i.e. fit in the PA range).

Maybe we need to be more thorough in the way we describe the extended
region in the VirtMachineState structure:

- highmem: overall control for anything that *can* live above 4GB
- highmem_ecam: Has a PCIe ECAM region above 256GB
- highmem_mmio: Has a PCIe MMIO region above 256GB
- highmem_redist: Has 512 RDs above 256GB

Crucially, the last 3 items must fit in the PA range or be disabled.

We have highmem_ecam which is keyed on highmem, but not on the PA
range.  highmem_mmio doesn't exist at all (we use highmem instead),
and I'm only introducing highmem_redist.

For these 3 ranges, we should have something like

vms->highmem_xxx &= (vms->highmem &&
		     (vms->memmap[XXX].base + vms->memmap[XXX].size) < vms->highest_gpa);

and treat them as independent entities.  Unless someone shouts, I'm
going to go ahead and implement this logic.
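
Something along these lines, as a rough sketch only (highmem_mmio and
highmem_redist don't exist yet, the VIRT_HIGH_* indices are the existing
extended-map entries, and this would be called at the end of
virt_set_memmap()):

    static void virt_clamp_highmem(VirtMachineState *vms)
    {
        /*
         * A high region is only usable if highmem is on and its last
         * byte is not above highest_gpa (the last valid guest PA).
         */
        vms->highmem_ecam &= vms->highmem &&
            (vms->memmap[VIRT_HIGH_PCIE_ECAM].base +
             vms->memmap[VIRT_HIGH_PCIE_ECAM].size - 1 <= vms->highest_gpa);

        vms->highmem_mmio &= vms->highmem &&
            (vms->memmap[VIRT_HIGH_PCIE_MMIO].base +
             vms->memmap[VIRT_HIGH_PCIE_MMIO].size - 1 <= vms->highest_gpa);

        vms->highmem_redist &= vms->highmem &&
            (vms->memmap[VIRT_HIGH_GIC_REDIST2].base +
             vms->memmap[VIRT_HIGH_GIC_REDIST2].size - 1 <= vms->highest_gpa);
    }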

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2022-01-04 22:15           ` Marc Zyngier
  (?)
@ 2022-01-05  9:41             ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2022-01-05  9:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Marc,

On 1/4/22 11:15 PM, Marc Zyngier wrote:
> Hi Eric,
>
> On Tue, 04 Jan 2022 15:31:33 +0000,
> Eric Auger <eric.auger@redhat.com> wrote:
>> Hi Marc,
>>
>> On 12/27/21 4:53 PM, Marc Zyngier wrote:
>>> Hi Eric,
>>>
>>> Picking this up again after a stupidly long time...
>>>
>>> On Mon, 04 Oct 2021 13:00:21 +0100,
>>> Eric Auger <eric.auger@redhat.com> wrote:
>>>> Hi Marc,
>>>>
>>>> On 10/3/21 6:46 PM, Marc Zyngier wrote:
>>>>> Currently, the highmem PCIe region is oddly keyed on the highmem
>>>>> attribute instead of highmem_ecam. Move the enablement of this PCIe
>>>>> region over to highmem_ecam.
>>>>>
>>>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>>>> ---
>>>>>  hw/arm/virt-acpi-build.c | 10 ++++------
>>>>>  hw/arm/virt.c            |  4 ++--
>>>>>  2 files changed, 6 insertions(+), 8 deletions(-)
>>>>>
>>>>> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
>>>>> index 037cc1fd82..d7bef0e627 100644
>>>>> --- a/hw/arm/virt-acpi-build.c
>>>>> +++ b/hw/arm/virt-acpi-build.c
>>>>> @@ -157,10 +157,9 @@ static void acpi_dsdt_add_virtio(Aml *scope,
>>>>>  }
>>>>>  
>>>>>  static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>>>>> -                              uint32_t irq, bool use_highmem, bool highmem_ecam,
>>>>> -                              VirtMachineState *vms)
>>>>> +                              uint32_t irq, VirtMachineState *vms)
>>>>>  {
>>>>> -    int ecam_id = VIRT_ECAM_ID(highmem_ecam);
>>>>> +    int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
>>>>>      struct GPEXConfig cfg = {
>>>>>          .mmio32 = memmap[VIRT_PCIE_MMIO],
>>>>>          .pio    = memmap[VIRT_PCIE_PIO],
>>>>> @@ -169,7 +168,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
>>>>>          .bus    = vms->bus,
>>>>>      };
>>>>>  
>>>>> -    if (use_highmem) {
>>>>> +    if (vms->highmem_ecam) {
>>>> highmem_ecam is more restrictive than use_highmem:
>>>> vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);
>>>>
>>>> If I remember correctly there was a problem using highmem ECAM with 32b
>>>> AAVMF FW.
>>>>
>>>> However 5125f9cd2532 ("hw/arm/virt: Add high MMIO PCI region, 512G in
>>>> size") introduced high MMIO PCI region without this constraint.
>>> Then I really don't understand the point of this highmem_ecam. We only
>>> register the highmem version if highmem_ecam is set (see the use of
>>> VIRT_ECAM_ID() to pick the right ECAM window).
>> but aren't we talking about different regions? On one hand the [high]
>> MMIO region (512GB wide) and the [high] ECAM region (256MB large).
>> To me you can enable either independently. High MMIO region is used by
>> some devices like ivshmem/video cards while high ECAM was introduced to
>> extend the number of supported buses: 601d626d148a (hw/arm/virt: Add a
>> new 256MB ECAM region).
>>
>> with the above change the high MMIO region won't be set with 32b
>> FW+kernel and LPAE whereas it is currently.
>>
>> high ECAM was not supported by 32b FW, hence the highmem_ecam.
>>
>> but maybe I miss your point?
> There are two issues.
>
> First, I have been conflating the ECAM and MMIO ranges, and you only
> made me realise that they were supposed to be independent.  I still
> think the keying on highmem is wrong, but the main issue is that the
> highmem* flags don't quite describe the shape of the platform.
>
> All that these booleans indicate is whether the features they describe (the
> high MMIO range, the high ECAM range, and in one of my patches the
> high RD range) are *allowed* to live above 4GB, but they do not express
> whether those features are actually usable (i.e. fit in the PA range).
>
> Maybe we need to be more thorough in the way we describe the extended
> region in the VirtMachineState structure:
>
> - highmem: overall control for anything that *can* live above 4GB
> - highmem_ecam: Has a PCIe ECAM region above 256GB
> - highmem_mmio: Has a PCIe MMIO region above 256GB
> - highmem_redist: Has 512 RDs above 256GB
>
> Crucially, the last 3 items must fit in the PA range or be disabled.
>
> We have highmem_ecam which is keyed on highmem, but not on the PA
> range.  highmem_mmio doesn't exist at all (we use highmem instead),
"highmem_ecam is keyed on highmem but not on the PA range". True but it
is properly taken into account in highest_gpa computation so eventually
we make sure it does not overflow the IPA limit. Same for the high mmio
region which is keyed on highmem.
> and I'm only introducing highmem_redist.
>
> For these 3 ranges, we should have something like
>
> vms->highmem_xxx &= (vms->highmem &&
> 		     (vms->memmap[XXX].base + vms->memmap[XXX].size) < vms->highest_gpa);

Couldn't you simply introduce highmem_redist, which is truly missing? You
could set it in virt_set_memmap() in case you skip the extended_map overlay,
and use it in virt_gicv3_redist_region_count() as you did.
In addition to the check of the device memory top address against the 4GB
limit when !highmem, we should be fine then?
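
Something like this (rough sketch only, assuming a new vms->highmem_redist
flag set from virt_set_memmap()):

    static uint32_t virt_gicv3_redist_region_count(VirtMachineState *vms)
    {
        uint32_t redist0_capacity =
            vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE;

        /* Only advertise the second redistributor region when it exists */
        return (MACHINE(vms)->smp.cpus > redist0_capacity &&
                vms->highmem_redist) ? 2 : 1;
    }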

Eric
>
> and treat them as independent entities.  Unless someone shouts, I'm
> going to go ahead and implement this logic.
>
> Thanks,
>
> 	M.
>


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2022-01-05  9:41             ` Eric Auger
  (?)
@ 2022-01-06 19:34               ` Marc Zyngier
  -1 siblings, 0 replies; 69+ messages in thread
From: Marc Zyngier @ 2022-01-06 19:34 UTC (permalink / raw)
  To: eric.auger
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Eric,

On Wed, 05 Jan 2022 09:41:19 +0000,
Eric Auger <eric.auger@redhat.com> wrote:
> 
> Couldn't you simply introduce highmem_redist, which is truly missing? You
> could set it in virt_set_memmap() in case you skip the extended_map overlay,
> and use it in virt_gicv3_redist_region_count() as you did.
> In addition to the check of the device memory top address against the 4GB
> limit when !highmem, we should be fine then?

No, highmem really isn't nearly enough.

Imagine you have (like I do) a system with 36 bits of IPA space.
Create a VM with 8GB of RAM (which means the low-end of IPA space is
already 9GB). Obviously, highmem=true here. With the current code, we
will try to expose this PCI MMIO range, which falls way out of the IPA
space (you need at least 40 bits of IPA to even cover it with the
smallest configuration).
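
Back-of-the-envelope with the default extended map (figures rounded,
just to show the orders of magnitude involved):

    RAM:            1GB base + 8GB of RAM    -> tops out around 9GB
    high PCIe MMIO: ~512GB base + 512GB size -> tops out at 1TB = 2^40
    36bit host IPA:                             limit is   64GB = 2^36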

highmem really is a control that says 'things may live above 4GB'. It
doesn't say *how far* above 4GB it can be placed. Which is what I am
trying to address.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam
  2022-01-06 19:34               ` Marc Zyngier
  (?)
@ 2022-01-07 17:10                 ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2022-01-07 17:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: qemu-devel, Andrew Jones, Peter Maydell, kvmarm, kvm, kernel-team

Hi Marc,

On 1/6/22 8:34 PM, Marc Zyngier wrote:
> Hi Eric,
>
> On Wed, 05 Jan 2022 09:41:19 +0000,
> Eric Auger <eric.auger@redhat.com> wrote:
>> Couldn't you simply introduce highmem_redist, which is truly missing? You
>> could set it in virt_set_memmap() in case you skip the extended_map overlay,
>> and use it in virt_gicv3_redist_region_count() as you did.
>> In addition to the check of the device memory top address against the 4GB
>> limit when !highmem, we should be fine then?
> No, highmem really isn't nearly enough.
>
> Imagine you have (like I do) a system with 36 bits of IPA space.
> Create a VM with 8GB of RAM (which means the low-end of IPA space is
> already 9GB). Obviously, highmem=true here. With the current code, we
> will try to expose this PCI MMIO range, which falls way out of the IPA
> space (you need at least 40 bits of IPA to even cover it with the
> smallest configuration).
In that case the high MMIO region is accounted for in vms->highest_gpa:

    for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
        hwaddr size = extended_memmap[i].size;

        base = ROUND_UP(base, size);
        vms->memmap[i].base = base;
        vms->memmap[i].size = size;
        base += size;
    }
    vms->highest_gpa = base - 1;

and virt_kvm_type() should exit in that case at:

    if (requested_pa_size > max_vm_pa_size) {
        error_report("-m and ,maxmem option values "
                     "require an IPA range (%d bits) larger than "
                     "the one supported by the host (%d bits)",
                     requested_pa_size, max_vm_pa_size);
        exit(1);
    }

?
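
(And if I remember the code correctly, requested_pa_size in that check is
derived from the very same highest_gpa, roughly:

    requested_pa_size = 64 - clz64(vms->highest_gpa);

so a memory map that includes the high regions does translate into a
larger IPA size being requested from KVM.)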
>
> highmem really is a control that says 'things may live above 4GB'. It
> doesn't say *how far* above 4GB it can be placed. Which is what I am
> trying to address.

OK I will look at your v4 ;-)

Thanks

Eric
>
> Thanks,
>
> 	M.
>


^ permalink raw reply	[flat|nested] 69+ messages in thread

Thread overview: 69+ messages

2021-10-03 16:46 [PATCH v2 0/5] target/arm: Reduced-IPA space and highmem=off fixes Marc Zyngier
2021-10-03 16:46 ` [PATCH v2 1/5] hw/arm/virt: Key enablement of highmem PCIe on highmem_ecam Marc Zyngier
2021-10-04  9:41   ` Andrew Jones
2021-10-04 12:00   ` Eric Auger
2021-12-27 15:53     ` Marc Zyngier
2022-01-04 15:31       ` Eric Auger
2022-01-04 22:15         ` Marc Zyngier
2022-01-05  9:41           ` Eric Auger
2022-01-06 19:34             ` Marc Zyngier
2022-01-07 17:10               ` Eric Auger
2021-10-03 16:46 ` [PATCH v2 2/5] hw/arm/virt: Add a control for the the highmem redistributors Marc Zyngier
2021-10-04  9:44   ` Andrew Jones
2021-10-04 10:14     ` Andrew Jones
2021-10-03 16:46 ` [PATCH v2 3/5] hw/arm/virt: Honor highmem setting when computing the memory map Marc Zyngier
2021-10-04  9:44   ` Andrew Jones
2021-10-04 12:23   ` Eric Auger
2021-12-27 16:39     ` Marc Zyngier
2021-10-03 16:46 ` [PATCH v2 4/5] hw/arm/virt: Use the PA range to compute " Marc Zyngier
2021-10-04 10:11   ` Andrew Jones
2021-12-27 20:13     ` Marc Zyngier
2021-10-04 10:15   ` Andrew Jones
2021-10-03 16:46 ` [PATCH v2 5/5] hw/arm/virt: Disable highmem devices that don't fit in the PA range Marc Zyngier
2021-10-04 10:12   ` Andrew Jones