* [PATCH v11 0/2] arm/virt: CXL support via pxb_cxl
@ 2022-06-16 14:19 ` Jonathan Cameron via
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-16 14:19 UTC (permalink / raw)
  To: qemu-devel, alex.bennee, Peter Maydell, qemu-arm,
	Michael S . Tsirkin, Ben Widawsky
  Cc: Paolo Bonzini, linux-cxl, linuxarm, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, Mark Cave-Ayland,
	Adam Manzanares, Tong Zhang, Shameerali Kolothum Thodi

Previously, these were patches 40 and 41 of
[PATCH v10 00/45] CXl 2.0 emulation Support
https://lore.kernel.org/qemu-devel/20220429144110.25167-45-Jonathan.Cameron@huawei.com/#r

Now that the base CXL support, including x86/pc, is upstream (patches 1-39),
there are no dependencies between the next few CXL elements in my queue,
so they can be reviewed / merged in parallel.  Hence I'll send switch
support (43-45) separately and hopefully DOE / CDAT support in a few days.
I'm assuming this particular series should go through the arm tree if
the maintainers are happy?

Changes since v10:
 - CXL machine setup is now entirely from the supporting machines rather
   than via code in machine.c and vl.c.  Change made for x86 in:
   https://lore.kernel.org/qemu-devel/20220608145440.26106-1-Jonathan.Cameron@huawei.com/
 - Dropped Ben's sign-off from patch 1, which resulted from him carrying
   these patches of mine for a while. It isn't a useful bit of history
   to carry now that they are back with me.

This short series adds support for CXL host bridges and CXL fixed memory
windows on arm/virt.  Two types of memory region are needed:
1. Register space for CXL host bridges (static allowance for 16)
2. CXL fixed memory windows: ranges of host PA space which
   are statically mapped to an interleave across one or more CXL host
   bridges.

Both of these types of region are described via appropriate ACPI tables.
As the CEDT table is created with the same code as for x86, I don't think
there is much value in duplicating the existing CXL bios-tables test.

The second patch adds a single complex test. We test many more configurations
on x86, but it does not seem useful to duplicate them all on ARM; this single
test should act as a smoke test for any problems that occur.
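
For anyone wanting to poke at this by hand, a minimal sketch of an invocation
exercising both region types (borrowed from the qtest added in patch 2; type 3
devices behind the root port, CPU and memory options are all omitted here)
looks roughly like:

  qemu-system-aarch64 -machine virt,cxl=on \
      -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
      -device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 \
      -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 \
      -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G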

Run through CI at:
https://gitlab.com/jic23/qemu/-/pipelines/564934276
An intermittent (unrelated, I assume) failure in the msys64 aio-test was
resolved with a retry.

Thanks,

Jonathan

Jonathan Cameron (2):
  hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
    pxb-cxl
  qtest/cxl: Add aarch64 virt test for CXL

 hw/arm/virt-acpi-build.c | 34 ++++++++++++++++++++++++++++
 hw/arm/virt.c            | 44 ++++++++++++++++++++++++++++++++++++
 include/hw/arm/virt.h    |  3 +++
 tests/qtest/cxl-test.c   | 48 ++++++++++++++++++++++++++++++++--------
 tests/qtest/meson.build  |  1 +
 5 files changed, 121 insertions(+), 9 deletions(-)

-- 
2.32.0


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-16 14:19 ` Jonathan Cameron via
@ 2022-06-16 14:19   ` Jonathan Cameron via
  -1 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-16 14:19 UTC (permalink / raw)
  To: qemu-devel, alex.bennee, Peter Maydell, qemu-arm,
	Michael S . Tsirkin, Ben Widawsky
  Cc: Paolo Bonzini, linux-cxl, linuxarm, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, Mark Cave-Ayland,
	Adam Manzanares, Tong Zhang, Shameerali Kolothum Thodi

The code is based on the i386/pc enablement.
The memory layout places space for 16 host bridge register regions after
the GIC_REDIST2 region in the extended memmap.
The CFMWs are placed above the extended memmap.

Only create the CEDT table if cxl=on is set for the machine.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

---
 hw/arm/virt-acpi-build.c | 34 +++++++++++++++++++++++++++++++
 hw/arm/virt.c            | 44 ++++++++++++++++++++++++++++++++++++++++
 include/hw/arm/virt.h    |  3 +++
 3 files changed, 81 insertions(+)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 449fab0080..d47efcb297 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -39,9 +39,11 @@
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/utils.h"
 #include "hw/acpi/pci.h"
+#include "hw/acpi/cxl.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/generic_event_device.h"
 #include "hw/acpi/tpm.h"
+#include "hw/cxl/cxl.h"
 #include "hw/pci/pcie_host.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bus.h"
@@ -157,10 +159,29 @@ static void acpi_dsdt_add_virtio(Aml *scope,
     }
 }
 
+/* Uses local definition of AcpiBuildState so can't easily be common code */
+static void build_acpi0017(Aml *table)
+{
+    Aml *dev, *scope, *method;
+
+    scope =  aml_scope("_SB");
+    dev = aml_device("CXLM");
+    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0017")));
+
+    method = aml_method("_STA", 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int(0x01)));
+    aml_append(dev, method);
+
+    aml_append(scope, dev);
+    aml_append(table, scope);
+}
+
 static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
                               uint32_t irq, VirtMachineState *vms)
 {
     int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
+    bool cxl_present = false;
+    PCIBus *bus = vms->bus;
     struct GPEXConfig cfg = {
         .mmio32 = memmap[VIRT_PCIE_MMIO],
         .pio    = memmap[VIRT_PCIE_PIO],
@@ -174,6 +195,14 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     }
 
     acpi_dsdt_add_gpex(scope, &cfg);
+    QLIST_FOREACH(bus, &vms->bus->child, sibling) {
+        if (pci_bus_is_cxl(bus)) {
+            cxl_present = true;
+        }
+    }
+    if (cxl_present) {
+        build_acpi0017(scope);
+    }
 }
 
 static void acpi_dsdt_add_gpio(Aml *scope, const MemMapEntry *gpio_memmap,
@@ -992,6 +1021,11 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
         }
     }
 
+    if (vms->cxl_devices_state.is_enabled) {
+        cxl_build_cedt(table_offsets, tables_blob, tables->linker,
+                       vms->oem_id, vms->oem_table_id, &vms->cxl_devices_state);
+    }
+
     if (ms->nvdimms_state->is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
                           ms->nvdimms_state, ms->ram_slots, vms->oem_id,
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 097238faa7..a8a2d79f19 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -55,6 +55,7 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "hw/pci-host/gpex.h"
+#include "hw/pci-bridge/pci_expander_bridge.h"
 #include "hw/virtio/virtio-pci.h"
 #include "hw/core/sysbus-fdt.h"
 #include "hw/platform-bus.h"
@@ -78,6 +79,8 @@
 #include "hw/virtio/virtio-mem-pci.h"
 #include "hw/virtio/virtio-iommu.h"
 #include "hw/char/pl011.h"
+#include "hw/cxl/cxl.h"
+#include "hw/cxl/cxl_host.h"
 #include "qemu/guest-random.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
@@ -178,6 +181,7 @@ static const MemMapEntry base_memmap[] = {
 static MemMapEntry extended_memmap[] = {
     /* Additional 64 MB redist region (can contain up to 512 redistributors) */
     [VIRT_HIGH_GIC_REDIST2] =   { 0x0, 64 * MiB },
+    [VIRT_CXL_HOST] =           { 0x0, 64 * KiB * 16 }, /* 16 UID */
     [VIRT_HIGH_PCIE_ECAM] =     { 0x0, 256 * MiB },
     /* Second PCIe window */
     [VIRT_HIGH_PCIE_MMIO] =     { 0x0, 512 * GiB },
@@ -1525,6 +1529,16 @@ static void create_pcie(VirtMachineState *vms)
     }
 }
 
+static void create_cxl_host_reg_region(VirtMachineState *vms)
+{
+    MemoryRegion *sysmem = get_system_memory();
+    MemoryRegion *mr = &vms->cxl_devices_state.host_mr;
+
+    memory_region_init(mr, OBJECT(vms), "cxl_host_reg",
+                       vms->memmap[VIRT_CXL_HOST].size);
+    memory_region_add_subregion(sysmem, vms->memmap[VIRT_CXL_HOST].base, mr);
+}
+
 static void create_platform_bus(VirtMachineState *vms)
 {
     DeviceState *dev;
@@ -1638,6 +1652,12 @@ void virt_machine_done(Notifier *notifier, void *data)
     struct arm_boot_info *info = &vms->bootinfo;
     AddressSpace *as = arm_boot_address_space(cpu, info);
 
+    cxl_hook_up_pxb_registers(vms->bus, &vms->cxl_devices_state,
+                              &error_fatal);
+
+    if (vms->cxl_devices_state.is_enabled) {
+        cxl_fmws_link_targets(&vms->cxl_devices_state, &error_fatal);
+    }
     /*
      * If the user provided a dtb, we assume the dynamic sysbus nodes
      * already are integrated there. This corresponds to a use case where
@@ -1779,6 +1799,20 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
         memory_region_init(&ms->device_memory->mr, OBJECT(vms),
                            "device-memory", device_memory_size);
     }
+
+    if (vms->cxl_devices_state.fixed_windows) {
+        GList *it;
+
+        base = ROUND_UP(base, 256 * MiB);
+        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            fw->base = base;
+            memory_region_init_io(&fw->mr, OBJECT(vms), &cfmws_ops, fw,
+                                  "cxl-fixed-memory-region", fw->size);
+            base += fw->size;
+        }
+    }
 }
 
 /*
@@ -2215,6 +2249,14 @@ static void machvirt_init(MachineState *machine)
         memory_region_add_subregion(sysmem, machine->device_memory->base,
                                     &machine->device_memory->mr);
     }
+    if (vms->cxl_devices_state.fixed_windows) {
+        GList *it;
+        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            memory_region_add_subregion(sysmem, fw->base, &fw->mr);
+        }
+    }
 
     virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
 
@@ -2241,6 +2283,7 @@ static void machvirt_init(MachineState *machine)
     create_rtc(vms);
 
     create_pcie(vms);
+    create_cxl_host_reg_region(vms);
 
     if (has_ged && aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
         vms->acpi_dev = create_acpi_ged(vms);
@@ -3070,6 +3113,7 @@ static void virt_instance_init(Object *obj)
 
     vms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
     vms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
+    cxl_machine_init(obj, &vms->cxl_devices_state);
 }
 
 static const TypeInfo virt_machine_info = {
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index 15feabac63..403c9107e5 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -35,6 +35,7 @@
 #include "hw/boards.h"
 #include "hw/arm/boot.h"
 #include "hw/block/flash.h"
+#include "hw/cxl/cxl.h"
 #include "sysemu/kvm.h"
 #include "hw/intc/arm_gicv3_common.h"
 #include "qom/object.h"
@@ -92,6 +93,7 @@ enum {
 /* indices of IO regions located after the RAM */
 enum {
     VIRT_HIGH_GIC_REDIST2 =  VIRT_LOWMEMMAP_LAST,
+    VIRT_CXL_HOST,
     VIRT_HIGH_PCIE_ECAM,
     VIRT_HIGH_PCIE_MMIO,
 };
@@ -176,6 +178,7 @@ struct VirtMachineState {
     PCIBus *bus;
     char *oem_id;
     char *oem_table_id;
+    CXLState cxl_devices_state;
 };
 
 #define VIRT_ECAM_ID(high) (high ? VIRT_HIGH_PCIE_ECAM : VIRT_PCIE_ECAM)
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v11 2/2] qtest/cxl: Add aarch64 virt test for CXL
  2022-06-16 14:19 ` Jonathan Cameron via
@ 2022-06-16 14:19   ` Jonathan Cameron via
  -1 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-16 14:19 UTC (permalink / raw)
  To: qemu-devel, alex.bennee, Peter Maydell, qemu-arm,
	Michael S . Tsirkin, Ben Widawsky
  Cc: Paolo Bonzini, linux-cxl, linuxarm, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, Mark Cave-Ayland,
	Adam Manzanares, Tong Zhang, Shameerali Kolothum Thodi

Add a single complex test case for the aarch64 virt machine.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/qtest/cxl-test.c  | 48 +++++++++++++++++++++++++++++++++--------
 tests/qtest/meson.build |  1 +
 2 files changed, 40 insertions(+), 9 deletions(-)

diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
index 2133e973f4..1015d0e7c2 100644
--- a/tests/qtest/cxl-test.c
+++ b/tests/qtest/cxl-test.c
@@ -17,6 +17,11 @@
                       "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 " \
                       "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
 
+#define QEMU_VIRT_2PXB_CMD "-machine virt,cxl=on "                      \
+                      "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
+                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "  \
+                      "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
+
 #define QEMU_RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 "
 
 /* Dual ports on first pxb */
@@ -134,18 +139,43 @@ static void cxl_2pxb_4rp_4t3d(void)
     qtest_end();
 }
 
+static void cxl_virt_2pxb_4rp_4t3d(void)
+{
+    g_autoptr(GString) cmdline = g_string_new(NULL);
+    char template[] = "/tmp/cxl-test-XXXXXX";
+    const char *tmpfs;
+
+    tmpfs = mkdtemp(template);
+
+    g_string_printf(cmdline, QEMU_VIRT_2PXB_CMD QEMU_4RP QEMU_4T3D,
+                    tmpfs, tmpfs, tmpfs, tmpfs, tmpfs, tmpfs,
+                    tmpfs, tmpfs);
+
+    qtest_start(cmdline->str);
+    qtest_end();
+}
+
 int main(int argc, char **argv)
 {
+    const char *arch = qtest_get_arch();
+
     g_test_init(&argc, &argv, NULL);
 
-    qtest_add_func("/pci/cxl/basic_hostbridge", cxl_basic_hb);
-    qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
-    qtest_add_func("/pci/cxl/pxb_with_window", cxl_pxb_with_window);
-    qtest_add_func("/pci/cxl/pxb_x2_with_window", cxl_2pxb_with_window);
-    qtest_add_func("/pci/cxl/rp", cxl_root_port);
-    qtest_add_func("/pci/cxl/rp_x2", cxl_2root_port);
-    qtest_add_func("/pci/cxl/type3_device", cxl_t3d);
-    qtest_add_func("/pci/cxl/rp_x2_type3_x2", cxl_1pxb_2rp_2t3d);
-    qtest_add_func("/pci/cxl/pxb_x2_root_port_x4_type3_x4", cxl_2pxb_4rp_4t3d);
+    if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
+        qtest_add_func("/pci/cxl/basic_hostbridge", cxl_basic_hb);
+        qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
+        qtest_add_func("/pci/cxl/pxb_with_window", cxl_pxb_with_window);
+        qtest_add_func("/pci/cxl/pxb_x2_with_window", cxl_2pxb_with_window);
+        qtest_add_func("/pci/cxl/rp", cxl_root_port);
+        qtest_add_func("/pci/cxl/rp_x2", cxl_2root_port);
+        qtest_add_func("/pci/cxl/type3_device", cxl_t3d);
+        qtest_add_func("/pci/cxl/rp_x2_type3_x2", cxl_1pxb_2rp_2t3d);
+        qtest_add_func("/pci/cxl/pxb_x2_root_port_x4_type3_x4",
+                       cxl_2pxb_4rp_4t3d);
+    } else if (strcmp(arch, "aarch64") == 0) {
+        qtest_add_func("/pci/cxl/virt/pxb_x2_root_port_x4_type3_x4",
+                       cxl_virt_2pxb_4rp_4t3d);
+    }
+
     return g_test_run();
 }
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 31287a9173..0fa93da13a 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -215,6 +215,7 @@ qtests_aarch64 = \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) +        \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) +  \
   (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
+  qtests_cxl +                                                                                  \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 0/2] arm/virt: CXL support via pxb_cxl
  2022-06-16 14:19 ` Jonathan Cameron via
@ 2022-06-24  9:07   ` Jonathan Cameron via
  -1 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-24  9:07 UTC (permalink / raw)
  To: Jonathan Cameron via
  Cc: Jonathan Cameron, alex.bennee, Peter Maydell, qemu-arm,
	Michael S . Tsirkin, Ben Widawsky, Paolo Bonzini, linux-cxl,
	linuxarm, Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Thu, 16 Jun 2022 15:19:48 +0100
Jonathan Cameron via <qemu-devel@nongnu.org> wrote:

> Previously patches 40 and 41 of
> [PATCH v10 00/45] CXl 2.0 emulation Support
> https://lore.kernel.org/qemu-devel/20220429144110.25167-45-Jonathan.Cameron@huawei.com/#r
> 
> Now the base CXL support including for x86/pc is upstream (patches 1-39)
> there are no dependencies between the next few CXL elements in my queue
> so they can be reviewed / merged in parallel.  Hence I'll be sending switch
> support (43-45) separately and hopefully DOE / CDAT support in a few days.
> I'm assuming this particular series should go through the arm tree if
> the maintainers are happy?

Hi All,

If Peter or anyone else has time to look at this, with a view to getting
ARM support on par with x86, that would be great.  I 'think' this should
be uncontroversial, but I'm far from an expert!  I'm particularly keen on
getting this upstream as most of my testing is on ARM/virt.

Thanks,

Jonathan

> 
> Changes since v10:
>  - CXL machine setup is now entirely from the supporting machines rather
>    than via code in machine.c and vl.c.  Change made for x86 in:
>    https://lore.kernel.org/qemu-devel/20220608145440.26106-1-Jonathan.Cameron@huawei.com/
>  - Dropped Ben's sign off from patch 1 which resulted from him carrying
>    these patches of mine for a while. It isn't a useful bit of history
>    to carry now they are back to me.
> 
> This short series adds support for CXL host bridges and CXL fixed memory
> windows on arm/virt.  Two types of memory region are needed:
> 1. Register space for CXL host bridges (static allowance for 16)
> 2. CXL fixed memory windows: Ranges of host PA space which
>    are statically mapped to an interleave across 1 or more CXL host
>    bridges.
> 
> Both of these types of region are described via appropriate ACPI tables.
> As the CEDT table is created with the same code as for x86 I don't think
> there is much value in duplicating the existing CXL bios-tables test.
> 
> The second patch adds a single complex test. We test a lot more configurations
> on x86 but it does not seem useful to duplicate them all on ARM and this single
> test should act as a smoke test for any problems that occur.
> 
> Run through CI at:
> https://gitlab.com/jic23/qemu/-/pipelines/564934276
> Intermittent (unrelated I assume) failure in msys64 aio-test resolved
> with a retry.
> 
> Thanks,
> 
> Jonathan
> 
> Jonathan Cameron (2):
>   hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
>     pxb-cxl
>   qtest/cxl: Add aarch64 virt test for CXL
> 
>  hw/arm/virt-acpi-build.c | 34 ++++++++++++++++++++++++++++
>  hw/arm/virt.c            | 44 ++++++++++++++++++++++++++++++++++++
>  include/hw/arm/virt.h    |  3 +++
>  tests/qtest/cxl-test.c   | 48 ++++++++++++++++++++++++++++++++--------
>  tests/qtest/meson.build  |  1 +
>  5 files changed, 121 insertions(+), 9 deletions(-)
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 2/2] qtest/cxl: Add aarch64 virt test for CXL
  2022-06-16 14:19   ` Jonathan Cameron via
  (?)
@ 2022-06-24 10:41   ` Peter Maydell
  -1 siblings, 0 replies; 23+ messages in thread
From: Peter Maydell @ 2022-06-24 10:41 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Thu, 16 Jun 2022 at 15:20, Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> Add a single complex case for aarch64 virt machine.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
>  tests/qtest/cxl-test.c  | 48 +++++++++++++++++++++++++++++++++--------
>  tests/qtest/meson.build |  1 +
>  2 files changed, 40 insertions(+), 9 deletions(-)
>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-16 14:19   ` Jonathan Cameron via
  (?)
@ 2022-06-24 10:48   ` Peter Maydell
  2022-06-24 12:39       ` Jonathan Cameron via
  -1 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2022-06-24 10:48 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Thu, 16 Jun 2022 at 15:20, Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> Code based on i386/pc enablement.
> The memory layout places space for 16 host bridge register regions after
> the GIC_REDIST2 in the extended memmap.
> The CFMWs are placed above the extended memmap.
>
> Only create the CEDT table if cxl=on set for the machine.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

This seems to be missing code to advertise the new devices in the
device tree.

Somebody else had better review the ACPI changes :-)

>  #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
> @@ -178,6 +181,7 @@ static const MemMapEntry base_memmap[] = {
>  static MemMapEntry extended_memmap[] = {
>      /* Additional 64 MB redist region (can contain up to 512 redistributors) */
>      [VIRT_HIGH_GIC_REDIST2] =   { 0x0, 64 * MiB },
> +    [VIRT_CXL_HOST] =           { 0x0, 64 * KiB * 16 }, /* 16 UID */

Does this reshuffle the memory layout so that all the stuff after it
moves up? If so, that's an incompatible change and would need
versioning somehow.

More generally, should this new addition to the machine be
versioned? What did we do for the pc machine types?

>      [VIRT_HIGH_PCIE_ECAM] =     { 0x0, 256 * MiB },
>      /* Second PCIe window */
>      [VIRT_HIGH_PCIE_MMIO] =     { 0x0, 512 * GiB },
> @@ -1525,6 +1529,16 @@ static void create_pcie(VirtMachineState *vms)
>      }
>  }
>
> +static void create_cxl_host_reg_region(VirtMachineState *vms)
> +{
> +    MemoryRegion *sysmem = get_system_memory();
> +    MemoryRegion *mr = &vms->cxl_devices_state.host_mr;
> +
> +    memory_region_init(mr, OBJECT(vms), "cxl_host_reg",
> +                       vms->memmap[VIRT_CXL_HOST].size);
> +    memory_region_add_subregion(sysmem, vms->memmap[VIRT_CXL_HOST].base, mr);
> +}

> @@ -1779,6 +1799,20 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
>          memory_region_init(&ms->device_memory->mr, OBJECT(vms),
>                             "device-memory", device_memory_size);
>      }
> +
> +    if (vms->cxl_devices_state.fixed_windows) {
> +        GList *it;
> +
> +        base = ROUND_UP(base, 256 * MiB);
> +        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
> +            CXLFixedWindow *fw = it->data;
> +
> +            fw->base = base;
> +            memory_region_init_io(&fw->mr, OBJECT(vms), &cfmws_ops, fw,
> +                                  "cxl-fixed-memory-region", fw->size);

Why is board model code having to init memory regions for this
device? Shouldn't the device be creating the memory regions itself
and then exposing them for the board code to map them?

> +            base += fw->size;
> +        }
> +    }
>  }
>
>  /*
> @@ -2215,6 +2249,14 @@ static void machvirt_init(MachineState *machine)
>          memory_region_add_subregion(sysmem, machine->device_memory->base,
>                                      &machine->device_memory->mr);
>      }
> +    if (vms->cxl_devices_state.fixed_windows) {
> +        GList *it;
> +        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
> +            CXLFixedWindow *fw = it->data;
> +
> +            memory_region_add_subregion(sysmem, fw->base, &fw->mr);
> +        }
> +    }
>
>      virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
>
> @@ -2241,6 +2283,7 @@ static void machvirt_init(MachineState *machine)
>      create_rtc(vms);
>
>      create_pcie(vms);
> +    create_cxl_host_reg_region(vms);
>
>      if (has_ged && aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
>          vms->acpi_dev = create_acpi_ged(vms);
> @@ -3070,6 +3113,7 @@ static void virt_instance_init(Object *obj)
>
>      vms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
>      vms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
> +    cxl_machine_init(obj, &vms->cxl_devices_state);
>  }
>
>  static const TypeInfo virt_machine_info = {
> diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
> index 15feabac63..403c9107e5 100644
> --- a/include/hw/arm/virt.h
> +++ b/include/hw/arm/virt.h
> @@ -35,6 +35,7 @@
>  #include "hw/boards.h"
>  #include "hw/arm/boot.h"
>  #include "hw/block/flash.h"
> +#include "hw/cxl/cxl.h"
>  #include "sysemu/kvm.h"
>  #include "hw/intc/arm_gicv3_common.h"
>  #include "qom/object.h"
> @@ -92,6 +93,7 @@ enum {
>  /* indices of IO regions located after the RAM */
>  enum {
>      VIRT_HIGH_GIC_REDIST2 =  VIRT_LOWMEMMAP_LAST,
> +    VIRT_CXL_HOST,
>      VIRT_HIGH_PCIE_ECAM,
>      VIRT_HIGH_PCIE_MMIO,
>  };
> @@ -176,6 +178,7 @@ struct VirtMachineState {
>      PCIBus *bus;
>      char *oem_id;
>      char *oem_table_id;
> +    CXLState cxl_devices_state;
>  };

I just looked at the CXLState struct. Why isn't this a device object?

thanks
-- PMM

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 10:48   ` Peter Maydell
@ 2022-06-24 12:39       ` Jonathan Cameron via
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-24 12:39 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Fri, 24 Jun 2022 11:48:47 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On Thu, 16 Jun 2022 at 15:20, Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > Code based on i386/pc enablement.
> > The memory layout places space for 16 host bridge register regions after
> > the GIC_REDIST2 in the extended memmap.
> > The CFMWs are placed above the extended memmap.
> >
> > Only create the CEDT table if cxl=on set for the machine.
> >
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>  

Hi Peter,

Thanks for the review,

> 
> This seems to be missing code to advertise the new devices in the
> device tree.

Intentionally. I am not aware of any current interest
in defining DT support for CXL or supporting it in operating systems.
It would be easy enough to add if anyone does the leg work to figure out the
bindings, but that needs to come from someone who cares and
would need to be driven by OS support and a use case. The ACPI
description here is defined as part of the main CXL spec because the
target class of machines simply doesn't generally use DT.

> 
> Somebody else had better review the ACPI changes :-)
> 
> >  #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
> > @@ -178,6 +181,7 @@ static const MemMapEntry base_memmap[] = {
> >  static MemMapEntry extended_memmap[] = {
> >      /* Additional 64 MB redist region (can contain up to 512 redistributors) */
> >      [VIRT_HIGH_GIC_REDIST2] =   { 0x0, 64 * MiB },
> > +    [VIRT_CXL_HOST] =           { 0x0, 64 * KiB * 16 }, /* 16 UID */  
> 
> Does this reshuffle the memory layout so that all the stuff after it
> moves up ? If so, that's an incompatible change and would need
> versioning somehow.

I slotted it into a hole in the existing memory map so, unless I
have a bug, it shouldn't move any pre-existing memory windows.
The comment just above extended_memmap states that the floating area
starts at 256GiB and that the entries after it are naturally aligned.

Hence the existing map will be:
  256GiB           start of redist2
  256GiB + 64MiB   start of gap - we slot in a 16 * 64KiB region at the start of this gap
  256GiB + 256MiB  start of PCIE_ECAM
  256GiB + 512MiB  start of gap 2
  512GiB           start of PCIE_MMIO

> 
> More generally, should this new addition to the machine be
> versioned? What did we do for the pc machine types?

So far it is not versioned and no concern was raised. I didn't think it was an
issue because there should be no effect on older machine types, where cxl=on
can't be set. Given the user has to explicitly request the extra hardware
for anything to change, I don't see a need for versioning (yet).

Not sure it's relevant, but note that the emulation here is not suitable for
anything other than testing code against (it's very slow and provides
nothing you'd actually want in a VM), so migrating a VM with this support
seems unlikely to ever happen.

> 
> >      [VIRT_HIGH_PCIE_ECAM] =     { 0x0, 256 * MiB },
> >      /* Second PCIe window */
> >      [VIRT_HIGH_PCIE_MMIO] =     { 0x0, 512 * GiB },
> > @@ -1525,6 +1529,16 @@ static void create_pcie(VirtMachineState *vms)
> >      }
> >  }
> >
> > +static void create_cxl_host_reg_region(VirtMachineState *vms)
> > +{
> > +    MemoryRegion *sysmem = get_system_memory();
> > +    MemoryRegion *mr = &vms->cxl_devices_state.host_mr;
> > +
> > +    memory_region_init(mr, OBJECT(vms), "cxl_host_reg",
> > +                       vms->memmap[VIRT_CXL_HOST].size);
> > +    memory_region_add_subregion(sysmem, vms->memmap[VIRT_CXL_HOST].base, mr);
> > +}  
> 
> > @@ -1779,6 +1799,20 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
> >          memory_region_init(&ms->device_memory->mr, OBJECT(vms),
> >                             "device-memory", device_memory_size);
> >      }
> > +
> > +    if (vms->cxl_devices_state.fixed_windows) {
> > +        GList *it;
> > +
> > +        base = ROUND_UP(base, 256 * MiB);
> > +        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
> > +            CXLFixedWindow *fw = it->data;
> > +
> > +            fw->base = base;
> > +            memory_region_init_io(&fw->mr, OBJECT(vms), &cfmws_ops, fw,
> > +                                  "cxl-fixed-memory-region", fw->size);  
> 
> Why is board model code having to init memory regions for this
> device? Shouldn't the device be creating the memory regions itself
> and then exposing them for the board code to map them ?

From a device point of view CFMWs are weird. They are a region of host PA
space that has fixed routing to devices (CXL host bridges). On a real system
they are a characteristic of the interconnect routing and, if programmable, that
is done well before any standard software runs (so it might as well be hard coded).
Making them a device would be a little bit like saying the base_memmap should be
a device - all they do is say that this address here should route to this
particular device. The flexibility here is to allow testing of different setups
without having to know which actual PA range is safe to use.

They are very like the device memory or secure memory initialization.
The ops are there to do the interleave decoding. Interleave decoding
can't sensibly use a tree of memory windows because there would need to
be an infeasible number of them, hence the memops work out the routing at
access time. If we ever optimize/specialize the non-interleaved, directly
connected CXL device case, or implement more general memory interleaving
support, it will just be a memory_region_init_ram() call like any other RAM.
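
To make that concrete, here is a purely illustrative sketch of the kind of
decode the cfmws_ops callbacks do at access time. This is simplified, the
names are made up here, and the real QEMU code also has to follow the HDM
decoder programming in the selected host bridge:

#include <stdint.h>

/*
 * Pick the target host bridge for an access at 'offset' within a fixed
 * memory window: consecutive granules rotate round-robin across the
 * window's interleave targets.
 */
static unsigned int cfmw_pick_target(uint64_t offset,
                                     uint64_t granularity_bytes,
                                     unsigned int num_targets)
{
    return (unsigned int)((offset / granularity_bytes) % num_targets);
}

e.g. with a 2-way interleave at 256-byte granularity, offset 0 hits target 0,
offset 256 hits target 1, offset 512 hits target 0 again, and so on.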

This went through various approaches, but in the end the sequence of what we
need to set up for CXL fixed memory windows made representing them as a device
hard, so we didn't. Perhaps it could be done, but the device part would
be very slim and we'd have a bunch of code having to search for the devices
to stitch together all the information they need once it's available as
the rest of the system is built (e.g. after the host bridges are created).

> 
> > +            base += fw->size;
> > +        }
> > +    }
> >  }
> >
> >  /*
> > @@ -2215,6 +2249,14 @@ static void machvirt_init(MachineState *machine)
> >          memory_region_add_subregion(sysmem, machine->device_memory->base,
> >                                      &machine->device_memory->mr);
> >      }
> > +    if (vms->cxl_devices_state.fixed_windows) {
> > +        GList *it;
> > +        for (it = vms->cxl_devices_state.fixed_windows; it; it = it->next) {
> > +            CXLFixedWindow *fw = it->data;
> > +
> > +            memory_region_add_subregion(sysmem, fw->base, &fw->mr);
> > +        }
> > +    }
> >
> >      virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
> >
> > @@ -2241,6 +2283,7 @@ static void machvirt_init(MachineState *machine)
> >      create_rtc(vms);
> >
> >      create_pcie(vms);
> > +    create_cxl_host_reg_region(vms);
> >
> >      if (has_ged && aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
> >          vms->acpi_dev = create_acpi_ged(vms);
> > @@ -3070,6 +3113,7 @@ static void virt_instance_init(Object *obj)
> >
> >      vms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
> >      vms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
> > +    cxl_machine_init(obj, &vms->cxl_devices_state);
> >  }
> >
> >  static const TypeInfo virt_machine_info = {
> > diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
> > index 15feabac63..403c9107e5 100644
> > --- a/include/hw/arm/virt.h
> > +++ b/include/hw/arm/virt.h
> > @@ -35,6 +35,7 @@
> >  #include "hw/boards.h"
> >  #include "hw/arm/boot.h"
> >  #include "hw/block/flash.h"
> > +#include "hw/cxl/cxl.h"
> >  #include "sysemu/kvm.h"
> >  #include "hw/intc/arm_gicv3_common.h"
> >  #include "qom/object.h"
> > @@ -92,6 +93,7 @@ enum {
> >  /* indices of IO regions located after the RAM */
> >  enum {
> >      VIRT_HIGH_GIC_REDIST2 =  VIRT_LOWMEMMAP_LAST,
> > +    VIRT_CXL_HOST,
> >      VIRT_HIGH_PCIE_ECAM,
> >      VIRT_HIGH_PCIE_MMIO,
> >  };
> > @@ -176,6 +178,7 @@ struct VirtMachineState {
> >      PCIBus *bus;
> >      char *oem_id;
> >      char *oem_table_id;
> > +    CXLState cxl_devices_state;
> >  };  
> 
> I just looked at the CXLState struct. Why isn't this a device object ?

It can't be a pluggable device (e.g. added by -device ...)
because we need to know about it early enough to set up the underlying
PA regions (I tried that a long time back, and couldn't find a way to
make it work).

Possibly we could have machine parameters, as today, to specify
everything necessary to create a set of fixed-memory-window-associated
devices. I'm not sure what the benefit would be, however. When I originally
looked at this it seemed much closer to NumaState, NVDIMMState etc. and
that's what it was modeled on (those are in MachineState, but Paolo asked
that we move CXLState down into the machine-specific state structures because it
wasn't relevant to the majority of boards).

I guess the simple answer to this is that, as it isn't a 'device' in the sense
of what we would normally expect to be called a device on a physical system,
it never occurred to me to consider making it one.  If there are
benefits to doing so (and we can avoid breaking x86 support), then sure,
I can look at doing so.

struct CXLState {
	/*
	 * Used to gate creation of ACPI tables etc. You could use presence
	 * or absence of a device but a boolean seems much simpler and similar
	 * to all the bools in VirtMachineState.
	 */
	bool is_enabled; 
	
	/*
	 * At the time of PA init we don't know what devices are present,
	 * so we need a trivial allocator that can be used later to
	 * poke the addresses at the devices that have been created
	 * via -device.
	 * We could make it a device, but that seems overkill and would complicate
	 * the ACPI table building by making us do multiple levels of
	 * address decoding to get to the actual PA of unrelated devices.
	 * This could be avoided if we hard coded the PXBs in the machine
	 * file, but that's a non-starter because of how they work today.
	 */
	MemoryRegion host_mr;
   	MemoryRegion next_mr_index;

	/* State for the parameter parser of machine parameters.
	 * CXLState was in MachineState where there are lots of
	 * similar items (NumaState etc) but Paolo asked we move it into
	 * the state of individual machines that we have added
	 * CXL support to.
	 * The split of PA map building and setup of devices means
	 * we need to read this multiple times.
	 */
	GList *fixed_windows;
	CXLFixedMemoryWindowOptionsList *cfmw_list;
};

I just noticed we have a left-over cfmw_list in MachineState that
should have been removed in the rework set that moved this stuff into the
individual board states.  I'll send a patch to clear that stray
pointer out separately.

Thanks,

Jonathan
> 
> thanks
> -- PMM


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 12:39       ` Jonathan Cameron via
  (?)
@ 2022-06-24 12:56       ` Peter Maydell
  2022-06-24 14:08           ` Jonathan Cameron via
  -1 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2022-06-24 12:56 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Fri, 24 Jun 2022 at 13:39, Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Fri, 24 Jun 2022 11:48:47 +0100
> Peter Maydell <peter.maydell@linaro.org> wrote:
> >
> > This seems to be missing code to advertise the new devices in the
> > device tree.
>
> Intentionally. I am not aware of any current interest
> in defining DT support CXL or supporting it in operating systems.
> Easy enough to add if anyone does the leg work to figure out the
> bindings, but that needs to come from someone who cares and
> would need to be driven by OS support and a usecase. The ACPI
> stuff here is defined as part of the main CXL spec because the
> target class of machines simply doesn't generally use DT.

I don't really want new devices in the virt board that aren't
usable in the common use case of "just pass a kernel with -kernel"...

-- PMM

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 12:56       ` Peter Maydell
@ 2022-06-24 14:08           ` Jonathan Cameron via
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-24 14:08 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi, Dan Williams

On Fri, 24 Jun 2022 13:56:32 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On Fri, 24 Jun 2022 at 13:39, Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 24 Jun 2022 11:48:47 +0100
> > Peter Maydell <peter.maydell@linaro.org> wrote:  
> > >
> > > This seems to be missing code to advertise the new devices in the
> > > device tree.  
> >
> > Intentionally. I am not aware of any current interest
> > in defining DT support CXL or supporting it in operating systems.
> > Easy enough to add if anyone does the leg work to figure out the
> > bindings, but that needs to come from someone who cares and
> > would need to be driven by OS support and a usecase. The ACPI
> > stuff here is defined as part of the main CXL spec because the
> > target class of machines simply doesn't generally use DT.  
> 
> I don't really want new devices in the virt board that aren't
> usable in the common use case of "just pass a kernel with -kernel"...
> 
> -- PMM

Ok.  In that case, what do you suggest?  

Options I can think of:

1) I can come up with plausible DT bindings, but I'm not sure how
that will be received by the kernel community. It will add a bunch of
infrastructure to maintain that may be seen as useless, at least in
the short to medium term, and hence won't be in anyone's test matrices etc.

Dan, how open would you be to adding DT support? We'd have to ignore
some of the firmware query stuff like QTGs as there isn't an equivalent
in DT - or we'd have to go as far as defining actual devices with
mailboxes to query that info.

2) Add it to something like the SBSA machine, but that brings a large
burden in firmware etc. and the need to communicate everything EDK2
needs to build the ACPI tables in a flexible fashion via DT, plus a
mass of EDK2 development.  Whilst the SBSA model is great for ARM
specific stuff, requiring the large additional complexity of
actually using it to test arch-independent software is probably
going to just mean it bit rots.

3) Fork virt (obviously sharing as much as possible), potentially I
guess that could be pretty light weight by declaring a new
TypeInfo that is very nearly identical to virt with just the few extra
calls inserted. 
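
To make option 3 concrete, the skeleton would be something like the
following (names are purely illustrative and untested, and the extra
CXL init calls are omitted), plus the usual type_register_static()
in a type_init() block:

static void virt_cxl_machine_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);

    /* Inherit everything from virt; only the description differs here. */
    mc->desc = "QEMU ARM Virtual Machine with CXL";
}

static const TypeInfo virt_cxl_machine_info = {
    .name = MACHINE_TYPE_NAME("virt-cxl"),
    .parent = TYPE_VIRT_MACHINE,
    .class_init = virt_cxl_machine_class_init,
};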

Any other routes open to me to getting this support available?

Jonathan



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 14:08           ` Jonathan Cameron via
@ 2022-06-24 14:54             ` Jonathan Cameron
  -1 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron via @ 2022-06-24 14:54 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi, Dan Williams

On Fri, 24 Jun 2022 15:08:44 +0100
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Fri, 24 Jun 2022 13:56:32 +0100
> Peter Maydell <peter.maydell@linaro.org> wrote:
> 
> > On Fri, 24 Jun 2022 at 13:39, Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:  
> > >
> > > On Fri, 24 Jun 2022 11:48:47 +0100
> > > Peter Maydell <peter.maydell@linaro.org> wrote:    
> > > >
> > > > This seems to be missing code to advertise the new devices in the
> > > > device tree.    
> > >
> > > Intentionally. I am not aware of any current interest
> > > in defining DT support CXL or supporting it in operating systems.
> > > Easy enough to add if anyone does the leg work to figure out the
> > > bindings, but that needs to come from someone who cares and
> > > would need to be driven by OS support and a usecase. The ACPI
> > > stuff here is defined as part of the main CXL spec because the
> > > target class of machines simply doesn't generally use DT.    
> > 
> > I don't really want new devices in the virt board that aren't
> > usable in the common use case of "just pass a kernel with -kernel"...
> > 
> > -- PMM  
> 
> Ok.  In that case, what do you suggest?  
> 
> Options I can think of:
> 
> 1) I can come up with plausible DT bindings, but I'm not sure how
> that will be received by the kernel community. It will add a bunch of
> infrastructure to maintain that may be seen as useless, at least in
> the short to medium term, and hence won't be in anyone's test matrices etc.

Just occurred to me there is another barrier to an approach that adds
DT bindings.
I'm fairly sure hw/pci-bridge/pci_expander_bridge.c (PXB)
only works on ACPI platforms and is the only host bridge supported
for CXL emulation in QEMU.

> 
> Dan, how open would you be to adding DT support? We'd have to ignore
> some of the firmware query stuff like QTGs as there isn't an equivalent
> in DT - or we'd have to go as far as defining actual devices with
> mailboxes to query that info.
> 
> 2) Add it to something like the SBSA machine, but that brings a large
> burden in firmware etc. and the need to communicate everything EDK2
> needs to build the ACPI tables in a flexible fashion via DT, plus a
> mass of EDK2 development.  Whilst the SBSA model is great for ARM
> specific stuff, requiring the large additional complexity of
> actually using it to test arch-independent software is probably
> going to just mean it bit rots.
> 
> 3) Fork virt (obviously sharing as much as possible), potentially I
> guess that could be pretty light weight by declaring a new
> TypeInfo that is very nearly identical to virt with just the few extra
> calls inserted. 
> 
> Any other routes open to me to getting this support available?
> 
> Jonathan
> 
> 



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 14:54             ` Jonathan Cameron
  (?)
@ 2022-06-24 15:01             ` Peter Maydell
  2022-06-24 15:59                 ` Jonathan Cameron via
  -1 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2022-06-24 15:01 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi, Dan Williams

On Fri, 24 Jun 2022 at 15:54, Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
> Just occurred to me there is another barrier to an approach that adds
> DT bindings.
> I'm fairly sure hw/pci-bridge/pci_expander_bridge.c (PXB)
> only works on ACPI platforms and is the only host bridge supported
> for CXL emulation in QEMU.

Isn't it probeable like any other PCI device/bridge ?

-- PMM

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-06-24 15:01             ` Peter Maydell
@ 2022-06-24 15:59                 ` Jonathan Cameron via
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-24 15:59 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi, Dan Williams

On Fri, 24 Jun 2022 16:01:42 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On Fri, 24 Jun 2022 at 15:54, Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> > Just occurred to me there is another barrier to an approach that adds
> > DT bindings.
> > I'm fairly sure hw/pci-bridge/pci_expander_bridge.c (PXB)
> > only works on ACPI platforms and is the only host bridge supported
> > for CXL emulation in QEMU.  
> 
> Isn't it probeable like any other PCI device/bridge ?

Nope - PXB is a really weird device. (I tested it quickly in case I was
wrong and indeed there is no sign of the device on the downstream side
without a suitable BIOS / ACPI.)
There is no driver support for it as such; rather, it presents
as two things.
1) An EP on the main host bridge - which is used for interrupt routing
   and possibly a few other things.  Linux has no idea that's what it
   is, though, so attaches no driver to it.  lspci shows this as
   Red Hat, Inc, QEMU PCIe Expander Bridge

2) A host bridge with firmware described characteristics (bus number
   range and similar).  Host bridges as defined in ACPI are a concept
   rather than actual hardware and presented to the OS via firmware
   descriptions (ACPI DSDT stuff in this case).

You could probably add a DT description via the
pci-host-ecam-generic bindings, though it would be an interesting
late bit of DT addition in the virt_machine_done() function, similar
to the fw_cfg and ACPI stuff done at that stage to deal with PXB
devices becoming visible.
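
Purely as a sketch of what that late addition might look like (none of
this exists today, the node is incomplete and the names are illustrative):

static void fdt_add_pxb_host_node(VirtMachineState *vms, int bus_nr,
                                  hwaddr ecam_base)
{
    MachineState *ms = MACHINE(vms);
    char *nodename = g_strdup_printf("/pxb-pcie@%" HWADDR_PRIx, ecam_base);

    qemu_fdt_add_subnode(ms->fdt, nodename);
    qemu_fdt_setprop_string(ms->fdt, nodename, "compatible",
                            "pci-host-ecam-generic");
    qemu_fdt_setprop_string(ms->fdt, nodename, "device_type", "pci");
    qemu_fdt_setprop_cells(ms->fdt, nodename, "bus-range", bus_nr, 0xff);
    /* A real binding would also need reg, ranges, interrupt routing etc. */
    g_free(nodename);
}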

So my gut feeling is PXB could be made to work with DT, but doesn't today.
Given the main use case for PXB is typically NUMA description, I guess no one
noticed on DT platforms.

Jonathan



 
> 
> -- PMM


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 2/2] qtest/cxl: Add aarch64 virt test for CXL
  2022-06-16 14:19   ` Jonathan Cameron via
  (?)
  (?)
@ 2022-06-24 16:12   ` Peter Maydell
  2022-06-24 17:59       ` Jonathan Cameron via
  -1 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2022-06-24 16:12 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Thu, 16 Jun 2022 at 15:20, Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> Add a single complex case for aarch64 virt machine.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
>  tests/qtest/cxl-test.c  | 48 +++++++++++++++++++++++++++++++++--------
>  tests/qtest/meson.build |  1 +
>  2 files changed, 40 insertions(+), 9 deletions(-)
>
> diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
> index 2133e973f4..1015d0e7c2 100644
> --- a/tests/qtest/cxl-test.c
> +++ b/tests/qtest/cxl-test.c
> @@ -17,6 +17,11 @@
>                        "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 " \
>                        "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
>
> +#define QEMU_VIRT_2PXB_CMD "-machine virt,cxl=on "                      \
> +                      "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
> +                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "  \
> +                      "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
> +

If CXL requires booting via UEFI, what does this test case do?
It doesn't seem to be passing in a BIOS image.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v11 2/2] qtest/cxl: Add aarch64 virt test for CXL
  2022-06-24 16:12   ` Peter Maydell
@ 2022-06-24 17:59       ` Jonathan Cameron via
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2022-06-24 17:59 UTC (permalink / raw)
  To: Peter Maydell
  Cc: qemu-devel, alex.bennee, qemu-arm, Michael S . Tsirkin,
	Ben Widawsky, Paolo Bonzini, linux-cxl, linuxarm,
	Marcel Apfelbaum, Igor Mammedov, Markus Armbruster,
	Mark Cave-Ayland, Adam Manzanares, Tong Zhang,
	Shameerali Kolothum Thodi

On Fri, 24 Jun 2022 17:12:25 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On Thu, 16 Jun 2022 at 15:20, Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > Add a single complex case for aarch64 virt machine.
> >
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > ---
> >  tests/qtest/cxl-test.c  | 48 +++++++++++++++++++++++++++++++++--------
> >  tests/qtest/meson.build |  1 +
> >  2 files changed, 40 insertions(+), 9 deletions(-)
> >
> > diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
> > index 2133e973f4..1015d0e7c2 100644
> > --- a/tests/qtest/cxl-test.c
> > +++ b/tests/qtest/cxl-test.c
> > @@ -17,6 +17,11 @@
> >                        "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 " \
> >                        "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
> >
> > +#define QEMU_VIRT_2PXB_CMD "-machine virt,cxl=on "                      \
> > +                      "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
> > +                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "  \
> > +                      "-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.targets.1=cxl.1,cxl-fmw.0.size=4G "
> > +  
> 
> If CXL requires booting via UEFI, what does this test case do?
> It doesn't seem to be passing in a BIOS image.

Not a lot beyond checking device creation is valid etc and the machine boots.

There is a bios tables test that checks we pass the right tables to the BIOS image.
I didn't duplicate that for ARM on the basis it's more or less identical, but
perhaps that is worth adding.

Doing any useful functional testing will require a mass of complex OS
handling after booting. That testing is definitely something I'd like to
add, but the userspace tooling isn't all in place yet. The final kernel series
needed to get to the point where you can use the non-volatile memory had
a new version posted yesterday.

Jonathan

> 
> thanks
> -- PMM


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2022-06-24 18:04 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-16 14:19 [PATCH v11 0/2] arm/virt: CXL support via pxb_cxl Jonathan Cameron
2022-06-16 14:19 ` Jonathan Cameron via
2022-06-16 14:19 ` [PATCH v11 1/2] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl Jonathan Cameron
2022-06-16 14:19   ` Jonathan Cameron via
2022-06-24 10:48   ` Peter Maydell
2022-06-24 12:39     ` Jonathan Cameron
2022-06-24 12:39       ` Jonathan Cameron via
2022-06-24 12:56       ` Peter Maydell
2022-06-24 14:08         ` Jonathan Cameron
2022-06-24 14:08           ` Jonathan Cameron via
2022-06-24 14:54           ` Jonathan Cameron via
2022-06-24 14:54             ` Jonathan Cameron
2022-06-24 15:01             ` Peter Maydell
2022-06-24 15:59               ` Jonathan Cameron
2022-06-24 15:59                 ` Jonathan Cameron via
2022-06-16 14:19 ` [PATCH v11 2/2] qtest/cxl: Add aarch64 virt test for CXL Jonathan Cameron
2022-06-16 14:19   ` Jonathan Cameron via
2022-06-24 10:41   ` Peter Maydell
2022-06-24 16:12   ` Peter Maydell
2022-06-24 17:59     ` Jonathan Cameron
2022-06-24 17:59       ` Jonathan Cameron via
2022-06-24  9:07 ` [PATCH v11 0/2] arm/virt: CXL support via pxb_cxl Jonathan Cameron
2022-06-24  9:07   ` Jonathan Cameron via
