* [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-05  1:40 ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

This patch set provides an ACPI code reorganization in preparation for
adding a shared hardware-reduced ACPI API to QEMU.

The changes come from the NEMU [1] project, where we're defining a new
x86 machine type: i386/virt. This is an EFI-only, ACPI hardware-reduced
platform built on top of a generic hardware-reduced ACPI API [2]. This
API was initially based on the generic parts of the
arm/virt-acpi-build.c implementation, and the goal is for both
i386/virt and arm/virt to duplicate as little code as possible by using
this new, shared API.

As a preliminary step toward adding this hardware-reduced ACPI API to
QEMU, we reorganized the ACPI code with the following goals:

* Share as much as possible of the current ACPI build APIs between
  legacy and hardware-reduced ACPI.
* Share the ACPI build code across machine types and architectures and
  remove the typical PC machine type dependency.

The patches are also available in their own git branch [3].

[1] https://github.com/intel/nemu
[2] https://github.com/intel/nemu/blob/topic/virt-x86/hw/acpi/reduced.c
[3] https://github.com/intel/nemu/tree/topic/upstream/acpi
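
For illustration only (none of this is code from the series): a rough
sketch of the direction the shared builder API takes. The actual
patches implement the ACPI build methods as a QOM Interface Class (see
"hw: acpi: Define ACPI tables builder interface" and
include/hw/acpi/builder.h below); the plain callback table and every
name in this sketch are simplified stand-ins, not the real prototypes.

/*
 * Hypothetical sketch of a machine-agnostic ACPI builder. The real
 * interface lives in include/hw/acpi/builder.h in this series; all
 * names below are illustrative only.
 */
typedef struct GArray GArray;          /* stands in for glib's GArray  */
typedef struct BIOSLinker BIOSLinker;  /* linker/loader script builder */
typedef struct MachineState MachineState;

typedef struct AcpiBuilderOps {
    /* Architecture-specific tables stay with the machine type... */
    void (*build_madt)(GArray *table_data, BIOSLinker *linker,
                       MachineState *ms);
    void (*build_srat)(GArray *table_data, BIOSLinker *linker,
                       MachineState *ms);
    /* ...while RSDP, MCFG, _OSC, etc. come from shared hw/acpi code. */
} AcpiBuilderOps;

/* Generic driver: talks only to the interface, never to PCMachineState. */
static inline void acpi_build_arch_tables(const AcpiBuilderOps *ops,
                                          GArray *tables, BIOSLinker *linker,
                                          MachineState *ms)
{
    if (ops->build_madt) {
        ops->build_madt(tables, linker, ms);
    }
    if (ops->build_srat) {
        ops->build_srat(tables, linker, ms);
    }
}

A machine type registers its implementation of that interface and the
generic hw/acpi code drives it, which is what lets arm/virt and the
planned i386/virt share one build path.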

v1 -> v2:
   * Drop the hardware-reduced implementation for now. Our next patch
     set will add hardware-reduced and convert arm/virt to it.
   * Implement the ACPI build methods as a QOM Interface Class and
     convert the PC machine type to it.
   * acpi_conf_pc_init() uses a PCMachineState pointer and not a
     MachineState one as its argument.

v2 -> v3:
   * Cc all relevant maintainers, no functional changes.

v3 -> v4:
   * Renamed all AcpiConfiguration pointers from conf to acpi_conf.
   * Removed the ACPI_BUILD_ALIGN_SIZE export.
   * Temporarily updated the arm virt build_rsdp() prototype for
     bisectability purposes.
   * Removed unneeded pci headers from acpi-build.c.
   * Refactored the ACPI PCI host getter so that it is truly
     architecture agnostic, by carrying the PCI host pointer through
     the AcpiConfiguration structure.
   * Split the PCI host AML builder API export patch from the PCI
     host and holes getter one.
   * Reduced the build_srat() export scope to hw/i386 instead of the
     broader hw/acpi. SRAT builders are truly architecture specific
     and can hardly be generalized.
   * Completed the ACPI builder documentation.

v4 -> v5:
   * Reorganize the ACPI RSDP export and XSDT implementation into 3
     patches.
   * Fix the hw/i386/acpi header inclusions.

Samuel Ortiz (16):
  hw: i386: Decouple the ACPI build from the PC machine type
  hw: acpi: Export ACPI build alignment API
  hw: acpi: The RSDP build API can return void
  hw: acpi: Export the RSDP build API
  hw: acpi: Implement XSDT support for RSDP
  hw: acpi: Factorize the RSDP build API implementation
  hw: i386: Move PCI host definitions to pci_host.h
  hw: acpi: Export the PCI host and holes getters
  hw: acpi: Do not create hotplug method when handler is not defined
  hw: i386: Make the hotpluggable memory size property more generic
  hw: i386: Export the i386 ACPI SRAT build method
  hw: i386: Export the MADT build method
  hw: acpi: Define ACPI tables builder interface
  hw: i386: Implement the ACPI builder interface for PC
  hw: pci-host: piix: Return PCI host pointer instead of PCI bus
  hw: i386: Set ACPI configuration PCI host pointer

Sebastien Boeuf (2):
  hw: acpi: Export the PCI hotplug API
  hw: acpi: Retrieve the PCI bus from AcpiPciHpState

Yang Zhong (6):
  hw: acpi: Generalize AML build routines
  hw: acpi: Factorize _OSC AML across architectures
  hw: acpi: Export and generalize the PCI host AML API
  hw: acpi: Export the MCFG getter
  hw: acpi: Fix memory hotplug AML generation error
  hw: i386: Refactor PCI host getter

 hw/i386/acpi-build.h           |    9 +-
 include/hw/acpi/acpi-defs.h    |   14 +
 include/hw/acpi/acpi.h         |   44 ++
 include/hw/acpi/aml-build.h    |   47 ++
 include/hw/acpi/builder.h      |  100 +++
 include/hw/i386/acpi.h         |   28 +
 include/hw/i386/pc.h           |   49 +-
 include/hw/mem/memory-device.h |    2 +
 include/hw/pci/pci_host.h      |    6 +
 hw/acpi/aml-build.c            |  981 +++++++++++++++++++++++++++++
 hw/acpi/builder.c              |   97 +++
 hw/acpi/cpu.c                  |    8 +-
 hw/acpi/cpu_hotplug.c          |    9 +-
 hw/acpi/memory_hotplug.c       |   21 +-
 hw/acpi/pcihp.c                |   10 +-
 hw/arm/virt-acpi-build.c       |   93 +--
 hw/i386/acpi-build.c           | 1072 +++-----------------------------
 hw/i386/pc.c                   |  198 +++---
 hw/i386/pc_piix.c              |   36 +-
 hw/i386/pc_q35.c               |   22 +-
 hw/i386/xen/xen-hvm.c          |   19 +-
 hw/pci-host/piix.c             |   32 +-
 stubs/pci-host-piix.c          |    6 -
 hw/acpi/Makefile.objs          |    1 +
 stubs/Makefile.objs            |    1 -
 25 files changed, 1644 insertions(+), 1261 deletions(-)
 create mode 100644 include/hw/acpi/builder.h
 create mode 100644 include/hw/i386/acpi.h
 create mode 100644 hw/acpi/builder.c
 delete mode 100644 stubs/pci-host-piix.c

-- 
2.19.1

^ permalink raw reply	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

ACPI tables are platform, machine type and even architecture agnostic,
and as such we want to provide an internal ACPI API that depends only
on platform-agnostic information.

For the x86 architecture, in order to build ACPI tables independently
of the PC and Q35 machine types, we are moving a few PCMachineState
structure fields into a machine-type-agnostic structure called
AcpiConfiguration. The structure fields we move are:

   HotplugHandler *acpi_dev
   AcpiNVDIMMState acpi_nvdimm_state
   FWCfgState *fw_cfg
   ram_addr_t below_4g_mem_size, above_4g_mem_size
   bool apic_xrupt_override
   unsigned apic_id_limit
   uint64_t numa_nodes
   uint64_t *node_mem
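
To make the intended flow concrete (illustrative only, not code from
this patch): a machine-done hook fills its AcpiConfiguration and hands
it, together with the MachineState, to the new acpi_setup() entry point
shown in the hunks below. Everything prefixed my_machine_ is a made-up
name; only the headers and acpi_setup() come from this patch.

#include "qemu/osdep.h"
#include "hw/acpi/acpi.h"        /* AcpiConfiguration, added below */
#include "hw/i386/acpi-build.h"  /* acpi_setup(machine, acpi_conf) */

/* Hypothetical caller; the PC machine does the equivalent from
 * pc_machine_done() via acpi_conf_pc_init() in this patch. */
static void my_machine_acpi_done(MachineState *machine,
                                 AcpiConfiguration *acpi_conf,
                                 FWCfgState *fw_cfg,
                                 HotplugHandler *acpi_dev)
{
    /* Fields the generic builder reads. */
    acpi_conf->fw_cfg = fw_cfg;
    acpi_conf->acpi_dev = acpi_dev;
    acpi_conf->rsdp_in_ram = true;
    acpi_conf->build_state = NULL;

    /* Same entry point the PC machine now uses. */
    acpi_setup(machine, acpi_conf);
}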

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/i386/acpi-build.h     |   4 +-
 include/hw/acpi/acpi.h   |  44 ++++++++++
 include/hw/i386/pc.h     |  19 ++---
 hw/acpi/cpu_hotplug.c    |   9 +-
 hw/arm/virt-acpi-build.c |  10 ---
 hw/i386/acpi-build.c     | 136 ++++++++++++++----------------
 hw/i386/pc.c             | 176 ++++++++++++++++++++++++---------------
 hw/i386/pc_piix.c        |  21 ++---
 hw/i386/pc_q35.c         |  21 ++---
 hw/i386/xen/xen-hvm.c    |  19 +++--
 10 files changed, 257 insertions(+), 202 deletions(-)

diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
index 007332e51c..065a1d8250 100644
--- a/hw/i386/acpi-build.h
+++ b/hw/i386/acpi-build.h
@@ -2,6 +2,8 @@
 #ifndef HW_I386_ACPI_BUILD_H
 #define HW_I386_ACPI_BUILD_H
 
-void acpi_setup(void);
+#include "hw/acpi/acpi.h"
+
+void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
 
 #endif
diff --git a/include/hw/acpi/acpi.h b/include/hw/acpi/acpi.h
index c20ace0d0b..254c8d0cfc 100644
--- a/include/hw/acpi/acpi.h
+++ b/include/hw/acpi/acpi.h
@@ -24,6 +24,8 @@
 #include "exec/memory.h"
 #include "hw/irq.h"
 #include "hw/acpi/acpi_dev_interface.h"
+#include "hw/hotplug.h"
+#include "hw/mem/nvdimm.h"
 
 /*
  * current device naming scheme supports up to 256 memory devices
@@ -186,6 +188,48 @@ extern int acpi_enabled;
 extern char unsigned *acpi_tables;
 extern size_t acpi_tables_len;
 
+typedef
+struct AcpiBuildState {
+    /* Copy of table in RAM (for patching). */
+    MemoryRegion *table_mr;
+    /* Is table patched? */
+    bool patched;
+    void *rsdp;
+    MemoryRegion *rsdp_mr;
+    MemoryRegion *linker_mr;
+} AcpiBuildState;
+
+typedef
+struct AcpiConfiguration {
+    /* Machine class ACPI settings */
+    int legacy_acpi_table_size;
+    bool rsdp_in_ram;
+    unsigned acpi_data_size;
+
+    /* Machine state ACPI settings */
+    HotplugHandler *acpi_dev;
+    AcpiNVDIMMState acpi_nvdimm_state;
+
+    /*
+     * The fields below are machine settings that
+     * are not ACPI specific. However they are needed
+     * for building ACPI tables and as such should be
+     * carried through the ACPI configuration structure.
+     */
+    bool legacy_cpu_hotplug;
+    bool linuxboot_dma_enabled;
+    FWCfgState *fw_cfg;
+    ram_addr_t below_4g_mem_size, above_4g_mem_size;
+    uint64_t numa_nodes;
+    uint64_t *node_mem;
+    bool apic_xrupt_override;
+    unsigned apic_id_limit;
+    PCIHostState *pci_host;
+
+    /* Build state */
+    AcpiBuildState *build_state;
+} AcpiConfiguration;
+
 uint8_t *acpi_table_first(void);
 uint8_t *acpi_table_next(uint8_t *current);
 unsigned acpi_table_len(void *current);
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 136fe497b6..fed136fcdd 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -12,6 +12,7 @@
 #include "qemu/range.h"
 #include "qemu/bitmap.h"
 #include "sysemu/sysemu.h"
+#include "hw/acpi/acpi.h"
 #include "hw/pci/pci.h"
 #include "hw/compat.h"
 #include "hw/mem/pc-dimm.h"
@@ -35,10 +36,8 @@ struct PCMachineState {
     Notifier machine_done;
 
     /* Pointers to devices and objects: */
-    HotplugHandler *acpi_dev;
     ISADevice *rtc;
     PCIBus *bus;
-    FWCfgState *fw_cfg;
     qemu_irq *gsi;
 
     /* Configuration options: */
@@ -46,28 +45,20 @@ struct PCMachineState {
     OnOffAuto vmport;
     OnOffAuto smm;
 
-    AcpiNVDIMMState acpi_nvdimm_state;
-
     bool acpi_build_enabled;
     bool smbus;
     bool sata;
     bool pit;
 
-    /* RAM information (sizes, addresses, configuration): */
-    ram_addr_t below_4g_mem_size, above_4g_mem_size;
-
-    /* CPU and apic information: */
-    bool apic_xrupt_override;
-    unsigned apic_id_limit;
+    /* CPU information */
     uint16_t boot_cpus;
 
-    /* NUMA information: */
-    uint64_t numa_nodes;
-    uint64_t *node_mem;
-
     /* Address space used by IOAPIC device. All IOAPIC interrupts
      * will be translated to MSI messages in the address space. */
     AddressSpace *ioapic_as;
+
+    /* ACPI configuration */
+    AcpiConfiguration acpi_configuration;
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
diff --git a/hw/acpi/cpu_hotplug.c b/hw/acpi/cpu_hotplug.c
index 5243918125..634dc3b846 100644
--- a/hw/acpi/cpu_hotplug.c
+++ b/hw/acpi/cpu_hotplug.c
@@ -237,9 +237,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
     /* The current AML generator can cover the APIC ID range [0..255],
      * inclusive, for VCPU hotplug. */
     QEMU_BUILD_BUG_ON(ACPI_CPU_HOTPLUG_ID_LIMIT > 256);
-    if (pcms->apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
+    if (pcms->acpi_configuration.apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
         error_report("max_cpus is too large. APIC ID of last CPU is %u",
-                     pcms->apic_id_limit - 1);
+                     pcms->acpi_configuration.apic_id_limit - 1);
         exit(1);
     }
 
@@ -316,8 +316,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
      * ith up to 255 elements. Windows guests up to win2k8 fail when
      * VarPackageOp is used.
      */
-    pkg = pcms->apic_id_limit <= 255 ? aml_package(pcms->apic_id_limit) :
-                                       aml_varpackage(pcms->apic_id_limit);
+    pkg = pcms->acpi_configuration.apic_id_limit <= 255 ?
+        aml_package(pcms->acpi_configuration.apic_id_limit) :
+        aml_varpackage(pcms->acpi_configuration.apic_id_limit);
 
     for (i = 0, apic_idx = 0; i < apic_ids->len; i++) {
         int apic_id = apic_ids->cpus[i].arch_id;
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 5785fb697c..f28a2faa53 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -790,16 +790,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     free_aml_allocator();
 }
 
-typedef
-struct AcpiBuildState {
-    /* Copy of table in RAM (for patching). */
-    MemoryRegion *table_mr;
-    MemoryRegion *rsdp_mr;
-    MemoryRegion *linker_mr;
-    /* Is table patched? */
-    bool patched;
-} AcpiBuildState;
-
 static
 void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1599caa7c5..d0362e1382 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -338,13 +338,14 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
 }
 
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
+build_madt(GArray *table_data, BIOSLinker *linker,
+           MachineState *ms, AcpiConfiguration *acpi_conf)
 {
-    MachineClass *mc = MACHINE_GET_CLASS(pcms);
-    const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(ms);
     int madt_start = table_data->len;
-    AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
-    AcpiDeviceIf *adev = ACPI_DEVICE_IF(pcms->acpi_dev);
+    AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_conf->acpi_dev);
+    AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_conf->acpi_dev);
     bool x2apic_mode = false;
 
     AcpiMultipleApicTable *madt;
@@ -370,7 +371,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
     io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
     io_apic->interrupt = cpu_to_le32(0);
 
-    if (pcms->apic_xrupt_override) {
+    if (acpi_conf->apic_xrupt_override) {
         intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
         intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
         intsrcovr->length = sizeof(*intsrcovr);
@@ -1786,13 +1787,12 @@ static Aml *build_q35_osc_method(void)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
-           Range *pci_hole, Range *pci_hole64, MachineState *machine)
+           Range *pci_hole, Range *pci_hole64,
+           MachineState *machine, AcpiConfiguration *acpi_conf)
 {
     CrsRangeEntry *entry;
     Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
     CrsRangeSet crs_range_set;
-    PCMachineState *pcms = PC_MACHINE(machine);
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(machine);
     uint32_t nr_mem = machine->ram_slots;
     int root_bus_limit = 0xFF;
     PCIBus *bus = NULL;
@@ -1836,7 +1836,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         build_q35_pci0_int(dsdt);
     }
 
-    if (pcmc->legacy_cpu_hotplug) {
+    if (acpi_conf->legacy_cpu_hotplug) {
         build_legacy_cpu_hotplug_aml(dsdt, machine, pm->cpu_hp_io_base);
     } else {
         CPUHotplugFeatures opts = {
@@ -1860,7 +1860,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(scope, method);
         }
 
-        if (pcms->acpi_nvdimm_state.is_enabled) {
+        if (acpi_conf->acpi_nvdimm_state.is_enabled) {
             method = aml_method("_E04", 0, AML_NOTSERIALIZED);
             aml_append(method, aml_notify(aml_name("\\_SB.NVDR"),
                                           aml_int(0x80)));
@@ -2041,7 +2041,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
          * with half of the 16-bit control register. Hence, the total size
          * of the i/o region used is FW_CFG_CTL_SIZE; when using DMA, the
          * DMA control register is located at FW_CFG_DMA_IO_BASE + 4 */
-        uint8_t io_size = object_property_get_bool(OBJECT(pcms->fw_cfg),
+        uint8_t io_size = object_property_get_bool(OBJECT(acpi_conf->fw_cfg),
                                                    "dma_enabled", NULL) ?
                           ROUND_UP(FW_CFG_CTL_SIZE, 4) + sizeof(dma_addr_t) :
                           FW_CFG_CTL_SIZE;
@@ -2252,7 +2252,8 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray *tcpalog)
 #define HOLE_640K_END   (1 * MiB)
 
 static void
-build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
+build_srat(GArray *table_data, BIOSLinker *linker,
+           MachineState *machine, AcpiConfiguration *acpi_conf)
 {
     AcpiSystemResourceAffinityTable *srat;
     AcpiSratMemoryAffinity *numamem;
@@ -2262,9 +2263,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     uint64_t mem_len, mem_base, next_base;
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
-    PCMachineState *pcms = PC_MACHINE(machine);
     ram_addr_t hotplugabble_address_space_size =
-        object_property_get_int(OBJECT(pcms), PC_MACHINE_DEVMEM_REGION_SIZE,
+        object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
                                 NULL);
 
     srat_start = table_data->len;
@@ -2306,9 +2306,9 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     next_base = 0;
     numa_start = table_data->len;
 
-    for (i = 1; i < pcms->numa_nodes + 1; ++i) {
+    for (i = 1; i < acpi_conf->numa_nodes + 1; ++i) {
         mem_base = next_base;
-        mem_len = pcms->node_mem[i - 1];
+        mem_len = acpi_conf->node_mem[i - 1];
         next_base = mem_base + mem_len;
 
         /* Cut out the 640K hole */
@@ -2331,16 +2331,16 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
         }
 
         /* Cut out the ACPI_PCI hole */
-        if (mem_base <= pcms->below_4g_mem_size &&
-            next_base > pcms->below_4g_mem_size) {
-            mem_len -= next_base - pcms->below_4g_mem_size;
+        if (mem_base <= acpi_conf->below_4g_mem_size &&
+            next_base > acpi_conf->below_4g_mem_size) {
+            mem_len -= next_base - acpi_conf->below_4g_mem_size;
             if (mem_len > 0) {
                 numamem = acpi_data_push(table_data, sizeof *numamem);
                 build_srat_memory(numamem, mem_base, mem_len, i - 1,
                                   MEM_AFFINITY_ENABLED);
             }
             mem_base = 1ULL << 32;
-            mem_len = next_base - pcms->below_4g_mem_size;
+            mem_len = next_base - acpi_conf->below_4g_mem_size;
             next_base = mem_base + mem_len;
         }
 
@@ -2351,7 +2351,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
         }
     }
     slots = (table_data->len - numa_start) / sizeof *numamem;
-    for (; slots < pcms->numa_nodes + 2; slots++) {
+    for (; slots < acpi_conf->numa_nodes + 2; slots++) {
         numamem = acpi_data_push(table_data, sizeof *numamem);
         build_srat_memory(numamem, 0, 0, 0, MEM_AFFINITY_NOFLAGS);
     }
@@ -2367,7 +2367,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     if (hotplugabble_address_space_size) {
         numamem = acpi_data_push(table_data, sizeof *numamem);
         build_srat_memory(numamem, machine->device_memory->base,
-                          hotplugabble_address_space_size, pcms->numa_nodes - 1,
+                          hotplugabble_address_space_size,
+                          acpi_conf->numa_nodes - 1,
                           MEM_AFFINITY_HOTPLUGGABLE | MEM_AFFINITY_ENABLED);
     }
 
@@ -2546,17 +2547,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
     return rsdp_table;
 }
 
-typedef
-struct AcpiBuildState {
-    /* Copy of table in RAM (for patching). */
-    MemoryRegion *table_mr;
-    /* Is table patched? */
-    uint8_t patched;
-    void *rsdp;
-    MemoryRegion *rsdp_mr;
-    MemoryRegion *linker_mr;
-} AcpiBuildState;
-
 static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
 {
     Object *pci_host;
@@ -2580,10 +2570,9 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
 }
 
 static
-void acpi_build(AcpiBuildTables *tables, MachineState *machine)
+void acpi_build(AcpiBuildTables *tables,
+                MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    PCMachineState *pcms = PC_MACHINE(machine);
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     GArray *table_offsets;
     unsigned facs, dsdt, rsdt, fadt;
     AcpiPmInfo pm;
@@ -2621,7 +2610,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     /* DSDT is pointed to by FADT */
     dsdt = tables_blob->len;
     build_dsdt(tables_blob, tables->linker, &pm, &misc,
-               &pci_hole, &pci_hole64, machine);
+               &pci_hole, &pci_hole64, machine, acpi_conf);
 
     /* Count the size of the DSDT and SSDT, we will need it for legacy
      * sizing of ACPI tables.
@@ -2639,7 +2628,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     aml_len += tables_blob->len - fadt;
 
     acpi_add_table(table_offsets, tables_blob);
-    build_madt(tables_blob, tables->linker, pcms);
+    build_madt(tables_blob, tables->linker, machine, acpi_conf);
 
     vmgenid_dev = find_vmgenid_dev();
     if (vmgenid_dev) {
@@ -2661,9 +2650,9 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
             build_tpm2(tables_blob, tables->linker, tables->tcpalog);
         }
     }
-    if (pcms->numa_nodes) {
+    if (acpi_conf->numa_nodes) {
         acpi_add_table(table_offsets, tables_blob);
-        build_srat(tables_blob, tables->linker, machine);
+        build_srat(tables_blob, tables->linker, machine, acpi_conf);
         if (have_numa_distance) {
             acpi_add_table(table_offsets, tables_blob);
             build_slit(tables_blob, tables->linker);
@@ -2683,9 +2672,9 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
             build_dmar_q35(tables_blob, tables->linker);
         }
     }
-    if (pcms->acpi_nvdimm_state.is_enabled) {
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
-                          &pcms->acpi_nvdimm_state, machine->ram_slots);
+                          &acpi_conf->acpi_nvdimm_state, machine->ram_slots);
     }
 
     /* Add tables supplied by user (if any) */
@@ -2721,13 +2710,13 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
      *
      * All this is for PIIX4, since QEMU 2.0 didn't support Q35 migration.
      */
-    if (pcmc->legacy_acpi_table_size) {
+    if (acpi_conf->legacy_acpi_table_size) {
         /* Subtracting aml_len gives the size of fixed tables.  Then add the
          * size of the PIIX4 DSDT/SSDT in QEMU 2.0.
          */
         int legacy_aml_len =
-            pcmc->legacy_acpi_table_size +
-            ACPI_BUILD_LEGACY_CPU_AML_SIZE * pcms->apic_id_limit;
+            acpi_conf->legacy_acpi_table_size +
+            ACPI_BUILD_LEGACY_CPU_AML_SIZE * acpi_conf->apic_id_limit;
         int legacy_table_size =
             ROUND_UP(tables_blob->len - aml_len + legacy_aml_len,
                      ACPI_BUILD_ALIGN_SIZE);
@@ -2772,9 +2761,17 @@ static void acpi_ram_update(MemoryRegion *mr, GArray *data)
 
 static void acpi_build_update(void *build_opaque)
 {
-    AcpiBuildState *build_state = build_opaque;
+    AcpiConfiguration *acpi_conf = build_opaque;
+    AcpiBuildState *build_state;
     AcpiBuildTables tables;
 
+    /* No ACPI configuration? Nothing to do. */
+    if (!acpi_conf) {
+        return;
+    }
+
+    build_state = acpi_conf->build_state;
+
     /* No state to update or already patched? Nothing to do. */
     if (!build_state || build_state->patched) {
         return;
@@ -2783,7 +2780,7 @@ static void acpi_build_update(void *build_opaque)
 
     acpi_build_tables_init(&tables);
 
-    acpi_build(&tables, MACHINE(qdev_get_machine()));
+    acpi_build(&tables, MACHINE(qdev_get_machine()), acpi_conf);
 
     acpi_ram_update(build_state->table_mr, tables.table_data);
 
@@ -2803,12 +2800,12 @@ static void acpi_build_reset(void *build_opaque)
     build_state->patched = 0;
 }
 
-static MemoryRegion *acpi_add_rom_blob(AcpiBuildState *build_state,
+static MemoryRegion *acpi_add_rom_blob(AcpiConfiguration *acpi_conf,
                                        GArray *blob, const char *name,
                                        uint64_t max_size)
 {
     return rom_add_blob(name, blob->data, acpi_data_len(blob), max_size, -1,
-                        name, acpi_build_update, build_state, NULL, true);
+                        name, acpi_build_update, acpi_conf, NULL, true);
 }
 
 static const VMStateDescription vmstate_acpi_build = {
@@ -2816,59 +2813,48 @@ static const VMStateDescription vmstate_acpi_build = {
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
-        VMSTATE_UINT8(patched, AcpiBuildState),
+        VMSTATE_BOOL(patched, AcpiBuildState),
         VMSTATE_END_OF_LIST()
     },
 };
 
-void acpi_setup(void)
+void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     AcpiBuildTables tables;
     AcpiBuildState *build_state;
     Object *vmgenid_dev;
 
-    if (!pcms->fw_cfg) {
-        ACPI_BUILD_DPRINTF("No fw cfg. Bailing out.\n");
-        return;
-    }
-
-    if (!pcms->acpi_build_enabled) {
-        ACPI_BUILD_DPRINTF("ACPI build disabled. Bailing out.\n");
-        return;
-    }
-
-    if (!acpi_enabled) {
-        ACPI_BUILD_DPRINTF("ACPI disabled. Bailing out.\n");
+    if (!acpi_conf) {
+        ACPI_BUILD_DPRINTF("No ACPI config. Bailing out.\n");
         return;
     }
 
     build_state = g_malloc0(sizeof *build_state);
+    acpi_conf->build_state = build_state;
 
     acpi_build_tables_init(&tables);
-    acpi_build(&tables, MACHINE(pcms));
+    acpi_build(&tables, machine, acpi_conf);
 
     /* Now expose it all to Guest */
-    build_state->table_mr = acpi_add_rom_blob(build_state, tables.table_data,
+    build_state->table_mr = acpi_add_rom_blob(acpi_conf, tables.table_data,
                                                ACPI_BUILD_TABLE_FILE,
                                                ACPI_BUILD_TABLE_MAX_SIZE);
     assert(build_state->table_mr != NULL);
 
     build_state->linker_mr =
-        acpi_add_rom_blob(build_state, tables.linker->cmd_blob,
+        acpi_add_rom_blob(acpi_conf, tables.linker->cmd_blob,
                           "etc/table-loader", 0);
 
-    fw_cfg_add_file(pcms->fw_cfg, ACPI_BUILD_TPMLOG_FILE,
+    fw_cfg_add_file(acpi_conf->fw_cfg, ACPI_BUILD_TPMLOG_FILE,
                     tables.tcpalog->data, acpi_data_len(tables.tcpalog));
 
     vmgenid_dev = find_vmgenid_dev();
     if (vmgenid_dev) {
-        vmgenid_add_fw_cfg(VMGENID(vmgenid_dev), pcms->fw_cfg,
+        vmgenid_add_fw_cfg(VMGENID(vmgenid_dev), acpi_conf->fw_cfg,
                            tables.vmgenid);
     }
 
-    if (!pcmc->rsdp_in_ram) {
+    if (!acpi_conf->rsdp_in_ram) {
         /*
          * Keep for compatibility with old machine types.
          * Though RSDP is small, its contents isn't immutable, so
@@ -2877,13 +2863,13 @@ void acpi_setup(void)
         uint32_t rsdp_size = acpi_data_len(tables.rsdp);
 
         build_state->rsdp = g_memdup(tables.rsdp->data, rsdp_size);
-        fw_cfg_add_file_callback(pcms->fw_cfg, ACPI_BUILD_RSDP_FILE,
-                                 acpi_build_update, NULL, build_state,
+        fw_cfg_add_file_callback(acpi_conf->fw_cfg, ACPI_BUILD_RSDP_FILE,
+                                 acpi_build_update, NULL, acpi_conf,
                                  build_state->rsdp, rsdp_size, true);
         build_state->rsdp_mr = NULL;
     } else {
         build_state->rsdp = NULL;
-        build_state->rsdp_mr = acpi_add_rom_blob(build_state, tables.rsdp,
+        build_state->rsdp_mr = acpi_add_rom_blob(acpi_conf, tables.rsdp,
                                                   ACPI_BUILD_RSDP_FILE, 0);
     }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index f095725dba..090f969933 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -444,17 +444,18 @@ void pc_cmos_init(PCMachineState *pcms,
 {
     int val;
     static pc_cmos_init_late_arg arg;
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /* various important CMOS locations needed by PC/Bochs bios */
 
     /* memory size */
     /* base memory (first MiB) */
-    val = MIN(pcms->below_4g_mem_size / KiB, 640);
+    val = MIN(acpi_conf->below_4g_mem_size / KiB, 640);
     rtc_set_memory(s, 0x15, val);
     rtc_set_memory(s, 0x16, val >> 8);
     /* extended memory (next 64MiB) */
-    if (pcms->below_4g_mem_size > 1 * MiB) {
-        val = (pcms->below_4g_mem_size - 1 * MiB) / KiB;
+    if (acpi_conf->below_4g_mem_size > 1 * MiB) {
+        val = (acpi_conf->below_4g_mem_size - 1 * MiB) / KiB;
     } else {
         val = 0;
     }
@@ -465,8 +466,8 @@ void pc_cmos_init(PCMachineState *pcms,
     rtc_set_memory(s, 0x30, val);
     rtc_set_memory(s, 0x31, val >> 8);
     /* memory between 16MiB and 4GiB */
-    if (pcms->below_4g_mem_size > 16 * MiB) {
-        val = (pcms->below_4g_mem_size - 16 * MiB) / (64 * KiB);
+    if (acpi_conf->below_4g_mem_size > 16 * MiB) {
+        val = (acpi_conf->below_4g_mem_size - 16 * MiB) / (64 * KiB);
     } else {
         val = 0;
     }
@@ -475,7 +476,7 @@ void pc_cmos_init(PCMachineState *pcms,
     rtc_set_memory(s, 0x34, val);
     rtc_set_memory(s, 0x35, val >> 8);
     /* memory above 4GiB */
-    val = pcms->above_4g_mem_size / 65536;
+    val = acpi_conf->above_4g_mem_size / 65536;
     rtc_set_memory(s, 0x5b, val);
     rtc_set_memory(s, 0x5c, val >> 8);
     rtc_set_memory(s, 0x5d, val >> 16);
@@ -714,13 +715,14 @@ static void pc_build_smbios(PCMachineState *pcms)
     unsigned i, array_count;
     MachineState *ms = MACHINE(pcms);
     X86CPU *cpu = X86_CPU(ms->possible_cpus->cpus[0].cpu);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /* tell smbios about cpuid version and features */
     smbios_set_cpuid(cpu->env.cpuid_version, cpu->env.features[FEAT_1_EDX]);
 
     smbios_tables = smbios_get_table_legacy(&smbios_tables_len);
     if (smbios_tables) {
-        fw_cfg_add_bytes(pcms->fw_cfg, FW_CFG_SMBIOS_ENTRIES,
+        fw_cfg_add_bytes(acpi_conf->fw_cfg, FW_CFG_SMBIOS_ENTRIES,
                          smbios_tables, smbios_tables_len);
     }
 
@@ -741,9 +743,9 @@ static void pc_build_smbios(PCMachineState *pcms)
     g_free(mem_array);
 
     if (smbios_anchor) {
-        fw_cfg_add_file(pcms->fw_cfg, "etc/smbios/smbios-tables",
+        fw_cfg_add_file(acpi_conf->fw_cfg, "etc/smbios/smbios-tables",
                         smbios_tables, smbios_tables_len);
-        fw_cfg_add_file(pcms->fw_cfg, "etc/smbios/smbios-anchor",
+        fw_cfg_add_file(acpi_conf->fw_cfg, "etc/smbios/smbios-anchor",
                         smbios_anchor, smbios_anchor_len);
     }
 }
@@ -755,6 +757,7 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
     int i;
     const CPUArchIdList *cpus;
     MachineClass *mc = MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     fw_cfg = fw_cfg_init_io_dma(FW_CFG_IO_BASE, FW_CFG_IO_BASE + 4, as);
     fw_cfg_add_i16(fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
@@ -771,7 +774,7 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
      * So for compatibility reasons with old BIOSes we are stuck with
      * "etc/max-cpus" actually being apic_id_limit
      */
-    fw_cfg_add_i16(fw_cfg, FW_CFG_MAX_CPUS, (uint16_t)pcms->apic_id_limit);
+    fw_cfg_add_i16(fw_cfg, FW_CFG_MAX_CPUS, (uint16_t)acpi_conf->apic_id_limit);
     fw_cfg_add_i64(fw_cfg, FW_CFG_RAM_SIZE, (uint64_t)ram_size);
     fw_cfg_add_bytes(fw_cfg, FW_CFG_ACPI_TABLES,
                      acpi_tables, acpi_tables_len);
@@ -787,20 +790,21 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
      * of nodes, one word for each VCPU->node and one word for each node to
      * hold the amount of memory.
      */
-    numa_fw_cfg = g_new0(uint64_t, 1 + pcms->apic_id_limit + nb_numa_nodes);
+    numa_fw_cfg = g_new0(uint64_t,
+                         1 + acpi_conf->apic_id_limit + nb_numa_nodes);
     numa_fw_cfg[0] = cpu_to_le64(nb_numa_nodes);
     cpus = mc->possible_cpu_arch_ids(MACHINE(pcms));
     for (i = 0; i < cpus->len; i++) {
         unsigned int apic_id = cpus->cpus[i].arch_id;
-        assert(apic_id < pcms->apic_id_limit);
+        assert(apic_id < acpi_conf->apic_id_limit);
         numa_fw_cfg[apic_id + 1] = cpu_to_le64(cpus->cpus[i].props.node_id);
     }
     for (i = 0; i < nb_numa_nodes; i++) {
-        numa_fw_cfg[pcms->apic_id_limit + 1 + i] =
+        numa_fw_cfg[acpi_conf->apic_id_limit + 1 + i] =
             cpu_to_le64(numa_info[i].node_mem);
     }
     fw_cfg_add_bytes(fw_cfg, FW_CFG_NUMA, numa_fw_cfg,
-                     (1 + pcms->apic_id_limit + nb_numa_nodes) *
+                     (1 + acpi_conf->apic_id_limit + nb_numa_nodes) *
                      sizeof(*numa_fw_cfg));
 
     return fw_cfg;
@@ -848,6 +852,7 @@ static void load_linux(PCMachineState *pcms,
     char *vmode;
     MachineState *machine = MACHINE(pcms);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     struct setup_data *setup_data;
     const char *kernel_filename = machine->kernel_filename;
     const char *initrd_filename = machine->initrd_filename;
@@ -917,8 +922,8 @@ static void load_linux(PCMachineState *pcms,
         initrd_max = 0x37ffffff;
     }
 
-    if (initrd_max >= pcms->below_4g_mem_size - pcmc->acpi_data_size) {
-        initrd_max = pcms->below_4g_mem_size - pcmc->acpi_data_size - 1;
+    if (initrd_max >= acpi_conf->below_4g_mem_size - pcmc->acpi_data_size) {
+        initrd_max = acpi_conf->below_4g_mem_size - pcmc->acpi_data_size - 1;
     }
 
     fw_cfg_add_i32(fw_cfg, FW_CFG_CMDLINE_ADDR, cmdline_addr);
@@ -1154,7 +1159,8 @@ void pc_cpus_init(PCMachineState *pcms)
      *
      * This is used for FW_CFG_MAX_CPUS. See comments on bochs_bios_init().
      */
-    pcms->apic_id_limit = x86_cpu_apic_id_from_index(max_cpus - 1) + 1;
+    pcms->acpi_configuration.apic_id_limit =
+        x86_cpu_apic_id_from_index(max_cpus - 1) + 1;
     possible_cpus = mc->possible_cpu_arch_ids(ms);
     for (i = 0; i < smp_cpus; i++) {
         pc_new_cpu(possible_cpus->cpus[i].type, possible_cpus->cpus[i].arch_id,
@@ -1188,7 +1194,8 @@ static void pc_build_feature_control_file(PCMachineState *pcms)
 
     val = g_malloc(sizeof(*val));
     *val = cpu_to_le64(feature_control_bits | FEATURE_CONTROL_LOCKED);
-    fw_cfg_add_file(pcms->fw_cfg, "etc/msr_feature_control", val, sizeof(*val));
+    fw_cfg_add_file(pcms->acpi_configuration.fw_cfg,
+                    "etc/msr_feature_control", val, sizeof(*val));
 }
 
 static void rtc_set_cpus_count(ISADevice *rtc, uint16_t cpus_count)
@@ -1204,11 +1211,26 @@ static void rtc_set_cpus_count(ISADevice *rtc, uint16_t cpus_count)
     }
 }
 
+static void acpi_conf_pc_init(PCMachineState *pcms)
+{
+    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
+
+    /* Machine class settings */
+    acpi_conf->legacy_acpi_table_size = pcmc->legacy_acpi_table_size;
+    acpi_conf->legacy_cpu_hotplug = pcmc->legacy_cpu_hotplug;
+    acpi_conf->rsdp_in_ram = pcmc->rsdp_in_ram;
+
+    /* ACPI build state */
+    acpi_conf->build_state = NULL;
+}
+
 static
 void pc_machine_done(Notifier *notifier, void *data)
 {
     PCMachineState *pcms = container_of(notifier,
                                         PCMachineState, machine_done);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     PCIBus *bus = pcms->bus;
 
     /* set the number of CPUs */
@@ -1223,23 +1245,27 @@ void pc_machine_done(Notifier *notifier, void *data)
                 extra_hosts++;
             }
         }
-        if (extra_hosts && pcms->fw_cfg) {
+        if (extra_hosts && acpi_conf->fw_cfg) {
             uint64_t *val = g_malloc(sizeof(*val));
             *val = cpu_to_le64(extra_hosts);
-            fw_cfg_add_file(pcms->fw_cfg,
+            fw_cfg_add_file(acpi_conf->fw_cfg,
                     "etc/extra-pci-roots", val, sizeof(*val));
         }
     }
 
-    acpi_setup();
-    if (pcms->fw_cfg) {
+    if (pcms->acpi_build_enabled) {
+        acpi_conf_pc_init(pcms);
+        acpi_setup(MACHINE(pcms), acpi_conf);
+    }
+
+    if (acpi_conf->fw_cfg) {
         pc_build_smbios(pcms);
         pc_build_feature_control_file(pcms);
         /* update FW_CFG_NB_CPUS to account for -device added CPUs */
-        fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+        fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
     }
 
-    if (pcms->apic_id_limit > 255 && !xen_enabled()) {
+    if (acpi_conf->apic_id_limit > 255 && !xen_enabled()) {
         IntelIOMMUState *iommu = INTEL_IOMMU_DEVICE(x86_iommu_get_default());
 
         if (!iommu || !iommu->x86_iommu.intr_supported ||
@@ -1256,13 +1282,14 @@ void pc_machine_done(Notifier *notifier, void *data)
 void pc_guest_info_init(PCMachineState *pcms)
 {
     int i;
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    pcms->apic_xrupt_override = kvm_allows_irq0_override();
-    pcms->numa_nodes = nb_numa_nodes;
-    pcms->node_mem = g_malloc0(pcms->numa_nodes *
-                                    sizeof *pcms->node_mem);
+    acpi_conf->apic_xrupt_override = kvm_allows_irq0_override();
+    acpi_conf->numa_nodes = nb_numa_nodes;
+    acpi_conf->node_mem = g_malloc0(acpi_conf->numa_nodes *
+                                    sizeof *acpi_conf->node_mem);
     for (i = 0; i < nb_numa_nodes; i++) {
-        pcms->node_mem[i] = numa_info[i].node_mem;
+        acpi_conf->node_mem[i] = numa_info[i].node_mem;
     }
 
     pcms->machine_done.notify = pc_machine_done;
@@ -1323,7 +1350,7 @@ void xen_load_linux(PCMachineState *pcms)
                !strcmp(option_rom[i].name, "multiboot.bin"));
         rom_add_option(option_rom[i].name, option_rom[i].bootindex);
     }
-    pcms->fw_cfg = fw_cfg;
+    pcms->acpi_configuration.fw_cfg = fw_cfg;
 }
 
 void pc_memory_init(PCMachineState *pcms,
@@ -1337,9 +1364,10 @@ void pc_memory_init(PCMachineState *pcms,
     FWCfgState *fw_cfg;
     MachineState *machine = MACHINE(pcms);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    assert(machine->ram_size == pcms->below_4g_mem_size +
-                                pcms->above_4g_mem_size);
+    assert(machine->ram_size == acpi_conf->below_4g_mem_size +
+                                acpi_conf->above_4g_mem_size);
 
     linux_boot = (machine->kernel_filename != NULL);
 
@@ -1353,17 +1381,17 @@ void pc_memory_init(PCMachineState *pcms,
     *ram_memory = ram;
     ram_below_4g = g_malloc(sizeof(*ram_below_4g));
     memory_region_init_alias(ram_below_4g, NULL, "ram-below-4g", ram,
-                             0, pcms->below_4g_mem_size);
+                             0, acpi_conf->below_4g_mem_size);
     memory_region_add_subregion(system_memory, 0, ram_below_4g);
-    e820_add_entry(0, pcms->below_4g_mem_size, E820_RAM);
-    if (pcms->above_4g_mem_size > 0) {
+    e820_add_entry(0, acpi_conf->below_4g_mem_size, E820_RAM);
+    if (acpi_conf->above_4g_mem_size > 0) {
         ram_above_4g = g_malloc(sizeof(*ram_above_4g));
         memory_region_init_alias(ram_above_4g, NULL, "ram-above-4g", ram,
-                                 pcms->below_4g_mem_size,
-                                 pcms->above_4g_mem_size);
+                                 acpi_conf->below_4g_mem_size,
+                                 acpi_conf->above_4g_mem_size);
         memory_region_add_subregion(system_memory, 0x100000000ULL,
                                     ram_above_4g);
-        e820_add_entry(0x100000000ULL, pcms->above_4g_mem_size, E820_RAM);
+        e820_add_entry(0x100000000ULL, acpi_conf->above_4g_mem_size, E820_RAM);
     }
 
     if (!pcmc->has_reserved_memory &&
@@ -1398,7 +1426,7 @@ void pc_memory_init(PCMachineState *pcms,
         }
 
         machine->device_memory->base =
-            ROUND_UP(0x100000000ULL + pcms->above_4g_mem_size, 1 * GiB);
+            ROUND_UP(0x100000000ULL + acpi_conf->above_4g_mem_size, 1 * GiB);
 
         if (pcmc->enforce_aligned_dimm) {
             /* size device region assuming 1G page max alignment per slot */
@@ -1455,7 +1483,7 @@ void pc_memory_init(PCMachineState *pcms,
     for (i = 0; i < nb_option_roms; i++) {
         rom_add_option(option_rom[i].name, option_rom[i].bootindex);
     }
-    pcms->fw_cfg = fw_cfg;
+    acpi_conf->fw_cfg = fw_cfg;
 
     /* Init default IOAPIC address space */
     pcms->ioapic_as = &address_space_memory;
@@ -1478,7 +1506,8 @@ uint64_t pc_pci_hole64_start(void)
             hole64_start += memory_region_size(&ms->device_memory->mr);
         }
     } else {
-        hole64_start = 0x100000000ULL + pcms->above_4g_mem_size;
+        hole64_start =
+            0x100000000ULL + pcms->acpi_configuration.above_4g_mem_size;
     }
 
     return ROUND_UP(hole64_start, 1 * GiB);
@@ -1685,21 +1714,22 @@ static void pc_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     const PCMachineState *pcms = PC_MACHINE(hotplug_dev);
     const PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    const AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     const bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
     const uint64_t legacy_align = TARGET_PAGE_SIZE;
 
     /*
      * When -no-acpi is used with Q35 machine type, no ACPI is built,
-     * but pcms->acpi_dev is still created. Check !acpi_enabled in
+     * but acpi_dev is still created. Check !acpi_enabled in
      * addition to cover this case.
      */
-    if (!pcms->acpi_dev || !acpi_enabled) {
+    if (!acpi_conf->acpi_dev || !acpi_enabled) {
         error_setg(errp,
                    "memory hotplug is not enabled: missing acpi device or acpi disabled");
         return;
     }
 
-    if (is_nvdimm && !pcms->acpi_nvdimm_state.is_enabled) {
+    if (is_nvdimm && !acpi_conf->acpi_nvdimm_state.is_enabled) {
         error_setg(errp, "nvdimm is not enabled: missing 'nvdimm' in '-M'");
         return;
     }
@@ -1715,6 +1745,7 @@ static void pc_memory_plug(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
     bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     pc_dimm_plug(PC_DIMM(dev), MACHINE(pcms), &local_err);
     if (local_err) {
@@ -1722,11 +1753,11 @@ static void pc_memory_plug(HotplugHandler *hotplug_dev,
     }
 
     if (is_nvdimm) {
-        nvdimm_plug(&pcms->acpi_nvdimm_state);
+        nvdimm_plug(&acpi_conf->acpi_nvdimm_state);
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &error_abort);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->plug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &error_abort);
 out:
     error_propagate(errp, local_err);
 }
@@ -1737,13 +1768,14 @@ static void pc_memory_unplug_request(HotplugHandler *hotplug_dev,
     HotplugHandlerClass *hhc;
     Error *local_err = NULL;
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /*
      * When -no-acpi is used with Q35 machine type, no ACPI is built,
-     * but pcms->acpi_dev is still created. Check !acpi_enabled in
+     * but acpi_dev is still created. Check !acpi_enabled in
      * addition to cover this case.
      */
-    if (!pcms->acpi_dev || !acpi_enabled) {
+    if (!acpi_conf->acpi_dev || !acpi_enabled) {
         error_setg(&local_err,
                    "memory hotplug is not enabled: missing acpi device or acpi disabled");
         goto out;
@@ -1755,8 +1787,8 @@ static void pc_memory_unplug_request(HotplugHandler *hotplug_dev,
         goto out;
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug_request(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
 out:
     error_propagate(errp, local_err);
@@ -1766,11 +1798,12 @@ static void pc_memory_unplug(HotplugHandler *hotplug_dev,
                              DeviceState *dev, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     HotplugHandlerClass *hhc;
     Error *local_err = NULL;
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1817,10 +1850,11 @@ static void pc_cpu_plug(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    if (pcms->acpi_dev) {
-        hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-        hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    if (acpi_conf->acpi_dev) {
+        hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+        hhc->plug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
         if (local_err) {
             goto out;
         }
@@ -1831,8 +1865,8 @@ static void pc_cpu_plug(HotplugHandler *hotplug_dev,
     if (pcms->rtc) {
         rtc_set_cpus_count(pcms->rtc, pcms->boot_cpus);
     }
-    if (pcms->fw_cfg) {
-        fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+    if (acpi_conf->fw_cfg) {
+        fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
     }
 
     found_cpu = pc_find_cpu_slot(MACHINE(pcms), cpu->apic_id, NULL);
@@ -1848,8 +1882,9 @@ static void pc_cpu_unplug_request_cb(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    if (!pcms->acpi_dev) {
+    if (!acpi_conf->acpi_dev) {
         error_setg(&local_err, "CPU hot unplug not supported without ACPI");
         goto out;
     }
@@ -1861,8 +1896,8 @@ static void pc_cpu_unplug_request_cb(HotplugHandler *hotplug_dev,
         goto out;
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug_request(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1881,9 +1916,10 @@ static void pc_cpu_unplug_cb(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1897,7 +1933,7 @@ static void pc_cpu_unplug_cb(HotplugHandler *hotplug_dev,
     pcms->boot_cpus--;
     /* Update the number of CPUs in CMOS */
     rtc_set_cpus_count(pcms->rtc, pcms->boot_cpus);
-    fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+    fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
  out:
     error_propagate(errp, local_err);
 }
@@ -2181,28 +2217,30 @@ static bool pc_machine_get_nvdimm(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    return pcms->acpi_nvdimm_state.is_enabled;
+    return pcms->acpi_configuration.acpi_nvdimm_state.is_enabled;
 }
 
 static void pc_machine_set_nvdimm(Object *obj, bool value, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    pcms->acpi_nvdimm_state.is_enabled = value;
+    acpi_conf->acpi_nvdimm_state.is_enabled = value;
 }
 
 static char *pc_machine_get_nvdimm_persistence(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    return g_strdup(pcms->acpi_nvdimm_state.persistence_string);
+    return g_strdup(acpi_conf->acpi_nvdimm_state.persistence_string);
 }
 
 static void pc_machine_set_nvdimm_persistence(Object *obj, const char *value,
                                                Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
-    AcpiNVDIMMState *nvdimm_state = &pcms->acpi_nvdimm_state;
+    AcpiNVDIMMState *nvdimm_state = &pcms->acpi_configuration.acpi_nvdimm_state;
 
     if (strcmp(value, "cpu") == 0)
         nvdimm_state->persistence = 3;
@@ -2268,7 +2306,7 @@ static void pc_machine_initfn(Object *obj)
     pcms->smm = ON_OFF_AUTO_AUTO;
     pcms->vmport = ON_OFF_AUTO_AUTO;
     /* nvdimm is disabled on default. */
-    pcms->acpi_nvdimm_state.is_enabled = false;
+    pcms->acpi_configuration.acpi_nvdimm_state.is_enabled = false;
     /* acpi build is enabled by default if machine supports it */
     pcms->acpi_build_enabled = PC_MACHINE_GET_CLASS(pcms)->has_acpi_build;
     pcms->smbus = true;
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index dc09466b3e..0620d10715 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -71,6 +71,7 @@ static void pc_init1(MachineState *machine,
 {
     PCMachineState *pcms = PC_MACHINE(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     MemoryRegion *system_memory = get_system_memory();
     MemoryRegion *system_io = get_system_io();
     int i;
@@ -142,11 +143,11 @@ static void pc_init1(MachineState *machine,
         }
 
         if (machine->ram_size >= lowmem) {
-            pcms->above_4g_mem_size = machine->ram_size - lowmem;
-            pcms->below_4g_mem_size = lowmem;
+            acpi_conf->above_4g_mem_size = machine->ram_size - lowmem;
+            acpi_conf->below_4g_mem_size = lowmem;
         } else {
-            pcms->above_4g_mem_size = 0;
-            pcms->below_4g_mem_size = machine->ram_size;
+            acpi_conf->above_4g_mem_size = 0;
+            acpi_conf->below_4g_mem_size = machine->ram_size;
         }
     }
 
@@ -199,8 +200,8 @@ static void pc_init1(MachineState *machine,
                               pci_type,
                               &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
                               system_memory, system_io, machine->ram_size,
-                              pcms->below_4g_mem_size,
-                              pcms->above_4g_mem_size,
+                              acpi_conf->below_4g_mem_size,
+                              acpi_conf->above_4g_mem_size,
                               pci_memory, ram_memory);
         pcms->bus = pci_bus;
     } else {
@@ -289,16 +290,16 @@ static void pc_init1(MachineState *machine,
 
         object_property_add_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
                                  TYPE_HOTPLUG_HANDLER,
-                                 (Object **)&pcms->acpi_dev,
+                                 (Object **)&acpi_conf->acpi_dev,
                                  object_property_allow_set_link,
                                  OBJ_PROP_LINK_STRONG, &error_abort);
         object_property_set_link(OBJECT(machine), OBJECT(piix4_pm),
                                  PC_MACHINE_ACPI_DEVICE_PROP, &error_abort);
     }
 
-    if (pcms->acpi_nvdimm_state.is_enabled) {
-        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
-                               pcms->fw_cfg, OBJECT(pcms));
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&acpi_conf->acpi_nvdimm_state, system_io,
+                               acpi_conf->fw_cfg, OBJECT(pcms));
     }
 }
 
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 532241e3f8..cdde4a4beb 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -63,6 +63,7 @@ static void pc_q35_init(MachineState *machine)
 {
     PCMachineState *pcms = PC_MACHINE(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     Q35PCIHost *q35_host;
     PCIHostState *phb;
     PCIBus *host_bus;
@@ -116,11 +117,11 @@ static void pc_q35_init(MachineState *machine)
     }
 
     if (machine->ram_size >= lowmem) {
-        pcms->above_4g_mem_size = machine->ram_size - lowmem;
-        pcms->below_4g_mem_size = lowmem;
+        acpi_conf->above_4g_mem_size = machine->ram_size - lowmem;
+        acpi_conf->below_4g_mem_size = lowmem;
     } else {
-        pcms->above_4g_mem_size = 0;
-        pcms->below_4g_mem_size = machine->ram_size;
+        acpi_conf->above_4g_mem_size = 0;
+        acpi_conf->below_4g_mem_size = machine->ram_size;
     }
 
     if (xen_enabled()) {
@@ -179,9 +180,9 @@ static void pc_q35_init(MachineState *machine)
                              MCH_HOST_PROP_SYSTEM_MEM, NULL);
     object_property_set_link(OBJECT(q35_host), OBJECT(system_io),
                              MCH_HOST_PROP_IO_MEM, NULL);
-    object_property_set_int(OBJECT(q35_host), pcms->below_4g_mem_size,
+    object_property_set_int(OBJECT(q35_host), acpi_conf->below_4g_mem_size,
                             PCI_HOST_BELOW_4G_MEM_SIZE, NULL);
-    object_property_set_int(OBJECT(q35_host), pcms->above_4g_mem_size,
+    object_property_set_int(OBJECT(q35_host), acpi_conf->above_4g_mem_size,
                             PCI_HOST_ABOVE_4G_MEM_SIZE, NULL);
     /* pci */
     qdev_init_nofail(DEVICE(q35_host));
@@ -194,7 +195,7 @@ static void pc_q35_init(MachineState *machine)
 
     object_property_add_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
                              TYPE_HOTPLUG_HANDLER,
-                             (Object **)&pcms->acpi_dev,
+                             (Object **)&acpi_conf->acpi_dev,
                              object_property_allow_set_link,
                              OBJ_PROP_LINK_STRONG, &error_abort);
     object_property_set_link(OBJECT(machine), OBJECT(lpc),
@@ -276,9 +277,9 @@ static void pc_q35_init(MachineState *machine)
     pc_vga_init(isa_bus, host_bus);
     pc_nic_init(pcmc, isa_bus, host_bus);
 
-    if (pcms->acpi_nvdimm_state.is_enabled) {
-        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
-                               pcms->fw_cfg, OBJECT(pcms));
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&acpi_conf->acpi_nvdimm_state, system_io,
+                               acpi_conf->fw_cfg, OBJECT(pcms));
     }
 }
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 935a3676c8..0459fb7340 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -190,6 +190,7 @@ qemu_irq *xen_interrupt_controller_init(void)
 static void xen_ram_init(PCMachineState *pcms,
                          ram_addr_t ram_size, MemoryRegion **ram_memory_p)
 {
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     MemoryRegion *sysmem = get_system_memory();
     ram_addr_t block_len;
     uint64_t user_lowmem = object_property_get_uint(qdev_get_machine(),
@@ -207,20 +208,20 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 
     if (ram_size >= user_lowmem) {
-        pcms->above_4g_mem_size = ram_size - user_lowmem;
-        pcms->below_4g_mem_size = user_lowmem;
+        acpi_conf->above_4g_mem_size = ram_size - user_lowmem;
+        acpi_conf->below_4g_mem_size = user_lowmem;
     } else {
-        pcms->above_4g_mem_size = 0;
-        pcms->below_4g_mem_size = ram_size;
+        acpi_conf->above_4g_mem_size = 0;
+        acpi_conf->below_4g_mem_size = ram_size;
     }
-    if (!pcms->above_4g_mem_size) {
+    if (!acpi_conf->above_4g_mem_size) {
         block_len = ram_size;
     } else {
         /*
          * Xen does not allocate the memory continuously, it keeps a
          * hole of the size computed above or passed in.
          */
-        block_len = (1ULL << 32) + pcms->above_4g_mem_size;
+        block_len = (1ULL << 32) + acpi_conf->above_4g_mem_size;
     }
     memory_region_init_ram(&ram_memory, NULL, "xen.ram", block_len,
                            &error_fatal);
@@ -237,12 +238,12 @@ static void xen_ram_init(PCMachineState *pcms,
      */
     memory_region_init_alias(&ram_lo, NULL, "xen.ram.lo",
                              &ram_memory, 0xc0000,
-                             pcms->below_4g_mem_size - 0xc0000);
+                             acpi_conf->below_4g_mem_size - 0xc0000);
     memory_region_add_subregion(sysmem, 0xc0000, &ram_lo);
-    if (pcms->above_4g_mem_size > 0) {
+    if (acpi_conf->above_4g_mem_size > 0) {
         memory_region_init_alias(&ram_hi, NULL, "xen.ram.hi",
                                  &ram_memory, 0x100000000ULL,
-                                 pcms->above_4g_mem_size);
+                                 acpi_conf->above_4g_mem_size);
         memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
     }
 }
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
@ 2018-11-05  1:40   ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, Igor Mammedov, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

ACPI tables are platform, machine type, and even architecture
agnostic; as such, we want to provide an internal ACPI API that
depends only on platform-agnostic information.

For the x86 architecture, in order to build ACPI tables independently
of the PC and Q35 machine types, we move a few PCMachineState structure
fields into a machine type agnostic structure called AcpiConfiguration.
The fields we move are listed below, followed by a short illustrative
usage sketch:

   HotplugHandler *acpi_dev;
   AcpiNVDIMMState acpi_nvdimm_state;
   FWCfgState *fw_cfg;
   ram_addr_t below_4g_mem_size, above_4g_mem_size;
   bool apic_xrupt_override;
   unsigned apic_id_limit;
   uint64_t numa_nodes;
   uint64_t *node_mem;
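
Illustrative sketch only (not part of this patch): how a machine type is
expected to fill AcpiConfiguration and hand it to the new acpi_setup()
entry point, mirroring what acpi_conf_pc_init() and pc_machine_done() do
in hw/i386/pc.c below. The example_machine_done() name is a placeholder.

    /*
     * Illustrative sketch -- not part of the patch.
     * Assumes "hw/i386/pc.h" and "hw/i386/acpi-build.h" are included.
     */
    static void example_machine_done(MachineState *machine)
    {
        PCMachineState *pcms = PC_MACHINE(machine);
        PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
        AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;

        /* Machine class level settings (cf. acpi_conf_pc_init()) */
        acpi_conf->legacy_acpi_table_size = pcmc->legacy_acpi_table_size;
        acpi_conf->legacy_cpu_hotplug = pcmc->legacy_cpu_hotplug;
        acpi_conf->rsdp_in_ram = pcmc->rsdp_in_ram;
        acpi_conf->build_state = NULL;

        /*
         * Machine state settings (fw_cfg, acpi_dev, memory sizes,
         * NUMA layout, ...) are expected to have been filled in
         * earlier, during machine initialization.
         */
        if (pcms->acpi_build_enabled && acpi_conf->fw_cfg) {
            acpi_setup(machine, acpi_conf);
        }
    }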

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/i386/acpi-build.h     |   4 +-
 include/hw/acpi/acpi.h   |  44 ++++++++++
 include/hw/i386/pc.h     |  19 ++---
 hw/acpi/cpu_hotplug.c    |   9 +-
 hw/arm/virt-acpi-build.c |  10 ---
 hw/i386/acpi-build.c     | 136 ++++++++++++++----------------
 hw/i386/pc.c             | 176 ++++++++++++++++++++++++---------------
 hw/i386/pc_piix.c        |  21 ++---
 hw/i386/pc_q35.c         |  21 ++---
 hw/i386/xen/xen-hvm.c    |  19 +++--
 10 files changed, 257 insertions(+), 202 deletions(-)

diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
index 007332e51c..065a1d8250 100644
--- a/hw/i386/acpi-build.h
+++ b/hw/i386/acpi-build.h
@@ -2,6 +2,8 @@
 #ifndef HW_I386_ACPI_BUILD_H
 #define HW_I386_ACPI_BUILD_H
 
-void acpi_setup(void);
+#include "hw/acpi/acpi.h"
+
+void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
 
 #endif
diff --git a/include/hw/acpi/acpi.h b/include/hw/acpi/acpi.h
index c20ace0d0b..254c8d0cfc 100644
--- a/include/hw/acpi/acpi.h
+++ b/include/hw/acpi/acpi.h
@@ -24,6 +24,8 @@
 #include "exec/memory.h"
 #include "hw/irq.h"
 #include "hw/acpi/acpi_dev_interface.h"
+#include "hw/hotplug.h"
+#include "hw/mem/nvdimm.h"
 
 /*
  * current device naming scheme supports up to 256 memory devices
@@ -186,6 +188,48 @@ extern int acpi_enabled;
 extern char unsigned *acpi_tables;
 extern size_t acpi_tables_len;
 
+typedef
+struct AcpiBuildState {
+    /* Copy of table in RAM (for patching). */
+    MemoryRegion *table_mr;
+    /* Is table patched? */
+    bool patched;
+    void *rsdp;
+    MemoryRegion *rsdp_mr;
+    MemoryRegion *linker_mr;
+} AcpiBuildState;
+
+typedef
+struct AcpiConfiguration {
+    /* Machine class ACPI settings */
+    int legacy_acpi_table_size;
+    bool rsdp_in_ram;
+    unsigned acpi_data_size;
+
+    /* Machine state ACPI settings */
+    HotplugHandler *acpi_dev;
+    AcpiNVDIMMState acpi_nvdimm_state;
+
+    /*
+     * The fields below are machine settings that
+     * are not ACPI specific. However they are needed
+     * for building ACPI tables and as such should be
+     * carried through the ACPI configuration structure.
+     */
+    bool legacy_cpu_hotplug;
+    bool linuxboot_dma_enabled;
+    FWCfgState *fw_cfg;
+    ram_addr_t below_4g_mem_size, above_4g_mem_size;
+    uint64_t numa_nodes;
+    uint64_t *node_mem;
+    bool apic_xrupt_override;
+    unsigned apic_id_limit;
+    PCIHostState *pci_host;
+
+    /* Build state */
+    AcpiBuildState *build_state;
+} AcpiConfiguration;
+
 uint8_t *acpi_table_first(void);
 uint8_t *acpi_table_next(uint8_t *current);
 unsigned acpi_table_len(void *current);
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 136fe497b6..fed136fcdd 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -12,6 +12,7 @@
 #include "qemu/range.h"
 #include "qemu/bitmap.h"
 #include "sysemu/sysemu.h"
+#include "hw/acpi/acpi.h"
 #include "hw/pci/pci.h"
 #include "hw/compat.h"
 #include "hw/mem/pc-dimm.h"
@@ -35,10 +36,8 @@ struct PCMachineState {
     Notifier machine_done;
 
     /* Pointers to devices and objects: */
-    HotplugHandler *acpi_dev;
     ISADevice *rtc;
     PCIBus *bus;
-    FWCfgState *fw_cfg;
     qemu_irq *gsi;
 
     /* Configuration options: */
@@ -46,28 +45,20 @@ struct PCMachineState {
     OnOffAuto vmport;
     OnOffAuto smm;
 
-    AcpiNVDIMMState acpi_nvdimm_state;
-
     bool acpi_build_enabled;
     bool smbus;
     bool sata;
     bool pit;
 
-    /* RAM information (sizes, addresses, configuration): */
-    ram_addr_t below_4g_mem_size, above_4g_mem_size;
-
-    /* CPU and apic information: */
-    bool apic_xrupt_override;
-    unsigned apic_id_limit;
+    /* CPU information */
     uint16_t boot_cpus;
 
-    /* NUMA information: */
-    uint64_t numa_nodes;
-    uint64_t *node_mem;
-
     /* Address space used by IOAPIC device. All IOAPIC interrupts
      * will be translated to MSI messages in the address space. */
     AddressSpace *ioapic_as;
+
+    /* ACPI configuration */
+    AcpiConfiguration acpi_configuration;
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
diff --git a/hw/acpi/cpu_hotplug.c b/hw/acpi/cpu_hotplug.c
index 5243918125..634dc3b846 100644
--- a/hw/acpi/cpu_hotplug.c
+++ b/hw/acpi/cpu_hotplug.c
@@ -237,9 +237,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
     /* The current AML generator can cover the APIC ID range [0..255],
      * inclusive, for VCPU hotplug. */
     QEMU_BUILD_BUG_ON(ACPI_CPU_HOTPLUG_ID_LIMIT > 256);
-    if (pcms->apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
+    if (pcms->acpi_configuration.apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
         error_report("max_cpus is too large. APIC ID of last CPU is %u",
-                     pcms->apic_id_limit - 1);
+                     pcms->acpi_configuration.apic_id_limit - 1);
         exit(1);
     }
 
@@ -316,8 +316,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
      * with up to 255 elements. Windows guests up to win2k8 fail when
      * VarPackageOp is used.
      */
-    pkg = pcms->apic_id_limit <= 255 ? aml_package(pcms->apic_id_limit) :
-                                       aml_varpackage(pcms->apic_id_limit);
+    pkg = pcms->acpi_configuration.apic_id_limit <= 255 ?
+        aml_package(pcms->acpi_configuration.apic_id_limit) :
+        aml_varpackage(pcms->acpi_configuration.apic_id_limit);
 
     for (i = 0, apic_idx = 0; i < apic_ids->len; i++) {
         int apic_id = apic_ids->cpus[i].arch_id;
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 5785fb697c..f28a2faa53 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -790,16 +790,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     free_aml_allocator();
 }
 
-typedef
-struct AcpiBuildState {
-    /* Copy of table in RAM (for patching). */
-    MemoryRegion *table_mr;
-    MemoryRegion *rsdp_mr;
-    MemoryRegion *linker_mr;
-    /* Is table patched? */
-    bool patched;
-} AcpiBuildState;
-
 static
 void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1599caa7c5..d0362e1382 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -338,13 +338,14 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
 }
 
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
+build_madt(GArray *table_data, BIOSLinker *linker,
+           MachineState *ms, AcpiConfiguration *acpi_conf)
 {
-    MachineClass *mc = MACHINE_GET_CLASS(pcms);
-    const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(ms);
     int madt_start = table_data->len;
-    AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
-    AcpiDeviceIf *adev = ACPI_DEVICE_IF(pcms->acpi_dev);
+    AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_conf->acpi_dev);
+    AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_conf->acpi_dev);
     bool x2apic_mode = false;
 
     AcpiMultipleApicTable *madt;
@@ -370,7 +371,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
     io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
     io_apic->interrupt = cpu_to_le32(0);
 
-    if (pcms->apic_xrupt_override) {
+    if (acpi_conf->apic_xrupt_override) {
         intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
         intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
         intsrcovr->length = sizeof(*intsrcovr);
@@ -1786,13 +1787,12 @@ static Aml *build_q35_osc_method(void)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
-           Range *pci_hole, Range *pci_hole64, MachineState *machine)
+           Range *pci_hole, Range *pci_hole64,
+           MachineState *machine, AcpiConfiguration *acpi_conf)
 {
     CrsRangeEntry *entry;
     Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
     CrsRangeSet crs_range_set;
-    PCMachineState *pcms = PC_MACHINE(machine);
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(machine);
     uint32_t nr_mem = machine->ram_slots;
     int root_bus_limit = 0xFF;
     PCIBus *bus = NULL;
@@ -1836,7 +1836,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         build_q35_pci0_int(dsdt);
     }
 
-    if (pcmc->legacy_cpu_hotplug) {
+    if (acpi_conf->legacy_cpu_hotplug) {
         build_legacy_cpu_hotplug_aml(dsdt, machine, pm->cpu_hp_io_base);
     } else {
         CPUHotplugFeatures opts = {
@@ -1860,7 +1860,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(scope, method);
         }
 
-        if (pcms->acpi_nvdimm_state.is_enabled) {
+        if (acpi_conf->acpi_nvdimm_state.is_enabled) {
             method = aml_method("_E04", 0, AML_NOTSERIALIZED);
             aml_append(method, aml_notify(aml_name("\\_SB.NVDR"),
                                           aml_int(0x80)));
@@ -2041,7 +2041,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
          * with half of the 16-bit control register. Hence, the total size
          * of the i/o region used is FW_CFG_CTL_SIZE; when using DMA, the
          * DMA control register is located at FW_CFG_DMA_IO_BASE + 4 */
-        uint8_t io_size = object_property_get_bool(OBJECT(pcms->fw_cfg),
+        uint8_t io_size = object_property_get_bool(OBJECT(acpi_conf->fw_cfg),
                                                    "dma_enabled", NULL) ?
                           ROUND_UP(FW_CFG_CTL_SIZE, 4) + sizeof(dma_addr_t) :
                           FW_CFG_CTL_SIZE;
@@ -2252,7 +2252,8 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray *tcpalog)
 #define HOLE_640K_END   (1 * MiB)
 
 static void
-build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
+build_srat(GArray *table_data, BIOSLinker *linker,
+           MachineState *machine, AcpiConfiguration *acpi_conf)
 {
     AcpiSystemResourceAffinityTable *srat;
     AcpiSratMemoryAffinity *numamem;
@@ -2262,9 +2263,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     uint64_t mem_len, mem_base, next_base;
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
-    PCMachineState *pcms = PC_MACHINE(machine);
     ram_addr_t hotplugabble_address_space_size =
-        object_property_get_int(OBJECT(pcms), PC_MACHINE_DEVMEM_REGION_SIZE,
+        object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
                                 NULL);
 
     srat_start = table_data->len;
@@ -2306,9 +2306,9 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     next_base = 0;
     numa_start = table_data->len;
 
-    for (i = 1; i < pcms->numa_nodes + 1; ++i) {
+    for (i = 1; i < acpi_conf->numa_nodes + 1; ++i) {
         mem_base = next_base;
-        mem_len = pcms->node_mem[i - 1];
+        mem_len = acpi_conf->node_mem[i - 1];
         next_base = mem_base + mem_len;
 
         /* Cut out the 640K hole */
@@ -2331,16 +2331,16 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
         }
 
         /* Cut out the ACPI_PCI hole */
-        if (mem_base <= pcms->below_4g_mem_size &&
-            next_base > pcms->below_4g_mem_size) {
-            mem_len -= next_base - pcms->below_4g_mem_size;
+        if (mem_base <= acpi_conf->below_4g_mem_size &&
+            next_base > acpi_conf->below_4g_mem_size) {
+            mem_len -= next_base - acpi_conf->below_4g_mem_size;
             if (mem_len > 0) {
                 numamem = acpi_data_push(table_data, sizeof *numamem);
                 build_srat_memory(numamem, mem_base, mem_len, i - 1,
                                   MEM_AFFINITY_ENABLED);
             }
             mem_base = 1ULL << 32;
-            mem_len = next_base - pcms->below_4g_mem_size;
+            mem_len = next_base - acpi_conf->below_4g_mem_size;
             next_base = mem_base + mem_len;
         }
 
@@ -2351,7 +2351,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
         }
     }
     slots = (table_data->len - numa_start) / sizeof *numamem;
-    for (; slots < pcms->numa_nodes + 2; slots++) {
+    for (; slots < acpi_conf->numa_nodes + 2; slots++) {
         numamem = acpi_data_push(table_data, sizeof *numamem);
         build_srat_memory(numamem, 0, 0, 0, MEM_AFFINITY_NOFLAGS);
     }
@@ -2367,7 +2367,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
     if (hotplugabble_address_space_size) {
         numamem = acpi_data_push(table_data, sizeof *numamem);
         build_srat_memory(numamem, machine->device_memory->base,
-                          hotplugabble_address_space_size, pcms->numa_nodes - 1,
+                          hotplugabble_address_space_size,
+                          acpi_conf->numa_nodes - 1,
                           MEM_AFFINITY_HOTPLUGGABLE | MEM_AFFINITY_ENABLED);
     }
 
@@ -2546,17 +2547,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
     return rsdp_table;
 }
 
-typedef
-struct AcpiBuildState {
-    /* Copy of table in RAM (for patching). */
-    MemoryRegion *table_mr;
-    /* Is table patched? */
-    uint8_t patched;
-    void *rsdp;
-    MemoryRegion *rsdp_mr;
-    MemoryRegion *linker_mr;
-} AcpiBuildState;
-
 static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
 {
     Object *pci_host;
@@ -2580,10 +2570,9 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
 }
 
 static
-void acpi_build(AcpiBuildTables *tables, MachineState *machine)
+void acpi_build(AcpiBuildTables *tables,
+                MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    PCMachineState *pcms = PC_MACHINE(machine);
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     GArray *table_offsets;
     unsigned facs, dsdt, rsdt, fadt;
     AcpiPmInfo pm;
@@ -2621,7 +2610,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     /* DSDT is pointed to by FADT */
     dsdt = tables_blob->len;
     build_dsdt(tables_blob, tables->linker, &pm, &misc,
-               &pci_hole, &pci_hole64, machine);
+               &pci_hole, &pci_hole64, machine, acpi_conf);
 
     /* Count the size of the DSDT and SSDT, we will need it for legacy
      * sizing of ACPI tables.
@@ -2639,7 +2628,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     aml_len += tables_blob->len - fadt;
 
     acpi_add_table(table_offsets, tables_blob);
-    build_madt(tables_blob, tables->linker, pcms);
+    build_madt(tables_blob, tables->linker, machine, acpi_conf);
 
     vmgenid_dev = find_vmgenid_dev();
     if (vmgenid_dev) {
@@ -2661,9 +2650,9 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
             build_tpm2(tables_blob, tables->linker, tables->tcpalog);
         }
     }
-    if (pcms->numa_nodes) {
+    if (acpi_conf->numa_nodes) {
         acpi_add_table(table_offsets, tables_blob);
-        build_srat(tables_blob, tables->linker, machine);
+        build_srat(tables_blob, tables->linker, machine, acpi_conf);
         if (have_numa_distance) {
             acpi_add_table(table_offsets, tables_blob);
             build_slit(tables_blob, tables->linker);
@@ -2683,9 +2672,9 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
             build_dmar_q35(tables_blob, tables->linker);
         }
     }
-    if (pcms->acpi_nvdimm_state.is_enabled) {
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
-                          &pcms->acpi_nvdimm_state, machine->ram_slots);
+                          &acpi_conf->acpi_nvdimm_state, machine->ram_slots);
     }
 
     /* Add tables supplied by user (if any) */
@@ -2721,13 +2710,13 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
      *
      * All this is for PIIX4, since QEMU 2.0 didn't support Q35 migration.
      */
-    if (pcmc->legacy_acpi_table_size) {
+    if (acpi_conf->legacy_acpi_table_size) {
         /* Subtracting aml_len gives the size of fixed tables.  Then add the
          * size of the PIIX4 DSDT/SSDT in QEMU 2.0.
          */
         int legacy_aml_len =
-            pcmc->legacy_acpi_table_size +
-            ACPI_BUILD_LEGACY_CPU_AML_SIZE * pcms->apic_id_limit;
+            acpi_conf->legacy_acpi_table_size +
+            ACPI_BUILD_LEGACY_CPU_AML_SIZE * acpi_conf->apic_id_limit;
         int legacy_table_size =
             ROUND_UP(tables_blob->len - aml_len + legacy_aml_len,
                      ACPI_BUILD_ALIGN_SIZE);
@@ -2772,9 +2761,17 @@ static void acpi_ram_update(MemoryRegion *mr, GArray *data)
 
 static void acpi_build_update(void *build_opaque)
 {
-    AcpiBuildState *build_state = build_opaque;
+    AcpiConfiguration *acpi_conf = build_opaque;
+    AcpiBuildState *build_state;
     AcpiBuildTables tables;
 
+    /* No ACPI configuration? Nothing to do. */
+    if (!acpi_conf) {
+        return;
+    }
+
+    build_state = acpi_conf->build_state;
+
     /* No state to update or already patched? Nothing to do. */
     if (!build_state || build_state->patched) {
         return;
@@ -2783,7 +2780,7 @@ static void acpi_build_update(void *build_opaque)
 
     acpi_build_tables_init(&tables);
 
-    acpi_build(&tables, MACHINE(qdev_get_machine()));
+    acpi_build(&tables, MACHINE(qdev_get_machine()), acpi_conf);
 
     acpi_ram_update(build_state->table_mr, tables.table_data);
 
@@ -2803,12 +2800,12 @@ static void acpi_build_reset(void *build_opaque)
     build_state->patched = 0;
 }
 
-static MemoryRegion *acpi_add_rom_blob(AcpiBuildState *build_state,
+static MemoryRegion *acpi_add_rom_blob(AcpiConfiguration *acpi_conf,
                                        GArray *blob, const char *name,
                                        uint64_t max_size)
 {
     return rom_add_blob(name, blob->data, acpi_data_len(blob), max_size, -1,
-                        name, acpi_build_update, build_state, NULL, true);
+                        name, acpi_build_update, acpi_conf, NULL, true);
 }
 
 static const VMStateDescription vmstate_acpi_build = {
@@ -2816,59 +2813,48 @@ static const VMStateDescription vmstate_acpi_build = {
     .version_id = 1,
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
-        VMSTATE_UINT8(patched, AcpiBuildState),
+        VMSTATE_BOOL(patched, AcpiBuildState),
         VMSTATE_END_OF_LIST()
     },
 };
 
-void acpi_setup(void)
+void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
-    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     AcpiBuildTables tables;
     AcpiBuildState *build_state;
     Object *vmgenid_dev;
 
-    if (!pcms->fw_cfg) {
-        ACPI_BUILD_DPRINTF("No fw cfg. Bailing out.\n");
-        return;
-    }
-
-    if (!pcms->acpi_build_enabled) {
-        ACPI_BUILD_DPRINTF("ACPI build disabled. Bailing out.\n");
-        return;
-    }
-
-    if (!acpi_enabled) {
-        ACPI_BUILD_DPRINTF("ACPI disabled. Bailing out.\n");
+    if (!acpi_conf) {
+        ACPI_BUILD_DPRINTF("No ACPI config. Bailing out.\n");
         return;
     }
 
     build_state = g_malloc0(sizeof *build_state);
+    acpi_conf->build_state = build_state;
 
     acpi_build_tables_init(&tables);
-    acpi_build(&tables, MACHINE(pcms));
+    acpi_build(&tables, machine, acpi_conf);
 
     /* Now expose it all to Guest */
-    build_state->table_mr = acpi_add_rom_blob(build_state, tables.table_data,
+    build_state->table_mr = acpi_add_rom_blob(acpi_conf, tables.table_data,
                                                ACPI_BUILD_TABLE_FILE,
                                                ACPI_BUILD_TABLE_MAX_SIZE);
     assert(build_state->table_mr != NULL);
 
     build_state->linker_mr =
-        acpi_add_rom_blob(build_state, tables.linker->cmd_blob,
+        acpi_add_rom_blob(acpi_conf, tables.linker->cmd_blob,
                           "etc/table-loader", 0);
 
-    fw_cfg_add_file(pcms->fw_cfg, ACPI_BUILD_TPMLOG_FILE,
+    fw_cfg_add_file(acpi_conf->fw_cfg, ACPI_BUILD_TPMLOG_FILE,
                     tables.tcpalog->data, acpi_data_len(tables.tcpalog));
 
     vmgenid_dev = find_vmgenid_dev();
     if (vmgenid_dev) {
-        vmgenid_add_fw_cfg(VMGENID(vmgenid_dev), pcms->fw_cfg,
+        vmgenid_add_fw_cfg(VMGENID(vmgenid_dev), acpi_conf->fw_cfg,
                            tables.vmgenid);
     }
 
-    if (!pcmc->rsdp_in_ram) {
+    if (!acpi_conf->rsdp_in_ram) {
         /*
          * Keep for compatibility with old machine types.
          * Though RSDP is small, its contents isn't immutable, so
@@ -2877,13 +2863,13 @@ void acpi_setup(void)
         uint32_t rsdp_size = acpi_data_len(tables.rsdp);
 
         build_state->rsdp = g_memdup(tables.rsdp->data, rsdp_size);
-        fw_cfg_add_file_callback(pcms->fw_cfg, ACPI_BUILD_RSDP_FILE,
-                                 acpi_build_update, NULL, build_state,
+        fw_cfg_add_file_callback(acpi_conf->fw_cfg, ACPI_BUILD_RSDP_FILE,
+                                 acpi_build_update, NULL, acpi_conf,
                                  build_state->rsdp, rsdp_size, true);
         build_state->rsdp_mr = NULL;
     } else {
         build_state->rsdp = NULL;
-        build_state->rsdp_mr = acpi_add_rom_blob(build_state, tables.rsdp,
+        build_state->rsdp_mr = acpi_add_rom_blob(acpi_conf, tables.rsdp,
                                                   ACPI_BUILD_RSDP_FILE, 0);
     }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index f095725dba..090f969933 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -444,17 +444,18 @@ void pc_cmos_init(PCMachineState *pcms,
 {
     int val;
     static pc_cmos_init_late_arg arg;
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /* various important CMOS locations needed by PC/Bochs bios */
 
     /* memory size */
     /* base memory (first MiB) */
-    val = MIN(pcms->below_4g_mem_size / KiB, 640);
+    val = MIN(acpi_conf->below_4g_mem_size / KiB, 640);
     rtc_set_memory(s, 0x15, val);
     rtc_set_memory(s, 0x16, val >> 8);
     /* extended memory (next 64MiB) */
-    if (pcms->below_4g_mem_size > 1 * MiB) {
-        val = (pcms->below_4g_mem_size - 1 * MiB) / KiB;
+    if (acpi_conf->below_4g_mem_size > 1 * MiB) {
+        val = (acpi_conf->below_4g_mem_size - 1 * MiB) / KiB;
     } else {
         val = 0;
     }
@@ -465,8 +466,8 @@ void pc_cmos_init(PCMachineState *pcms,
     rtc_set_memory(s, 0x30, val);
     rtc_set_memory(s, 0x31, val >> 8);
     /* memory between 16MiB and 4GiB */
-    if (pcms->below_4g_mem_size > 16 * MiB) {
-        val = (pcms->below_4g_mem_size - 16 * MiB) / (64 * KiB);
+    if (acpi_conf->below_4g_mem_size > 16 * MiB) {
+        val = (acpi_conf->below_4g_mem_size - 16 * MiB) / (64 * KiB);
     } else {
         val = 0;
     }
@@ -475,7 +476,7 @@ void pc_cmos_init(PCMachineState *pcms,
     rtc_set_memory(s, 0x34, val);
     rtc_set_memory(s, 0x35, val >> 8);
     /* memory above 4GiB */
-    val = pcms->above_4g_mem_size / 65536;
+    val = acpi_conf->above_4g_mem_size / 65536;
     rtc_set_memory(s, 0x5b, val);
     rtc_set_memory(s, 0x5c, val >> 8);
     rtc_set_memory(s, 0x5d, val >> 16);
@@ -714,13 +715,14 @@ static void pc_build_smbios(PCMachineState *pcms)
     unsigned i, array_count;
     MachineState *ms = MACHINE(pcms);
     X86CPU *cpu = X86_CPU(ms->possible_cpus->cpus[0].cpu);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /* tell smbios about cpuid version and features */
     smbios_set_cpuid(cpu->env.cpuid_version, cpu->env.features[FEAT_1_EDX]);
 
     smbios_tables = smbios_get_table_legacy(&smbios_tables_len);
     if (smbios_tables) {
-        fw_cfg_add_bytes(pcms->fw_cfg, FW_CFG_SMBIOS_ENTRIES,
+        fw_cfg_add_bytes(acpi_conf->fw_cfg, FW_CFG_SMBIOS_ENTRIES,
                          smbios_tables, smbios_tables_len);
     }
 
@@ -741,9 +743,9 @@ static void pc_build_smbios(PCMachineState *pcms)
     g_free(mem_array);
 
     if (smbios_anchor) {
-        fw_cfg_add_file(pcms->fw_cfg, "etc/smbios/smbios-tables",
+        fw_cfg_add_file(acpi_conf->fw_cfg, "etc/smbios/smbios-tables",
                         smbios_tables, smbios_tables_len);
-        fw_cfg_add_file(pcms->fw_cfg, "etc/smbios/smbios-anchor",
+        fw_cfg_add_file(acpi_conf->fw_cfg, "etc/smbios/smbios-anchor",
                         smbios_anchor, smbios_anchor_len);
     }
 }
@@ -755,6 +757,7 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
     int i;
     const CPUArchIdList *cpus;
     MachineClass *mc = MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     fw_cfg = fw_cfg_init_io_dma(FW_CFG_IO_BASE, FW_CFG_IO_BASE + 4, as);
     fw_cfg_add_i16(fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
@@ -771,7 +774,7 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
      * So for compatibility reasons with old BIOSes we are stuck with
      * "etc/max-cpus" actually being apic_id_limit
      */
-    fw_cfg_add_i16(fw_cfg, FW_CFG_MAX_CPUS, (uint16_t)pcms->apic_id_limit);
+    fw_cfg_add_i16(fw_cfg, FW_CFG_MAX_CPUS, (uint16_t)acpi_conf->apic_id_limit);
     fw_cfg_add_i64(fw_cfg, FW_CFG_RAM_SIZE, (uint64_t)ram_size);
     fw_cfg_add_bytes(fw_cfg, FW_CFG_ACPI_TABLES,
                      acpi_tables, acpi_tables_len);
@@ -787,20 +790,21 @@ static FWCfgState *bochs_bios_init(AddressSpace *as, PCMachineState *pcms)
      * of nodes, one word for each VCPU->node and one word for each node to
      * hold the amount of memory.
      */
-    numa_fw_cfg = g_new0(uint64_t, 1 + pcms->apic_id_limit + nb_numa_nodes);
+    numa_fw_cfg = g_new0(uint64_t,
+                         1 + acpi_conf->apic_id_limit + nb_numa_nodes);
     numa_fw_cfg[0] = cpu_to_le64(nb_numa_nodes);
     cpus = mc->possible_cpu_arch_ids(MACHINE(pcms));
     for (i = 0; i < cpus->len; i++) {
         unsigned int apic_id = cpus->cpus[i].arch_id;
-        assert(apic_id < pcms->apic_id_limit);
+        assert(apic_id < acpi_conf->apic_id_limit);
         numa_fw_cfg[apic_id + 1] = cpu_to_le64(cpus->cpus[i].props.node_id);
     }
     for (i = 0; i < nb_numa_nodes; i++) {
-        numa_fw_cfg[pcms->apic_id_limit + 1 + i] =
+        numa_fw_cfg[acpi_conf->apic_id_limit + 1 + i] =
             cpu_to_le64(numa_info[i].node_mem);
     }
     fw_cfg_add_bytes(fw_cfg, FW_CFG_NUMA, numa_fw_cfg,
-                     (1 + pcms->apic_id_limit + nb_numa_nodes) *
+                     (1 + acpi_conf->apic_id_limit + nb_numa_nodes) *
                      sizeof(*numa_fw_cfg));
 
     return fw_cfg;
@@ -848,6 +852,7 @@ static void load_linux(PCMachineState *pcms,
     char *vmode;
     MachineState *machine = MACHINE(pcms);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     struct setup_data *setup_data;
     const char *kernel_filename = machine->kernel_filename;
     const char *initrd_filename = machine->initrd_filename;
@@ -917,8 +922,8 @@ static void load_linux(PCMachineState *pcms,
         initrd_max = 0x37ffffff;
     }
 
-    if (initrd_max >= pcms->below_4g_mem_size - pcmc->acpi_data_size) {
-        initrd_max = pcms->below_4g_mem_size - pcmc->acpi_data_size - 1;
+    if (initrd_max >= acpi_conf->below_4g_mem_size - pcmc->acpi_data_size) {
+        initrd_max = acpi_conf->below_4g_mem_size - pcmc->acpi_data_size - 1;
     }
 
     fw_cfg_add_i32(fw_cfg, FW_CFG_CMDLINE_ADDR, cmdline_addr);
@@ -1154,7 +1159,8 @@ void pc_cpus_init(PCMachineState *pcms)
      *
      * This is used for FW_CFG_MAX_CPUS. See comments on bochs_bios_init().
      */
-    pcms->apic_id_limit = x86_cpu_apic_id_from_index(max_cpus - 1) + 1;
+    pcms->acpi_configuration.apic_id_limit =
+        x86_cpu_apic_id_from_index(max_cpus - 1) + 1;
     possible_cpus = mc->possible_cpu_arch_ids(ms);
     for (i = 0; i < smp_cpus; i++) {
         pc_new_cpu(possible_cpus->cpus[i].type, possible_cpus->cpus[i].arch_id,
@@ -1188,7 +1194,8 @@ static void pc_build_feature_control_file(PCMachineState *pcms)
 
     val = g_malloc(sizeof(*val));
     *val = cpu_to_le64(feature_control_bits | FEATURE_CONTROL_LOCKED);
-    fw_cfg_add_file(pcms->fw_cfg, "etc/msr_feature_control", val, sizeof(*val));
+    fw_cfg_add_file(pcms->acpi_configuration.fw_cfg,
+                    "etc/msr_feature_control", val, sizeof(*val));
 }
 
 static void rtc_set_cpus_count(ISADevice *rtc, uint16_t cpus_count)
@@ -1204,11 +1211,26 @@ static void rtc_set_cpus_count(ISADevice *rtc, uint16_t cpus_count)
     }
 }
 
+static void acpi_conf_pc_init(PCMachineState *pcms)
+{
+    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
+
+    /* Machine class settings */
+    acpi_conf->legacy_acpi_table_size = pcmc->legacy_acpi_table_size;
+    acpi_conf->legacy_cpu_hotplug = pcmc->legacy_cpu_hotplug;
+    acpi_conf->rsdp_in_ram = pcmc->rsdp_in_ram;
+
+    /* ACPI build state */
+    acpi_conf->build_state = NULL;
+}
+
 static
 void pc_machine_done(Notifier *notifier, void *data)
 {
     PCMachineState *pcms = container_of(notifier,
                                         PCMachineState, machine_done);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     PCIBus *bus = pcms->bus;
 
     /* set the number of CPUs */
@@ -1223,23 +1245,27 @@ void pc_machine_done(Notifier *notifier, void *data)
                 extra_hosts++;
             }
         }
-        if (extra_hosts && pcms->fw_cfg) {
+        if (extra_hosts && acpi_conf->fw_cfg) {
             uint64_t *val = g_malloc(sizeof(*val));
             *val = cpu_to_le64(extra_hosts);
-            fw_cfg_add_file(pcms->fw_cfg,
+            fw_cfg_add_file(acpi_conf->fw_cfg,
                     "etc/extra-pci-roots", val, sizeof(*val));
         }
     }
 
-    acpi_setup();
-    if (pcms->fw_cfg) {
+    if (pcms->acpi_build_enabled) {
+        acpi_conf_pc_init(pcms);
+        acpi_setup(MACHINE(pcms), acpi_conf);
+    }
+
+    if (acpi_conf->fw_cfg) {
         pc_build_smbios(pcms);
         pc_build_feature_control_file(pcms);
         /* update FW_CFG_NB_CPUS to account for -device added CPUs */
-        fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+        fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
     }
 
-    if (pcms->apic_id_limit > 255 && !xen_enabled()) {
+    if (acpi_conf->apic_id_limit > 255 && !xen_enabled()) {
         IntelIOMMUState *iommu = INTEL_IOMMU_DEVICE(x86_iommu_get_default());
 
         if (!iommu || !iommu->x86_iommu.intr_supported ||
@@ -1256,13 +1282,14 @@ void pc_machine_done(Notifier *notifier, void *data)
 void pc_guest_info_init(PCMachineState *pcms)
 {
     int i;
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    pcms->apic_xrupt_override = kvm_allows_irq0_override();
-    pcms->numa_nodes = nb_numa_nodes;
-    pcms->node_mem = g_malloc0(pcms->numa_nodes *
-                                    sizeof *pcms->node_mem);
+    acpi_conf->apic_xrupt_override = kvm_allows_irq0_override();
+    acpi_conf->numa_nodes = nb_numa_nodes;
+    acpi_conf->node_mem = g_malloc0(acpi_conf->numa_nodes *
+                                    sizeof *acpi_conf->node_mem);
     for (i = 0; i < nb_numa_nodes; i++) {
-        pcms->node_mem[i] = numa_info[i].node_mem;
+        acpi_conf->node_mem[i] = numa_info[i].node_mem;
     }
 
     pcms->machine_done.notify = pc_machine_done;
@@ -1323,7 +1350,7 @@ void xen_load_linux(PCMachineState *pcms)
                !strcmp(option_rom[i].name, "multiboot.bin"));
         rom_add_option(option_rom[i].name, option_rom[i].bootindex);
     }
-    pcms->fw_cfg = fw_cfg;
+    pcms->acpi_configuration.fw_cfg = fw_cfg;
 }
 
 void pc_memory_init(PCMachineState *pcms,
@@ -1337,9 +1364,10 @@ void pc_memory_init(PCMachineState *pcms,
     FWCfgState *fw_cfg;
     MachineState *machine = MACHINE(pcms);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    assert(machine->ram_size == pcms->below_4g_mem_size +
-                                pcms->above_4g_mem_size);
+    assert(machine->ram_size == acpi_conf->below_4g_mem_size +
+                                acpi_conf->above_4g_mem_size);
 
     linux_boot = (machine->kernel_filename != NULL);
 
@@ -1353,17 +1381,17 @@ void pc_memory_init(PCMachineState *pcms,
     *ram_memory = ram;
     ram_below_4g = g_malloc(sizeof(*ram_below_4g));
     memory_region_init_alias(ram_below_4g, NULL, "ram-below-4g", ram,
-                             0, pcms->below_4g_mem_size);
+                             0, acpi_conf->below_4g_mem_size);
     memory_region_add_subregion(system_memory, 0, ram_below_4g);
-    e820_add_entry(0, pcms->below_4g_mem_size, E820_RAM);
-    if (pcms->above_4g_mem_size > 0) {
+    e820_add_entry(0, acpi_conf->below_4g_mem_size, E820_RAM);
+    if (acpi_conf->above_4g_mem_size > 0) {
         ram_above_4g = g_malloc(sizeof(*ram_above_4g));
         memory_region_init_alias(ram_above_4g, NULL, "ram-above-4g", ram,
-                                 pcms->below_4g_mem_size,
-                                 pcms->above_4g_mem_size);
+                                 acpi_conf->below_4g_mem_size,
+                                 acpi_conf->above_4g_mem_size);
         memory_region_add_subregion(system_memory, 0x100000000ULL,
                                     ram_above_4g);
-        e820_add_entry(0x100000000ULL, pcms->above_4g_mem_size, E820_RAM);
+        e820_add_entry(0x100000000ULL, acpi_conf->above_4g_mem_size, E820_RAM);
     }
 
     if (!pcmc->has_reserved_memory &&
@@ -1398,7 +1426,7 @@ void pc_memory_init(PCMachineState *pcms,
         }
 
         machine->device_memory->base =
-            ROUND_UP(0x100000000ULL + pcms->above_4g_mem_size, 1 * GiB);
+            ROUND_UP(0x100000000ULL + acpi_conf->above_4g_mem_size, 1 * GiB);
 
         if (pcmc->enforce_aligned_dimm) {
             /* size device region assuming 1G page max alignment per slot */
@@ -1455,7 +1483,7 @@ void pc_memory_init(PCMachineState *pcms,
     for (i = 0; i < nb_option_roms; i++) {
         rom_add_option(option_rom[i].name, option_rom[i].bootindex);
     }
-    pcms->fw_cfg = fw_cfg;
+    acpi_conf->fw_cfg = fw_cfg;
 
     /* Init default IOAPIC address space */
     pcms->ioapic_as = &address_space_memory;
@@ -1478,7 +1506,8 @@ uint64_t pc_pci_hole64_start(void)
             hole64_start += memory_region_size(&ms->device_memory->mr);
         }
     } else {
-        hole64_start = 0x100000000ULL + pcms->above_4g_mem_size;
+        hole64_start =
+            0x100000000ULL + pcms->acpi_configuration.above_4g_mem_size;
     }
 
     return ROUND_UP(hole64_start, 1 * GiB);
@@ -1685,21 +1714,22 @@ static void pc_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     const PCMachineState *pcms = PC_MACHINE(hotplug_dev);
     const PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    const AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     const bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
     const uint64_t legacy_align = TARGET_PAGE_SIZE;
 
     /*
      * When -no-acpi is used with Q35 machine type, no ACPI is built,
-     * but pcms->acpi_dev is still created. Check !acpi_enabled in
+     * but acpi_dev is still created. Check !acpi_enabled in
      * addition to cover this case.
      */
-    if (!pcms->acpi_dev || !acpi_enabled) {
+    if (!acpi_conf->acpi_dev || !acpi_enabled) {
         error_setg(errp,
                    "memory hotplug is not enabled: missing acpi device or acpi disabled");
         return;
     }
 
-    if (is_nvdimm && !pcms->acpi_nvdimm_state.is_enabled) {
+    if (is_nvdimm && !acpi_conf->acpi_nvdimm_state.is_enabled) {
         error_setg(errp, "nvdimm is not enabled: missing 'nvdimm' in '-M'");
         return;
     }
@@ -1715,6 +1745,7 @@ static void pc_memory_plug(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
     bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     pc_dimm_plug(PC_DIMM(dev), MACHINE(pcms), &local_err);
     if (local_err) {
@@ -1722,11 +1753,11 @@ static void pc_memory_plug(HotplugHandler *hotplug_dev,
     }
 
     if (is_nvdimm) {
-        nvdimm_plug(&pcms->acpi_nvdimm_state);
+        nvdimm_plug(&acpi_conf->acpi_nvdimm_state);
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &error_abort);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->plug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &error_abort);
 out:
     error_propagate(errp, local_err);
 }
@@ -1737,13 +1768,14 @@ static void pc_memory_unplug_request(HotplugHandler *hotplug_dev,
     HotplugHandlerClass *hhc;
     Error *local_err = NULL;
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
     /*
      * When -no-acpi is used with Q35 machine type, no ACPI is built,
-     * but pcms->acpi_dev is still created. Check !acpi_enabled in
+     * but acpi_dev is still created. Check !acpi_enabled in
      * addition to cover this case.
      */
-    if (!pcms->acpi_dev || !acpi_enabled) {
+    if (!acpi_conf->acpi_dev || !acpi_enabled) {
         error_setg(&local_err,
                    "memory hotplug is not enabled: missing acpi device or acpi disabled");
         goto out;
@@ -1755,8 +1787,8 @@ static void pc_memory_unplug_request(HotplugHandler *hotplug_dev,
         goto out;
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug_request(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
 out:
     error_propagate(errp, local_err);
@@ -1766,11 +1798,12 @@ static void pc_memory_unplug(HotplugHandler *hotplug_dev,
                              DeviceState *dev, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     HotplugHandlerClass *hhc;
     Error *local_err = NULL;
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1817,10 +1850,11 @@ static void pc_cpu_plug(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    if (pcms->acpi_dev) {
-        hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-        hhc->plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    if (acpi_conf->acpi_dev) {
+        hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+        hhc->plug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
         if (local_err) {
             goto out;
         }
@@ -1831,8 +1865,8 @@ static void pc_cpu_plug(HotplugHandler *hotplug_dev,
     if (pcms->rtc) {
         rtc_set_cpus_count(pcms->rtc, pcms->boot_cpus);
     }
-    if (pcms->fw_cfg) {
-        fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+    if (acpi_conf->fw_cfg) {
+        fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
     }
 
     found_cpu = pc_find_cpu_slot(MACHINE(pcms), cpu->apic_id, NULL);
@@ -1848,8 +1882,9 @@ static void pc_cpu_unplug_request_cb(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    if (!pcms->acpi_dev) {
+    if (!acpi_conf->acpi_dev) {
         error_setg(&local_err, "CPU hot unplug not supported without ACPI");
         goto out;
     }
@@ -1861,8 +1896,8 @@ static void pc_cpu_unplug_request_cb(HotplugHandler *hotplug_dev,
         goto out;
     }
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug_request(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1881,9 +1916,10 @@ static void pc_cpu_unplug_cb(HotplugHandler *hotplug_dev,
     Error *local_err = NULL;
     X86CPU *cpu = X86_CPU(dev);
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
-    hhc->unplug(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
+    hhc = HOTPLUG_HANDLER_GET_CLASS(acpi_conf->acpi_dev);
+    hhc->unplug(HOTPLUG_HANDLER(acpi_conf->acpi_dev), dev, &local_err);
 
     if (local_err) {
         goto out;
@@ -1897,7 +1933,7 @@ static void pc_cpu_unplug_cb(HotplugHandler *hotplug_dev,
     pcms->boot_cpus--;
     /* Update the number of CPUs in CMOS */
     rtc_set_cpus_count(pcms->rtc, pcms->boot_cpus);
-    fw_cfg_modify_i16(pcms->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
+    fw_cfg_modify_i16(acpi_conf->fw_cfg, FW_CFG_NB_CPUS, pcms->boot_cpus);
  out:
     error_propagate(errp, local_err);
 }
@@ -2181,28 +2217,30 @@ static bool pc_machine_get_nvdimm(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    return pcms->acpi_nvdimm_state.is_enabled;
+    return pcms->acpi_configuration.acpi_nvdimm_state.is_enabled;
 }
 
 static void pc_machine_set_nvdimm(Object *obj, bool value, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    pcms->acpi_nvdimm_state.is_enabled = value;
+    acpi_conf->acpi_nvdimm_state.is_enabled = value;
 }
 
 static char *pc_machine_get_nvdimm_persistence(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
 
-    return g_strdup(pcms->acpi_nvdimm_state.persistence_string);
+    return g_strdup(acpi_conf->acpi_nvdimm_state.persistence_string);
 }
 
 static void pc_machine_set_nvdimm_persistence(Object *obj, const char *value,
                                                Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
-    AcpiNVDIMMState *nvdimm_state = &pcms->acpi_nvdimm_state;
+    AcpiNVDIMMState *nvdimm_state = &pcms->acpi_configuration.acpi_nvdimm_state;
 
     if (strcmp(value, "cpu") == 0)
         nvdimm_state->persistence = 3;
@@ -2268,7 +2306,7 @@ static void pc_machine_initfn(Object *obj)
     pcms->smm = ON_OFF_AUTO_AUTO;
     pcms->vmport = ON_OFF_AUTO_AUTO;
     /* nvdimm is disabled on default. */
-    pcms->acpi_nvdimm_state.is_enabled = false;
+    pcms->acpi_configuration.acpi_nvdimm_state.is_enabled = false;
     /* acpi build is enabled by default if machine supports it */
     pcms->acpi_build_enabled = PC_MACHINE_GET_CLASS(pcms)->has_acpi_build;
     pcms->smbus = true;
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index dc09466b3e..0620d10715 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -71,6 +71,7 @@ static void pc_init1(MachineState *machine,
 {
     PCMachineState *pcms = PC_MACHINE(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     MemoryRegion *system_memory = get_system_memory();
     MemoryRegion *system_io = get_system_io();
     int i;
@@ -142,11 +143,11 @@ static void pc_init1(MachineState *machine,
         }
 
         if (machine->ram_size >= lowmem) {
-            pcms->above_4g_mem_size = machine->ram_size - lowmem;
-            pcms->below_4g_mem_size = lowmem;
+            acpi_conf->above_4g_mem_size = machine->ram_size - lowmem;
+            acpi_conf->below_4g_mem_size = lowmem;
         } else {
-            pcms->above_4g_mem_size = 0;
-            pcms->below_4g_mem_size = machine->ram_size;
+            acpi_conf->above_4g_mem_size = 0;
+            acpi_conf->below_4g_mem_size = machine->ram_size;
         }
     }
 
@@ -199,8 +200,8 @@ static void pc_init1(MachineState *machine,
                               pci_type,
                               &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
                               system_memory, system_io, machine->ram_size,
-                              pcms->below_4g_mem_size,
-                              pcms->above_4g_mem_size,
+                              acpi_conf->below_4g_mem_size,
+                              acpi_conf->above_4g_mem_size,
                               pci_memory, ram_memory);
         pcms->bus = pci_bus;
     } else {
@@ -289,16 +290,16 @@ static void pc_init1(MachineState *machine,
 
         object_property_add_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
                                  TYPE_HOTPLUG_HANDLER,
-                                 (Object **)&pcms->acpi_dev,
+                                 (Object **)&acpi_conf->acpi_dev,
                                  object_property_allow_set_link,
                                  OBJ_PROP_LINK_STRONG, &error_abort);
         object_property_set_link(OBJECT(machine), OBJECT(piix4_pm),
                                  PC_MACHINE_ACPI_DEVICE_PROP, &error_abort);
     }
 
-    if (pcms->acpi_nvdimm_state.is_enabled) {
-        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
-                               pcms->fw_cfg, OBJECT(pcms));
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&acpi_conf->acpi_nvdimm_state, system_io,
+                               acpi_conf->fw_cfg, OBJECT(pcms));
     }
 }
 
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 532241e3f8..cdde4a4beb 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -63,6 +63,7 @@ static void pc_q35_init(MachineState *machine)
 {
     PCMachineState *pcms = PC_MACHINE(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     Q35PCIHost *q35_host;
     PCIHostState *phb;
     PCIBus *host_bus;
@@ -116,11 +117,11 @@ static void pc_q35_init(MachineState *machine)
     }
 
     if (machine->ram_size >= lowmem) {
-        pcms->above_4g_mem_size = machine->ram_size - lowmem;
-        pcms->below_4g_mem_size = lowmem;
+        acpi_conf->above_4g_mem_size = machine->ram_size - lowmem;
+        acpi_conf->below_4g_mem_size = lowmem;
     } else {
-        pcms->above_4g_mem_size = 0;
-        pcms->below_4g_mem_size = machine->ram_size;
+        acpi_conf->above_4g_mem_size = 0;
+        acpi_conf->below_4g_mem_size = machine->ram_size;
     }
 
     if (xen_enabled()) {
@@ -179,9 +180,9 @@ static void pc_q35_init(MachineState *machine)
                              MCH_HOST_PROP_SYSTEM_MEM, NULL);
     object_property_set_link(OBJECT(q35_host), OBJECT(system_io),
                              MCH_HOST_PROP_IO_MEM, NULL);
-    object_property_set_int(OBJECT(q35_host), pcms->below_4g_mem_size,
+    object_property_set_int(OBJECT(q35_host), acpi_conf->below_4g_mem_size,
                             PCI_HOST_BELOW_4G_MEM_SIZE, NULL);
-    object_property_set_int(OBJECT(q35_host), pcms->above_4g_mem_size,
+    object_property_set_int(OBJECT(q35_host), acpi_conf->above_4g_mem_size,
                             PCI_HOST_ABOVE_4G_MEM_SIZE, NULL);
     /* pci */
     qdev_init_nofail(DEVICE(q35_host));
@@ -194,7 +195,7 @@ static void pc_q35_init(MachineState *machine)
 
     object_property_add_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
                              TYPE_HOTPLUG_HANDLER,
-                             (Object **)&pcms->acpi_dev,
+                             (Object **)&acpi_conf->acpi_dev,
                              object_property_allow_set_link,
                              OBJ_PROP_LINK_STRONG, &error_abort);
     object_property_set_link(OBJECT(machine), OBJECT(lpc),
@@ -276,9 +277,9 @@ static void pc_q35_init(MachineState *machine)
     pc_vga_init(isa_bus, host_bus);
     pc_nic_init(pcmc, isa_bus, host_bus);
 
-    if (pcms->acpi_nvdimm_state.is_enabled) {
-        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
-                               pcms->fw_cfg, OBJECT(pcms));
+    if (acpi_conf->acpi_nvdimm_state.is_enabled) {
+        nvdimm_init_acpi_state(&acpi_conf->acpi_nvdimm_state, system_io,
+                               acpi_conf->fw_cfg, OBJECT(pcms));
     }
 }
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 935a3676c8..0459fb7340 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -190,6 +190,7 @@ qemu_irq *xen_interrupt_controller_init(void)
 static void xen_ram_init(PCMachineState *pcms,
                          ram_addr_t ram_size, MemoryRegion **ram_memory_p)
 {
+    AcpiConfiguration *acpi_conf = &pcms->acpi_configuration;
     MemoryRegion *sysmem = get_system_memory();
     ram_addr_t block_len;
     uint64_t user_lowmem = object_property_get_uint(qdev_get_machine(),
@@ -207,20 +208,20 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 
     if (ram_size >= user_lowmem) {
-        pcms->above_4g_mem_size = ram_size - user_lowmem;
-        pcms->below_4g_mem_size = user_lowmem;
+        acpi_conf->above_4g_mem_size = ram_size - user_lowmem;
+        acpi_conf->below_4g_mem_size = user_lowmem;
     } else {
-        pcms->above_4g_mem_size = 0;
-        pcms->below_4g_mem_size = ram_size;
+        acpi_conf->above_4g_mem_size = 0;
+        acpi_conf->below_4g_mem_size = ram_size;
     }
-    if (!pcms->above_4g_mem_size) {
+    if (!acpi_conf->above_4g_mem_size) {
         block_len = ram_size;
     } else {
         /*
          * Xen does not allocate the memory continuously, it keeps a
          * hole of the size computed above or passed in.
          */
-        block_len = (1ULL << 32) + pcms->above_4g_mem_size;
+        block_len = (1ULL << 32) + acpi_conf->above_4g_mem_size;
     }
     memory_region_init_ram(&ram_memory, NULL, "xen.ram", block_len,
                            &error_fatal);
@@ -237,12 +238,12 @@ static void xen_ram_init(PCMachineState *pcms,
      */
     memory_region_init_alias(&ram_lo, NULL, "xen.ram.lo",
                              &ram_memory, 0xc0000,
-                             pcms->below_4g_mem_size - 0xc0000);
+                             acpi_conf->below_4g_mem_size - 0xc0000);
     memory_region_add_subregion(sysmem, 0xc0000, &ram_lo);
-    if (pcms->above_4g_mem_size > 0) {
+    if (acpi_conf->above_4g_mem_size > 0) {
         memory_region_init_alias(&ram_hi, NULL, "xen.ram.hi",
                                  &ram_memory, 0x100000000ULL,
-                                 pcms->above_4g_mem_size);
+                                 acpi_conf->above_4g_mem_size);
         memory_region_add_subregion(sysmem, 0x100000000ULL, &ram_hi);
     }
 }
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

This is going to be needed by the Hardware-reduced ACPI routines.
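
A hardware-reduced build path would use it the same way acpi-build.c
does today, e.g. (sketch only, the blob names below are placeholders):

    /* Keep blob sizes stable across QEMU versions (4KiB, as on PC) */
    acpi_align_size(tables_blob, 0x1000);
    acpi_align_size(tables->linker->cmd_blob, 0x1000);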

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/aml-build.h | 2 ++
 hw/acpi/aml-build.c         | 8 ++++++++
 hw/i386/acpi-build.c        | 8 --------
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 6c36903c0a..73fc6659f2 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -384,6 +384,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
              const char *oem_id, const char *oem_table_id);
 void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
+/* Align AML blob size to a multiple of 'align' */
+void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 1e43cd736d..51b608432f 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1565,6 +1565,14 @@ unsigned acpi_data_len(GArray *table)
     return table->len;
 }
 
+void acpi_align_size(GArray *blob, unsigned align)
+{
+    /* Align size to multiple of given size. This reduces the chance
+     * we need to change size in the future (breaking cross version migration).
+     */
+    g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
+}
+
 void acpi_add_table(GArray *table_offsets, GArray *table_data)
 {
     uint32_t offset = table_data->len;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index d0362e1382..81d98fa34f 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -282,14 +282,6 @@ static void acpi_get_pci_holes(Range *hole, Range *hole64)
                                                NULL));
 }
 
-static void acpi_align_size(GArray *blob, unsigned align)
-{
-    /* Align size to multiple of given size. This reduces the chance
-     * we need to change size in the future (breaking cross version migration).
-     */
-    g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
-}
-
 /* FACS */
 static void
 build_facs(GArray *table_data, BIOSLinker *linker)
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

For both x86 and ARM architectures, the internal RSDP build API can
return void as the current return value is unused.
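
Both call sites already ignore the returned table; they roughly read:

    build_rsdp(tables->rsdp, tables->linker, rsdt);   /* hw/i386 */
    build_rsdp(tables->rsdp, tables->linker, xsdt);   /* hw/arm/virt */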

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/arm/virt-acpi-build.c | 4 +---
 hw/i386/acpi-build.c     | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index f28a2faa53..fc59cce769 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -367,7 +367,7 @@ static void acpi_dsdt_add_power_button(Aml *scope)
 }
 
 /* RSDP */
-static GArray *
+static void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
     AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
@@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
     bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
         (char *)rsdp - rsdp_table->data, sizeof *rsdp,
         (char *)&rsdp->checksum - rsdp_table->data);
-
-    return rsdp_table;
 }
 
 static void
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 81d98fa34f..74419d0663 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,7 +2513,7 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
                  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static GArray *
+static void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
 {
     AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
@@ -2535,8 +2535,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
     bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
         (char *)rsdp - rsdp_table->data, sizeof *rsdp,
         (char *)&rsdp->checksum - rsdp_table->data);
-
-    return rsdp_table;
 }
 
 static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 04/24] hw: acpi: Export the RSDP build API
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

The hardware-reduced API will need to build RSDP as well, so we should
export this routine.
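
For illustration only (not part of this patch), a hardware-reduced
table build could then finish with something along these lines; the
function and variable names below are placeholders:

    static void acpi_reduced_build_rsdp(AcpiBuildTables *tables,
                                        unsigned xsdt_offset)
    {
        /* RSDP lives in FSEG memory, so it is allocated separately */
        build_rsdp(tables->rsdp, tables->linker, xsdt_offset);
    }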

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/aml-build.h | 3 +++
 hw/arm/virt-acpi-build.c    | 2 +-
 hw/i386/acpi-build.c        | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 73fc6659f2..c9bcb32d81 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -390,6 +390,9 @@ void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
 void
+build_rsdp(GArray *table_data,
+           BIOSLinker *linker, unsigned rsdt_tbl_offset);
+void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
            const char *oem_id, const char *oem_table_id);
 void
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index fc59cce769..623a6c4eac 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -367,7 +367,7 @@ static void acpi_dsdt_add_power_button(Aml *scope)
 }
 
 /* RSDP */
-static void
+void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
     AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 74419d0663..f7a67f5c9c 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,7 +2513,7 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
                  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static void
+void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
 {
     AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
Description Table). The RSDT only allows for 32-bit addresses and has
thus been deprecated. Since ACPI version 2.0, RSDPs should point at
XSDTs and no longer at RSDTs, although RSDTs are still supported for
backward compatibility.

Since version 2.0, RSDPs also carry an extended checksum, a complete
table length and a revision field.
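
For reference, this is how a guest is expected to validate such a
revision-2 RSDP (sketch, not QEMU code; both byte sums must be zero):

    static bool rsdp_v2_valid(const uint8_t *rsdp, uint32_t length)
    {
        uint8_t legacy = 0, extended = 0;
        uint32_t i;

        for (i = 0; i < 20; i++) {        /* ACPI 1.0 part only */
            legacy += rsdp[i];
        }
        for (i = 0; i < length; i++) {    /* whole table, length == 36 */
            extended += rsdp[i];
        }
        return legacy == 0 && extended == 0;
    }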

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/aml-build.h |  3 +++
 hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index c9bcb32d81..3580d0ce90 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -393,6 +393,9 @@ void
 build_rsdp(GArray *table_data,
            BIOSLinker *linker, unsigned rsdt_tbl_offset);
 void
+build_rsdp_xsdt(GArray *table_data,
+                BIOSLinker *linker, unsigned xsdt_tbl_offset);
+void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
            const char *oem_id, const char *oem_table_id);
 void
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 51b608432f..a030d40674 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
                  (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
 }
 
+/* RSDP pointing at an XSDT */
+void
+build_rsdp_xsdt(GArray *rsdp_table,
+                BIOSLinker *linker, unsigned xsdt_tbl_offset)
+{
+    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
+    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
+    unsigned xsdt_pa_offset =
+        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
+    unsigned xsdt_offset =
+        (char *)&rsdp->length - rsdp_table->data;
+
+    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
+                             true /* fseg memory */);
+
+    memcpy(&rsdp->signature, "RSD PTR ", 8);
+    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
+    rsdp->length = cpu_to_le32(sizeof(*rsdp));
+    /* version 2, we will use the XSDT pointer */
+    rsdp->revision = 0x02;
+
+    /* Address to be filled by Guest linker */
+    bios_linker_loader_add_pointer(linker,
+        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
+        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
+
+    /* Legacy checksum to be filled by Guest linker */
+    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+        (char *)rsdp - rsdp_table->data, xsdt_offset,
+        (char *)&rsdp->checksum - rsdp_table->data);
+
+    /* Extended checksum to be filled by Guest linker */
+    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
+        (char *)&rsdp->extended_checksum - rsdp_table->data);
+}
+
 void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
                        uint64_t len, int node, MemoryAffinityFlags flags)
 {
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 06/24] hw: acpi: Factorize the RSDP build API implementation
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

We can now share the RSDP build between the ARM and x86 architectures.
Here we make the default RSDP build use XSDT and keep the existing x86
ACPI build implementation using the legacy RSDT version.
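
After this patch the call sites roughly look like this:

    /* hw/arm/virt-acpi-build.c: default, XSDT-based revision-2 RSDP */
    build_rsdp(tables->rsdp, tables->linker, xsdt);

    /* hw/i386/acpi-build.c: keeps the deprecated RSDT-based RSDP */
    build_rsdp_rsdt(tables->rsdp, tables->linker, rsdt);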

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/aml-build.h |  8 ++++----
 hw/acpi/aml-build.c         | 28 +++++++++++++++++++++++++---
 hw/arm/virt-acpi-build.c    | 28 ----------------------------
 hw/i386/acpi-build.c        | 26 +-------------------------
 4 files changed, 30 insertions(+), 60 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 3580d0ce90..a2ef8b6f31 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -390,11 +390,11 @@ void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
 void
-build_rsdp(GArray *table_data,
-           BIOSLinker *linker, unsigned rsdt_tbl_offset);
+build_rsdp_rsdt(GArray *table_data,
+                BIOSLinker *linker, unsigned rsdt_tbl_offset);
 void
-build_rsdp_xsdt(GArray *table_data,
-                BIOSLinker *linker, unsigned xsdt_tbl_offset);
+build_rsdp(GArray *table_data,
+           BIOSLinker *linker, unsigned xsdt_tbl_offset);
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
            const char *oem_id, const char *oem_table_id);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index a030d40674..8c2388274c 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1651,10 +1651,32 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
                  (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
 }
 
+/* Legacy RSDP pointing at an RSDT. This is deprecated */
+void build_rsdp_rsdt(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
+{
+    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
+    unsigned rsdt_pa_size = sizeof(rsdp->rsdt_physical_address);
+    unsigned rsdt_pa_offset =
+        (char *)&rsdp->rsdt_physical_address - rsdp_table->data;
+
+    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
+                             true /* fseg memory */);
+
+    memcpy(&rsdp->signature, "RSD PTR ", 8);
+    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
+    /* Address to be filled by Guest linker */
+    bios_linker_loader_add_pointer(linker,
+        ACPI_BUILD_RSDP_FILE, rsdt_pa_offset, rsdt_pa_size,
+        ACPI_BUILD_TABLE_FILE, rsdt_tbl_offset);
+
+    /* Checksum to be filled by Guest linker */
+    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
+        (char *)&rsdp->checksum - rsdp_table->data);
+}
+
 /* RSDP pointing at an XSDT */
-void
-build_rsdp_xsdt(GArray *rsdp_table,
-                BIOSLinker *linker, unsigned xsdt_tbl_offset)
+void build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
     AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
     unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 623a6c4eac..261363e20c 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -366,34 +366,6 @@ static void acpi_dsdt_add_power_button(Aml *scope)
     aml_append(scope, dev);
 }
 
-/* RSDP */
-void
-build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
-{
-    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
-    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
-    unsigned xsdt_pa_offset =
-        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
-
-    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
-                             true /* fseg memory */);
-
-    memcpy(&rsdp->signature, "RSD PTR ", sizeof(rsdp->signature));
-    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, sizeof(rsdp->oem_id));
-    rsdp->length = cpu_to_le32(sizeof(*rsdp));
-    rsdp->revision = 0x02;
-
-    /* Address to be filled by Guest linker */
-    bios_linker_loader_add_pointer(linker,
-        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
-        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
-
-    /* Checksum to be filled by Guest linker */
-    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
-        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
-        (char *)&rsdp->checksum - rsdp_table->data);
-}
-
 static void
 build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index f7a67f5c9c..cfc2444d0d 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,30 +2513,6 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
                  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-void
-build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
-{
-    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
-    unsigned rsdt_pa_size = sizeof(rsdp->rsdt_physical_address);
-    unsigned rsdt_pa_offset =
-        (char *)&rsdp->rsdt_physical_address - rsdp_table->data;
-
-    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
-                             true /* fseg memory */);
-
-    memcpy(&rsdp->signature, "RSD PTR ", 8);
-    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
-    /* Address to be filled by Guest linker */
-    bios_linker_loader_add_pointer(linker,
-        ACPI_BUILD_RSDP_FILE, rsdt_pa_offset, rsdt_pa_size,
-        ACPI_BUILD_TABLE_FILE, rsdt_tbl_offset);
-
-    /* Checksum to be filled by Guest linker */
-    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
-        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
-        (char *)&rsdp->checksum - rsdp_table->data);
-}
-
 static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
 {
     Object *pci_host;
@@ -2681,7 +2657,7 @@ void acpi_build(AcpiBuildTables *tables,
                slic_oem.id, slic_oem.table_id);
 
     /* RSDP is in FSEG memory, so allocate it separately */
-    build_rsdp(tables->rsdp, tables->linker, rsdt);
+    build_rsdp_rsdt(tables->rsdp, tables->linker, rsdt);
 
     /* We'll expose it all to Guest so we want to reduce
      * chance of size changes.
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong

From: Yang Zhong <yang.zhong@intel.com>

Most of the AML build routines under acpi-build are not even
architecture specific. They can be moved to the more generic hw/acpi
folder where they could be shared across machine types and
architectures.
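
As an illustration of the intent (sketch only, the addresses and the
machine hook are made up), a non-PC machine type could then build its
MCFG straight from the shared helper:

    #include "hw/acpi/aml-build.h"

    static void build_virt_mcfg(GArray *table_data, BIOSLinker *linker)
    {
        AcpiMcfgInfo mcfg = {
            .mcfg_base = 0xb0000000ULL,  /* illustrative ECAM base */
            .mcfg_size = 0x10000000,     /* illustrative ECAM size */
        };

        build_mcfg(table_data, linker, &mcfg);
    }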

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 include/hw/acpi/aml-build.h |  25 ++
 hw/acpi/aml-build.c         | 498 ++++++++++++++++++++++++++++++++++
 hw/arm/virt-acpi-build.c    |   4 +-
 hw/i386/acpi-build.c        | 518 +-----------------------------------
 4 files changed, 528 insertions(+), 517 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index a2ef8b6f31..4f678c45a5 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -3,6 +3,7 @@
 
 #include "hw/acpi/acpi-defs.h"
 #include "hw/acpi/bios-linker-loader.h"
+#include "hw/pci/pcie_host.h"
 
 /* Reserve RAM space for tables: add another order of magnitude. */
 #define ACPI_BUILD_TABLE_MAX_SIZE         0x200000
@@ -223,6 +224,21 @@ struct AcpiBuildTables {
     BIOSLinker *linker;
 } AcpiBuildTables;
 
+typedef struct AcpiMcfgInfo {
+    uint64_t mcfg_base;
+    uint32_t mcfg_size;
+} AcpiMcfgInfo;
+
+typedef struct CrsRangeEntry {
+    uint64_t base;
+    uint64_t limit;
+} CrsRangeEntry;
+
+typedef struct CrsRangeSet {
+    GPtrArray *io_ranges;
+    GPtrArray *mem_ranges;
+    GPtrArray *mem_64bit_ranges;
+} CrsRangeSet;
 /**
  * init_aml_allocator:
  *
@@ -389,6 +405,15 @@ void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
+Aml *build_osc_method(void);
+void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
+Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
+Aml *build_prt(bool is_pci0_prt);
+void crs_range_set_init(CrsRangeSet *range_set);
+Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+                                  uint64_t start, uint64_t end);
+void crs_range_set_free(CrsRangeSet *range_set);
 void
 build_rsdp_rsdt(GArray *table_data,
                 BIOSLinker *linker, unsigned rsdt_tbl_offset);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 8c2388274c..d3242c6b31 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -25,6 +25,10 @@
 #include "qemu/bswap.h"
 #include "qemu/bitops.h"
 #include "sysemu/numa.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
+#include "qemu/range.h"
+#include "hw/pci/pci_bridge.h"
 
 static GArray *build_alloc_array(void)
 {
@@ -1597,6 +1601,500 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
     g_array_free(tables->vmgenid, mfre);
 }
 
+static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
+{
+    CrsRangeEntry *entry;
+
+    entry = g_malloc(sizeof(*entry));
+    entry->base = base;
+    entry->limit = limit;
+
+    g_ptr_array_add(ranges, entry);
+}
+
+static void crs_range_free(gpointer data)
+{
+    CrsRangeEntry *entry = (CrsRangeEntry *)data;
+    g_free(entry);
+}
+
+void crs_range_set_init(CrsRangeSet *range_set)
+{
+    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+    range_set->mem_64bit_ranges =
+            g_ptr_array_new_with_free_func(crs_range_free);
+}
+
+void crs_range_set_free(CrsRangeSet *range_set)
+{
+    g_ptr_array_free(range_set->io_ranges, true);
+    g_ptr_array_free(range_set->mem_ranges, true);
+    g_ptr_array_free(range_set->mem_64bit_ranges, true);
+}
+
+static gint crs_range_compare(gconstpointer a, gconstpointer b)
+{
+     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
+     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
+
+     return (int64_t)entry_a->base - (int64_t)entry_b->base;
+}
+
+/*
+ * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
+ * interval, computes the 'free' ranges from the same interval.
+ * Example: If the input array is { [a1 - a2], [b1 - b2] }, the function
+ * will return { [start - a1], [a2 - b1], [b2 - end] }.
+ */
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+                                         uint64_t start, uint64_t end)
+{
+    GPtrArray *free_ranges = g_ptr_array_new();
+    uint64_t free_base = start;
+    int i;
+
+    g_ptr_array_sort(ranges, crs_range_compare);
+    for (i = 0; i < ranges->len; i++) {
+        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
+
+        if (free_base < used->base) {
+            crs_range_insert(free_ranges, free_base, used->base - 1);
+        }
+
+        free_base = used->limit + 1;
+    }
+
+    if (free_base < end) {
+        crs_range_insert(free_ranges, free_base, end);
+    }
+
+    g_ptr_array_set_size(ranges, 0);
+    for (i = 0; i < free_ranges->len; i++) {
+        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
+    }
+
+    g_ptr_array_free(free_ranges, true);
+}
+
+/*
+ * crs_range_merge - merges adjacent ranges in the given array.
+ * Array elements are deleted and replaced with the merged ranges.
+ */
+static void crs_range_merge(GPtrArray *range)
+{
+    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
+    CrsRangeEntry *entry;
+    uint64_t range_base, range_limit;
+    int i;
+
+    if (!range->len) {
+        return;
+    }
+
+    g_ptr_array_sort(range, crs_range_compare);
+
+    entry = g_ptr_array_index(range, 0);
+    range_base = entry->base;
+    range_limit = entry->limit;
+    for (i = 1; i < range->len; i++) {
+        entry = g_ptr_array_index(range, i);
+        if (entry->base - 1 == range_limit) {
+            range_limit = entry->limit;
+        } else {
+            crs_range_insert(tmp, range_base, range_limit);
+            range_base = entry->base;
+            range_limit = entry->limit;
+        }
+    }
+    crs_range_insert(tmp, range_base, range_limit);
+
+    g_ptr_array_set_size(range, 0);
+    for (i = 0; i < tmp->len; i++) {
+        entry = g_ptr_array_index(tmp, i);
+        crs_range_insert(range, entry->base, entry->limit);
+    }
+    g_ptr_array_free(tmp, true);
+}
+
+Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
+{
+    Aml *crs = aml_resource_template();
+    CrsRangeSet temp_range_set;
+    CrsRangeEntry *entry;
+    uint8_t max_bus = pci_bus_num(host->bus);
+    uint8_t type;
+    int devfn;
+    int i;
+
+    crs_range_set_init(&temp_range_set);
+    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
+        uint64_t range_base, range_limit;
+        PCIDevice *dev = host->bus->devices[devfn];
+
+        if (!dev) {
+            continue;
+        }
+
+        for (i = 0; i < PCI_NUM_REGIONS; i++) {
+            PCIIORegion *r = &dev->io_regions[i];
+
+            range_base = r->addr;
+            range_limit = r->addr + r->size - 1;
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (!range_base || range_base > range_limit) {
+                continue;
+            }
+
+            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
+                crs_range_insert(temp_range_set.io_ranges,
+                                 range_base, range_limit);
+            } else { /* "memory" */
+                crs_range_insert(temp_range_set.mem_ranges,
+                                 range_base, range_limit);
+            }
+        }
+
+        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
+        if (type == PCI_HEADER_TYPE_BRIDGE) {
+            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
+            if (subordinate > max_bus) {
+                max_bus = subordinate;
+            }
+
+            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
+            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                crs_range_insert(temp_range_set.io_ranges,
+                                 range_base, range_limit);
+            }
+
+            range_base =
+                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
+            range_limit =
+                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                uint64_t length = range_limit - range_base + 1;
+                if (range_limit <= UINT32_MAX && length <= UINT32_MAX)  {
+                    crs_range_insert(temp_range_set.mem_ranges,
+                                     range_base, range_limit);
+                } else {
+                    crs_range_insert(temp_range_set.mem_64bit_ranges,
+                                     range_base, range_limit);
+                }
+            }
+
+            range_base =
+                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
+            range_limit =
+                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                uint64_t length = range_limit - range_base + 1;
+                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
+                    crs_range_insert(temp_range_set.mem_ranges,
+                                     range_base, range_limit);
+                } else {
+                    crs_range_insert(temp_range_set.mem_64bit_ranges,
+                                     range_base, range_limit);
+                }
+            }
+        }
+    }
+
+    crs_range_merge(temp_range_set.io_ranges);
+    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
+        aml_append(crs,
+                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                               AML_POS_DECODE, AML_ENTIRE_RANGE,
+                               0, entry->base, entry->limit, 0,
+                               entry->limit - entry->base + 1));
+        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
+    }
+
+    crs_range_merge(temp_range_set.mem_ranges);
+    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
+        aml_append(crs,
+                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
+                                    AML_READ_WRITE,
+                                    0, entry->base, entry->limit, 0,
+                                    entry->limit - entry->base + 1));
+        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
+    }
+
+    crs_range_merge(temp_range_set.mem_64bit_ranges);
+    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
+        aml_append(crs,
+                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
+                                    AML_READ_WRITE,
+                                    0, entry->base, entry->limit, 0,
+                                    entry->limit - entry->base + 1));
+        crs_range_insert(range_set->mem_64bit_ranges,
+                         entry->base, entry->limit);
+    }
+
+    crs_range_set_free(&temp_range_set);
+
+    aml_append(crs,
+        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
+                            0,
+                            pci_bus_num(host->bus),
+                            max_bus,
+                            0,
+                            max_bus - pci_bus_num(host->bus) + 1));
+
+    return crs;
+}
+
+Aml *build_osc_method(void)
+{
+    Aml *if_ctx;
+    Aml *if_ctx2;
+    Aml *else_ctx;
+    Aml *method;
+    Aml *a_cwd1 = aml_name("CDW1");
+    Aml *a_ctrl = aml_local(0);
+
+    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
+    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
+
+    if_ctx = aml_if(aml_equal(
+        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
+    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
+    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
+
+    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
+
+    /*
+     * Always allow native PME, AER (no dependencies)
+     * Allow SHPC (PCI bridges can have SHPC controller)
+     */
+    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
+
+    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
+    /* Unknown revision */
+    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
+    aml_append(if_ctx, if_ctx2);
+
+    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
+    /* Capabilities bits were masked */
+    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
+    aml_append(if_ctx, if_ctx2);
+
+    /* Update DWORD3 in the buffer */
+    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
+    aml_append(method, if_ctx);
+
+    else_ctx = aml_else();
+    /* Unrecognized UUID */
+    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
+    aml_append(method, else_ctx);
+
+    aml_append(method, aml_return(aml_arg(3)));
+    return method;
+}
+
+void
+build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
+{
+    AcpiTableMcfg *mcfg;
+    const char *sig;
+    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
+
+    mcfg = acpi_data_push(table_data, len);
+    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
+    /* Only a single allocation so no need to play with segments */
+    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
+    mcfg->allocation[0].start_bus_number = 0;
+    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
+
+    /* MCFG is used for ECAM which can be enabled or disabled by guest.
+     * To avoid table size changes (which create migration issues),
+     * always create the table even if there are no allocations,
+     * but set the signature to a reserved value in this case.
+     * ACPI spec requires OSPMs to ignore such tables.
+     */
+    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
+        /* Reserved signature: ignored by OSPM */
+        sig = "QEMU";
+    } else {
+        sig = "MCFG";
+    }
+    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
+}
+
+Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
+{
+    Aml *dev;
+    Aml *crs;
+    Aml *method;
+    uint32_t irqs;
+
+    dev = aml_device("%s", name);
+    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
+    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
+
+    crs = aml_resource_template();
+    irqs = gsi;
+    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
+                                  AML_SHARED, &irqs, 1));
+    aml_append(dev, aml_name_decl("_PRS", crs));
+
+    aml_append(dev, aml_name_decl("_CRS", crs));
+
+    /*
+     * _DIS can be no-op because the interrupt cannot be disabled.
+     */
+    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
+    aml_append(dev, method);
+
+    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
+    aml_append(dev, method);
+
+    return dev;
+}
+
+/**
+ * build_prt_entry:
+ * @link_name: link name for PCI route entry
+ *
+ * build AML package containing a PCI route entry for @link_name
+ */
+static Aml *build_prt_entry(const char *link_name)
+{
+    Aml *a_zero = aml_int(0);
+    Aml *pkg = aml_package(4);
+    aml_append(pkg, a_zero);
+    aml_append(pkg, a_zero);
+    aml_append(pkg, aml_name("%s", link_name));
+    aml_append(pkg, a_zero);
+    return pkg;
+}
+
+/*
+ * initialize_route - Initialize the interrupt routing rule
+ * through a specific LINK:
+ *  if (lnk_idx == idx)
+ *      route using link 'link_name'
+ */
+static Aml *initialize_route(Aml *route, const char *link_name,
+                             Aml *lnk_idx, int idx)
+{
+    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
+    Aml *pkg = build_prt_entry(link_name);
+
+    aml_append(if_ctx, aml_store(pkg, route));
+
+    return if_ctx;
+}
+
+/*
+ * build_prt - Define interrupt rounting rules
+ *
+ * Returns an array of 128 routes, one for each device,
+ * based on device location.
+ * The main goal is to equaly distribute the interrupts
+ * over the 4 existing ACPI links (works only for i440fx).
+ * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
+ *
+ */
+Aml *build_prt(bool is_pci0_prt)
+{
+    Aml *method, *while_ctx, *pin, *res;
+
+    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
+    res = aml_local(0);
+    pin = aml_local(1);
+    aml_append(method, aml_store(aml_package(128), res));
+    aml_append(method, aml_store(aml_int(0), pin));
+
+    /* while (pin < 128) */
+    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
+    {
+        Aml *slot = aml_local(2);
+        Aml *lnk_idx = aml_local(3);
+        Aml *route = aml_local(4);
+
+        /* slot = pin >> 2 */
+        aml_append(while_ctx,
+                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
+        /* lnk_idx = (slot + pin) & 3 */
+        aml_append(while_ctx,
+            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
+                      lnk_idx));
+
+        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
+        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
+        if (is_pci0_prt) {
+            Aml *if_device_1, *if_pin_4, *else_pin_4;
+
+            /* device 1 is the power-management device, needs SCI */
+            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
+            {
+                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
+                {
+                    aml_append(if_pin_4,
+                        aml_store(build_prt_entry("LNKS"), route));
+                }
+                aml_append(if_device_1, if_pin_4);
+                else_pin_4 = aml_else();
+                {
+                    aml_append(else_pin_4,
+                        aml_store(build_prt_entry("LNKA"), route));
+                }
+                aml_append(if_device_1, else_pin_4);
+            }
+            aml_append(while_ctx, if_device_1);
+        } else {
+            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
+        }
+        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
+        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
+
+        /* route[0] = 0x[slot]FFFF */
+        aml_append(while_ctx,
+            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
+                             NULL),
+                      aml_index(route, aml_int(0))));
+        /* route[1] = pin & 3 */
+        aml_append(while_ctx,
+            aml_store(aml_and(pin, aml_int(3), NULL),
+                      aml_index(route, aml_int(1))));
+        /* res[pin] = route */
+        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
+        /* pin++ */
+        aml_append(while_ctx, aml_increment(pin));
+    }
+    aml_append(method, while_ctx);
+    /* return res*/
+    aml_append(method, aml_return(res));
+
+    return method;
+}
+
 /* Build rsdt table */
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 261363e20c..c9b4916ba7 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -545,7 +545,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 }
 
 static void
-build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
+virt_build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
     AcpiTableMcfg *mcfg;
     const MemMapEntry *memmap = vms->memmap;
@@ -790,7 +790,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
     build_gtdt(tables_blob, tables->linker, vms);
 
     acpi_add_table(table_offsets, tables_blob);
-    build_mcfg(tables_blob, tables->linker, vms);
+    virt_build_mcfg(tables_blob, tables->linker, vms);
 
     acpi_add_table(table_offsets, tables_blob);
     build_spcr(tables_blob, tables->linker, vms);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index cfc2444d0d..996d8a11dc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -27,7 +27,6 @@
 #include "qemu-common.h"
 #include "qemu/bitmap.h"
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
 #include "qom/cpu.h"
 #include "target/i386/cpu.h"
 #include "hw/misc/pvpanic.h"
@@ -53,7 +52,6 @@
 #include "hw/acpi/piix4.h"
 #include "hw/acpi/pcihp.h"
 #include "hw/i386/ich9.h"
-#include "hw/pci/pci_bus.h"
 #include "hw/pci-host/q35.h"
 #include "hw/i386/x86-iommu.h"
 
@@ -86,11 +84,6 @@
 /* Default IOAPIC ID */
 #define ACPI_BUILD_IOAPIC_ID 0x0
 
-typedef struct AcpiMcfgInfo {
-    uint64_t mcfg_base;
-    uint32_t mcfg_size;
-} AcpiMcfgInfo;
-
 typedef struct AcpiPmInfo {
     bool s3_disabled;
     bool s4_disabled;
@@ -567,403 +560,6 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
     qobject_unref(bsel);
 }
 
-/**
- * build_prt_entry:
- * @link_name: link name for PCI route entry
- *
- * build AML package containing a PCI route entry for @link_name
- */
-static Aml *build_prt_entry(const char *link_name)
-{
-    Aml *a_zero = aml_int(0);
-    Aml *pkg = aml_package(4);
-    aml_append(pkg, a_zero);
-    aml_append(pkg, a_zero);
-    aml_append(pkg, aml_name("%s", link_name));
-    aml_append(pkg, a_zero);
-    return pkg;
-}
-
-/*
- * initialize_route - Initialize the interrupt routing rule
- * through a specific LINK:
- *  if (lnk_idx == idx)
- *      route using link 'link_name'
- */
-static Aml *initialize_route(Aml *route, const char *link_name,
-                             Aml *lnk_idx, int idx)
-{
-    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
-    Aml *pkg = build_prt_entry(link_name);
-
-    aml_append(if_ctx, aml_store(pkg, route));
-
-    return if_ctx;
-}
-
-/*
- * build_prt - Define interrupt rounting rules
- *
- * Returns an array of 128 routes, one for each device,
- * based on device location.
- * The main goal is to equaly distribute the interrupts
- * over the 4 existing ACPI links (works only for i440fx).
- * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
- *
- */
-static Aml *build_prt(bool is_pci0_prt)
-{
-    Aml *method, *while_ctx, *pin, *res;
-
-    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
-    res = aml_local(0);
-    pin = aml_local(1);
-    aml_append(method, aml_store(aml_package(128), res));
-    aml_append(method, aml_store(aml_int(0), pin));
-
-    /* while (pin < 128) */
-    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
-    {
-        Aml *slot = aml_local(2);
-        Aml *lnk_idx = aml_local(3);
-        Aml *route = aml_local(4);
-
-        /* slot = pin >> 2 */
-        aml_append(while_ctx,
-                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
-        /* lnk_idx = (slot + pin) & 3 */
-        aml_append(while_ctx,
-            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
-                      lnk_idx));
-
-        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
-        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
-        if (is_pci0_prt) {
-            Aml *if_device_1, *if_pin_4, *else_pin_4;
-
-            /* device 1 is the power-management device, needs SCI */
-            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
-            {
-                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
-                {
-                    aml_append(if_pin_4,
-                        aml_store(build_prt_entry("LNKS"), route));
-                }
-                aml_append(if_device_1, if_pin_4);
-                else_pin_4 = aml_else();
-                {
-                    aml_append(else_pin_4,
-                        aml_store(build_prt_entry("LNKA"), route));
-                }
-                aml_append(if_device_1, else_pin_4);
-            }
-            aml_append(while_ctx, if_device_1);
-        } else {
-            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
-        }
-        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
-        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
-
-        /* route[0] = 0x[slot]FFFF */
-        aml_append(while_ctx,
-            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
-                             NULL),
-                      aml_index(route, aml_int(0))));
-        /* route[1] = pin & 3 */
-        aml_append(while_ctx,
-            aml_store(aml_and(pin, aml_int(3), NULL),
-                      aml_index(route, aml_int(1))));
-        /* res[pin] = route */
-        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
-        /* pin++ */
-        aml_append(while_ctx, aml_increment(pin));
-    }
-    aml_append(method, while_ctx);
-    /* return res*/
-    aml_append(method, aml_return(res));
-
-    return method;
-}
-
-typedef struct CrsRangeEntry {
-    uint64_t base;
-    uint64_t limit;
-} CrsRangeEntry;
-
-static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
-{
-    CrsRangeEntry *entry;
-
-    entry = g_malloc(sizeof(*entry));
-    entry->base = base;
-    entry->limit = limit;
-
-    g_ptr_array_add(ranges, entry);
-}
-
-static void crs_range_free(gpointer data)
-{
-    CrsRangeEntry *entry = (CrsRangeEntry *)data;
-    g_free(entry);
-}
-
-typedef struct CrsRangeSet {
-    GPtrArray *io_ranges;
-    GPtrArray *mem_ranges;
-    GPtrArray *mem_64bit_ranges;
- } CrsRangeSet;
-
-static void crs_range_set_init(CrsRangeSet *range_set)
-{
-    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
-    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
-    range_set->mem_64bit_ranges =
-            g_ptr_array_new_with_free_func(crs_range_free);
-}
-
-static void crs_range_set_free(CrsRangeSet *range_set)
-{
-    g_ptr_array_free(range_set->io_ranges, true);
-    g_ptr_array_free(range_set->mem_ranges, true);
-    g_ptr_array_free(range_set->mem_64bit_ranges, true);
-}
-
-static gint crs_range_compare(gconstpointer a, gconstpointer b)
-{
-     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
-     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
-
-     return (int64_t)entry_a->base - (int64_t)entry_b->base;
-}
-
-/*
- * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
- * interval, computes the 'free' ranges from the same interval.
- * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
- * will return { [base - a1], [a2 - b1], [b2 - limit] }.
- */
-static void crs_replace_with_free_ranges(GPtrArray *ranges,
-                                         uint64_t start, uint64_t end)
-{
-    GPtrArray *free_ranges = g_ptr_array_new();
-    uint64_t free_base = start;
-    int i;
-
-    g_ptr_array_sort(ranges, crs_range_compare);
-    for (i = 0; i < ranges->len; i++) {
-        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
-
-        if (free_base < used->base) {
-            crs_range_insert(free_ranges, free_base, used->base - 1);
-        }
-
-        free_base = used->limit + 1;
-    }
-
-    if (free_base < end) {
-        crs_range_insert(free_ranges, free_base, end);
-    }
-
-    g_ptr_array_set_size(ranges, 0);
-    for (i = 0; i < free_ranges->len; i++) {
-        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
-    }
-
-    g_ptr_array_free(free_ranges, true);
-}
-
-/*
- * crs_range_merge - merges adjacent ranges in the given array.
- * Array elements are deleted and replaced with the merged ranges.
- */
-static void crs_range_merge(GPtrArray *range)
-{
-    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
-    CrsRangeEntry *entry;
-    uint64_t range_base, range_limit;
-    int i;
-
-    if (!range->len) {
-        return;
-    }
-
-    g_ptr_array_sort(range, crs_range_compare);
-
-    entry = g_ptr_array_index(range, 0);
-    range_base = entry->base;
-    range_limit = entry->limit;
-    for (i = 1; i < range->len; i++) {
-        entry = g_ptr_array_index(range, i);
-        if (entry->base - 1 == range_limit) {
-            range_limit = entry->limit;
-        } else {
-            crs_range_insert(tmp, range_base, range_limit);
-            range_base = entry->base;
-            range_limit = entry->limit;
-        }
-    }
-    crs_range_insert(tmp, range_base, range_limit);
-
-    g_ptr_array_set_size(range, 0);
-    for (i = 0; i < tmp->len; i++) {
-        entry = g_ptr_array_index(tmp, i);
-        crs_range_insert(range, entry->base, entry->limit);
-    }
-    g_ptr_array_free(tmp, true);
-}
-
-static Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
-{
-    Aml *crs = aml_resource_template();
-    CrsRangeSet temp_range_set;
-    CrsRangeEntry *entry;
-    uint8_t max_bus = pci_bus_num(host->bus);
-    uint8_t type;
-    int devfn;
-    int i;
-
-    crs_range_set_init(&temp_range_set);
-    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
-        uint64_t range_base, range_limit;
-        PCIDevice *dev = host->bus->devices[devfn];
-
-        if (!dev) {
-            continue;
-        }
-
-        for (i = 0; i < PCI_NUM_REGIONS; i++) {
-            PCIIORegion *r = &dev->io_regions[i];
-
-            range_base = r->addr;
-            range_limit = r->addr + r->size - 1;
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (!range_base || range_base > range_limit) {
-                continue;
-            }
-
-            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
-                crs_range_insert(temp_range_set.io_ranges,
-                                 range_base, range_limit);
-            } else { /* "memory" */
-                crs_range_insert(temp_range_set.mem_ranges,
-                                 range_base, range_limit);
-            }
-        }
-
-        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
-        if (type == PCI_HEADER_TYPE_BRIDGE) {
-            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
-            if (subordinate > max_bus) {
-                max_bus = subordinate;
-            }
-
-            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
-            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                crs_range_insert(temp_range_set.io_ranges,
-                                 range_base, range_limit);
-            }
-
-            range_base =
-                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
-            range_limit =
-                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                uint64_t length = range_limit - range_base + 1;
-                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
-                    crs_range_insert(temp_range_set.mem_ranges,
-                                     range_base, range_limit);
-                } else {
-                    crs_range_insert(temp_range_set.mem_64bit_ranges,
-                                     range_base, range_limit);
-                }
-            }
-
-            range_base =
-                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
-            range_limit =
-                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                uint64_t length = range_limit - range_base + 1;
-                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
-                    crs_range_insert(temp_range_set.mem_ranges,
-                                     range_base, range_limit);
-                } else {
-                    crs_range_insert(temp_range_set.mem_64bit_ranges,
-                                     range_base, range_limit);
-                }
-            }
-        }
-    }
-
-    crs_range_merge(temp_range_set.io_ranges);
-    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
-        aml_append(crs,
-                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                               AML_POS_DECODE, AML_ENTIRE_RANGE,
-                               0, entry->base, entry->limit, 0,
-                               entry->limit - entry->base + 1));
-        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
-    }
-
-    crs_range_merge(temp_range_set.mem_ranges);
-    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
-        aml_append(crs,
-                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
-                                    AML_READ_WRITE,
-                                    0, entry->base, entry->limit, 0,
-                                    entry->limit - entry->base + 1));
-        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
-    }
-
-    crs_range_merge(temp_range_set.mem_64bit_ranges);
-    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
-        aml_append(crs,
-                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
-                                    AML_READ_WRITE,
-                                    0, entry->base, entry->limit, 0,
-                                    entry->limit - entry->base + 1));
-        crs_range_insert(range_set->mem_64bit_ranges,
-                         entry->base, entry->limit);
-    }
-
-    crs_range_set_free(&temp_range_set);
-
-    aml_append(crs,
-        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
-                            0,
-                            pci_bus_num(host->bus),
-                            max_bus,
-                            0,
-                            max_bus - pci_bus_num(host->bus) + 1));
-
-    return crs;
-}
-
 static void build_hpet_aml(Aml *table)
 {
     Aml *crs;
@@ -1334,37 +930,6 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
     return dev;
  }
 
-static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
-{
-    Aml *dev;
-    Aml *crs;
-    Aml *method;
-    uint32_t irqs;
-
-    dev = aml_device("%s", name);
-    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
-    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
-
-    crs = aml_resource_template();
-    irqs = gsi;
-    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
-                                  AML_SHARED, &irqs, 1));
-    aml_append(dev, aml_name_decl("_PRS", crs));
-
-    aml_append(dev, aml_name_decl("_CRS", crs));
-
-    /*
-     * _DIS can be no-op because the interrupt cannot be disabled.
-     */
-    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
-    aml_append(dev, method);
-
-    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
-    aml_append(dev, method);
-
-    return dev;
-}
-
 /* _CRS method - get current settings */
 static Aml *build_iqcr_method(bool is_piix4)
 {
@@ -1728,54 +1293,6 @@ static void build_piix4_pci_hotplug(Aml *table)
     aml_append(table, scope);
 }
 
-static Aml *build_q35_osc_method(void)
-{
-    Aml *if_ctx;
-    Aml *if_ctx2;
-    Aml *else_ctx;
-    Aml *method;
-    Aml *a_cwd1 = aml_name("CDW1");
-    Aml *a_ctrl = aml_local(0);
-
-    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
-    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
-
-    if_ctx = aml_if(aml_equal(
-        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
-
-    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
-
-    /*
-     * Always allow native PME, AER (no dependencies)
-     * Allow SHPC (PCI bridges can have SHPC controller)
-     */
-    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
-    /* Unknown revision */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
-    /* Capabilities bits were masked */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    /* Update DWORD3 in the buffer */
-    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
-    aml_append(method, if_ctx);
-
-    else_ctx = aml_else();
-    /* Unrecognized UUID */
-    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
-    aml_append(method, else_ctx);
-
-    aml_append(method, aml_return(aml_arg(3)));
-    return method;
-}
-
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
@@ -1818,7 +1335,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
         aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
         aml_append(dev, aml_name_decl("_UID", aml_int(1)));
-        aml_append(dev, build_q35_osc_method());
+        aml_append(dev, build_osc_method());
         aml_append(sb_scope, dev);
         aml_append(dsdt, sb_scope);
 
@@ -1883,7 +1400,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
             if (pci_bus_is_express(bus)) {
-                aml_append(dev, build_q35_osc_method());
+                aml_append(dev, build_osc_method());
             }
 
             if (numa_node != NUMA_NODE_UNASSIGNED) {
@@ -2370,35 +1887,6 @@ build_srat(GArray *table_data, BIOSLinker *linker,
                  table_data->len - srat_start, 1, NULL, NULL);
 }
 
-static void
-build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
-{
-    AcpiTableMcfg *mcfg;
-    const char *sig;
-    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
-
-    mcfg = acpi_data_push(table_data, len);
-    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
-    /* Only a single allocation so no need to play with segments */
-    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
-    mcfg->allocation[0].start_bus_number = 0;
-    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
-
-    /* MCFG is used for ECAM which can be enabled or disabled by guest.
-     * To avoid table size changes (which create migration issues),
-     * always create the table even if there are no allocations,
-     * but set the signature to a reserved value in this case.
-     * ACPI spec requires OSPMs to ignore such tables.
-     */
-    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
-        /* Reserved signature: ignored by OSPM */
-        sig = "QEMU";
-    } else {
-        sig = "MCFG";
-    }
-    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
-}
-
 /*
  * VT-d spec 8.1 DMA Remapping Reporting Structure
  * (version Oct. 2014 or later)
@@ -2626,7 +2114,7 @@ void acpi_build(AcpiBuildTables *tables,
     }
     if (acpi_get_mcfg(&mcfg)) {
         acpi_add_table(table_offsets, tables_blob);
-        build_mcfg_q35(tables_blob, tables->linker, &mcfg);
+        build_mcfg(tables_blob, tables->linker, &mcfg);
     }
     if (x86_iommu_get_default()) {
         IommuType IOMMUType = x86_iommu_get_type();
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [PATCH v5 07/24] hw: acpi: Generalize AML build routines
@ 2018-11-05  1:40   ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, Igor Mammedov, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

From: Yang Zhong <yang.zhong@intel.com>

Most of the AML build routines under hw/i386/acpi-build.c are not
architecture specific. They can be moved to the more generic hw/acpi
folder, where they can be shared across machine types and
architectures.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 include/hw/acpi/aml-build.h |  25 ++
 hw/acpi/aml-build.c         | 498 ++++++++++++++++++++++++++++++++++
 hw/arm/virt-acpi-build.c    |   4 +-
 hw/i386/acpi-build.c        | 518 +-----------------------------------
 4 files changed, 528 insertions(+), 517 deletions(-)
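
For reference, a minimal sketch (not part of this series) of how another
machine type could consume the newly exported MCFG helper. The function
name and the base/size values below are placeholders chosen purely for
illustration:

    #include "qemu/osdep.h"
    #include "hw/acpi/aml-build.h"

    /* Hypothetical caller: emit an MCFG table via the shared helper. */
    static void example_build_mcfg(GArray *table_offsets, GArray *table_data,
                                   BIOSLinker *linker)
    {
        AcpiMcfgInfo mcfg = {
            .mcfg_base = 0xb0000000ULL,     /* placeholder ECAM base */
            .mcfg_size = 256 * 1024 * 1024, /* placeholder ECAM window size */
        };

        /* Same pattern q35 and arm/virt follow: reserve a table slot,
         * then build the table into the blob. */
        acpi_add_table(table_offsets, table_data);
        build_mcfg(table_data, linker, &mcfg);
    }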

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index a2ef8b6f31..4f678c45a5 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -3,6 +3,7 @@
 
 #include "hw/acpi/acpi-defs.h"
 #include "hw/acpi/bios-linker-loader.h"
+#include "hw/pci/pcie_host.h"
 
 /* Reserve RAM space for tables: add another order of magnitude. */
 #define ACPI_BUILD_TABLE_MAX_SIZE         0x200000
@@ -223,6 +224,21 @@ struct AcpiBuildTables {
     BIOSLinker *linker;
 } AcpiBuildTables;
 
+typedef struct AcpiMcfgInfo {
+    uint64_t mcfg_base;
+    uint32_t mcfg_size;
+} AcpiMcfgInfo;
+
+typedef struct CrsRangeEntry {
+    uint64_t base;
+    uint64_t limit;
+} CrsRangeEntry;
+
+typedef struct CrsRangeSet {
+    GPtrArray *io_ranges;
+    GPtrArray *mem_ranges;
+    GPtrArray *mem_64bit_ranges;
+} CrsRangeSet;
 /**
  * init_aml_allocator:
  *
@@ -389,6 +405,15 @@ void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
+Aml *build_osc_method(void);
+void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
+Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
+Aml *build_prt(bool is_pci0_prt);
+void crs_range_set_init(CrsRangeSet *range_set);
+Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+                                  uint64_t start, uint64_t end);
+void crs_range_set_free(CrsRangeSet *range_set);
 void
 build_rsdp_rsdt(GArray *table_data,
                 BIOSLinker *linker, unsigned rsdt_tbl_offset);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 8c2388274c..d3242c6b31 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -25,6 +25,10 @@
 #include "qemu/bswap.h"
 #include "qemu/bitops.h"
 #include "sysemu/numa.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
+#include "qemu/range.h"
+#include "hw/pci/pci_bridge.h"
 
 static GArray *build_alloc_array(void)
 {
@@ -1597,6 +1601,500 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
     g_array_free(tables->vmgenid, mfre);
 }
 
+static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
+{
+    CrsRangeEntry *entry;
+
+    entry = g_malloc(sizeof(*entry));
+    entry->base = base;
+    entry->limit = limit;
+
+    g_ptr_array_add(ranges, entry);
+}
+
+static void crs_range_free(gpointer data)
+{
+    CrsRangeEntry *entry = (CrsRangeEntry *)data;
+    g_free(entry);
+}
+
+void crs_range_set_init(CrsRangeSet *range_set)
+{
+    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+    range_set->mem_64bit_ranges =
+            g_ptr_array_new_with_free_func(crs_range_free);
+}
+
+void crs_range_set_free(CrsRangeSet *range_set)
+{
+    g_ptr_array_free(range_set->io_ranges, true);
+    g_ptr_array_free(range_set->mem_ranges, true);
+    g_ptr_array_free(range_set->mem_64bit_ranges, true);
+}
+
+static gint crs_range_compare(gconstpointer a, gconstpointer b)
+{
+     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
+     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
+
+     return (int64_t)entry_a->base - (int64_t)entry_b->base;
+}
+
+/*
+ * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
+ * interval, computes the 'free' ranges from the same interval.
+ * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
+ * will return { [base - a1], [a2 - b1], [b2 - limit] }.
+ */
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+                                         uint64_t start, uint64_t end)
+{
+    GPtrArray *free_ranges = g_ptr_array_new();
+    uint64_t free_base = start;
+    int i;
+
+    g_ptr_array_sort(ranges, crs_range_compare);
+    for (i = 0; i < ranges->len; i++) {
+        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
+
+        if (free_base < used->base) {
+            crs_range_insert(free_ranges, free_base, used->base - 1);
+        }
+
+        free_base = used->limit + 1;
+    }
+
+    if (free_base < end) {
+        crs_range_insert(free_ranges, free_base, end);
+    }
+
+    g_ptr_array_set_size(ranges, 0);
+    for (i = 0; i < free_ranges->len; i++) {
+        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
+    }
+
+    g_ptr_array_free(free_ranges, true);
+}
+
+/*
+ * crs_range_merge - merges adjacent ranges in the given array.
+ * Array elements are deleted and replaced with the merged ranges.
+ */
+static void crs_range_merge(GPtrArray *range)
+{
+    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
+    CrsRangeEntry *entry;
+    uint64_t range_base, range_limit;
+    int i;
+
+    if (!range->len) {
+        return;
+    }
+
+    g_ptr_array_sort(range, crs_range_compare);
+
+    entry = g_ptr_array_index(range, 0);
+    range_base = entry->base;
+    range_limit = entry->limit;
+    for (i = 1; i < range->len; i++) {
+        entry = g_ptr_array_index(range, i);
+        if (entry->base - 1 == range_limit) {
+            range_limit = entry->limit;
+        } else {
+            crs_range_insert(tmp, range_base, range_limit);
+            range_base = entry->base;
+            range_limit = entry->limit;
+        }
+    }
+    crs_range_insert(tmp, range_base, range_limit);
+
+    g_ptr_array_set_size(range, 0);
+    for (i = 0; i < tmp->len; i++) {
+        entry = g_ptr_array_index(tmp, i);
+        crs_range_insert(range, entry->base, entry->limit);
+    }
+    g_ptr_array_free(tmp, true);
+}
+
+Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
+{
+    Aml *crs = aml_resource_template();
+    CrsRangeSet temp_range_set;
+    CrsRangeEntry *entry;
+    uint8_t max_bus = pci_bus_num(host->bus);
+    uint8_t type;
+    int devfn;
+    int i;
+
+    crs_range_set_init(&temp_range_set);
+    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
+        uint64_t range_base, range_limit;
+        PCIDevice *dev = host->bus->devices[devfn];
+
+        if (!dev) {
+            continue;
+        }
+
+        for (i = 0; i < PCI_NUM_REGIONS; i++) {
+            PCIIORegion *r = &dev->io_regions[i];
+
+            range_base = r->addr;
+            range_limit = r->addr + r->size - 1;
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (!range_base || range_base > range_limit) {
+                continue;
+            }
+
+            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
+                crs_range_insert(temp_range_set.io_ranges,
+                                 range_base, range_limit);
+            } else { /* "memory" */
+                crs_range_insert(temp_range_set.mem_ranges,
+                                 range_base, range_limit);
+            }
+        }
+
+        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
+        if (type == PCI_HEADER_TYPE_BRIDGE) {
+            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
+            if (subordinate > max_bus) {
+                max_bus = subordinate;
+            }
+
+            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
+            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                crs_range_insert(temp_range_set.io_ranges,
+                                 range_base, range_limit);
+            }
+
+            range_base =
+                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
+            range_limit =
+                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                uint64_t length = range_limit - range_base + 1;
+                if (range_limit <= UINT32_MAX && length <= UINT32_MAX)  {
+                    crs_range_insert(temp_range_set.mem_ranges,
+                                     range_base, range_limit);
+                } else {
+                    crs_range_insert(temp_range_set.mem_64bit_ranges,
+                                     range_base, range_limit);
+                }
+            }
+
+            range_base =
+                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
+            range_limit =
+                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
+
+            /*
+             * Work-around for old bioses
+             * that do not support multiple root buses
+             */
+            if (range_base && range_base <= range_limit) {
+                uint64_t length = range_limit - range_base + 1;
+                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
+                    crs_range_insert(temp_range_set.mem_ranges,
+                                     range_base, range_limit);
+                } else {
+                    crs_range_insert(temp_range_set.mem_64bit_ranges,
+                                     range_base, range_limit);
+                }
+            }
+        }
+    }
+
+    crs_range_merge(temp_range_set.io_ranges);
+    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
+        aml_append(crs,
+                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                               AML_POS_DECODE, AML_ENTIRE_RANGE,
+                               0, entry->base, entry->limit, 0,
+                               entry->limit - entry->base + 1));
+        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
+    }
+
+    crs_range_merge(temp_range_set.mem_ranges);
+    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
+        aml_append(crs,
+                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
+                                    AML_READ_WRITE,
+                                    0, entry->base, entry->limit, 0,
+                                    entry->limit - entry->base + 1));
+        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
+    }
+
+    crs_range_merge(temp_range_set.mem_64bit_ranges);
+    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
+        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
+        aml_append(crs,
+                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
+                                    AML_READ_WRITE,
+                                    0, entry->base, entry->limit, 0,
+                                    entry->limit - entry->base + 1));
+        crs_range_insert(range_set->mem_64bit_ranges,
+                         entry->base, entry->limit);
+    }
+
+    crs_range_set_free(&temp_range_set);
+
+    aml_append(crs,
+        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
+                            0,
+                            pci_bus_num(host->bus),
+                            max_bus,
+                            0,
+                            max_bus - pci_bus_num(host->bus) + 1));
+
+    return crs;
+}
+
+Aml *build_osc_method(void)
+{
+    Aml *if_ctx;
+    Aml *if_ctx2;
+    Aml *else_ctx;
+    Aml *method;
+    Aml *a_cwd1 = aml_name("CDW1");
+    Aml *a_ctrl = aml_local(0);
+
+    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
+    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
+
+    if_ctx = aml_if(aml_equal(
+        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
+    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
+    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
+
+    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
+
+    /*
+     * Always allow native PME, AER (no dependencies)
+     * Allow SHPC (PCI bridges can have SHPC controller)
+     */
+    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
+
+    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
+    /* Unknown revision */
+    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
+    aml_append(if_ctx, if_ctx2);
+
+    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
+    /* Capabilities bits were masked */
+    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
+    aml_append(if_ctx, if_ctx2);
+
+    /* Update DWORD3 in the buffer */
+    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
+    aml_append(method, if_ctx);
+
+    else_ctx = aml_else();
+    /* Unrecognized UUID */
+    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
+    aml_append(method, else_ctx);
+
+    aml_append(method, aml_return(aml_arg(3)));
+    return method;
+}
+
+void
+build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
+{
+    AcpiTableMcfg *mcfg;
+    const char *sig;
+    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
+
+    mcfg = acpi_data_push(table_data, len);
+    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
+    /* Only a single allocation so no need to play with segments */
+    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
+    mcfg->allocation[0].start_bus_number = 0;
+    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
+
+    /* MCFG is used for ECAM which can be enabled or disabled by guest.
+     * To avoid table size changes (which create migration issues),
+     * always create the table even if there are no allocations,
+     * but set the signature to a reserved value in this case.
+     * ACPI spec requires OSPMs to ignore such tables.
+     */
+    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
+        /* Reserved signature: ignored by OSPM */
+        sig = "QEMU";
+    } else {
+        sig = "MCFG";
+    }
+    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
+}
+
+Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
+{
+    Aml *dev;
+    Aml *crs;
+    Aml *method;
+    uint32_t irqs;
+
+    dev = aml_device("%s", name);
+    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
+    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
+
+    crs = aml_resource_template();
+    irqs = gsi;
+    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
+                                  AML_SHARED, &irqs, 1));
+    aml_append(dev, aml_name_decl("_PRS", crs));
+
+    aml_append(dev, aml_name_decl("_CRS", crs));
+
+    /*
+     * _DIS can be no-op because the interrupt cannot be disabled.
+     */
+    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
+    aml_append(dev, method);
+
+    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
+    aml_append(dev, method);
+
+    return dev;
+}
+
+/**
+ * build_prt_entry:
+ * @link_name: link name for PCI route entry
+ *
+ * build AML package containing a PCI route entry for @link_name
+ */
+static Aml *build_prt_entry(const char *link_name)
+{
+    Aml *a_zero = aml_int(0);
+    Aml *pkg = aml_package(4);
+    aml_append(pkg, a_zero);
+    aml_append(pkg, a_zero);
+    aml_append(pkg, aml_name("%s", link_name));
+    aml_append(pkg, a_zero);
+    return pkg;
+}
+
+/*
+ * initialize_route - Initialize the interrupt routing rule
+ * through a specific LINK:
+ *  if (lnk_idx == idx)
+ *      route using link 'link_name'
+ */
+static Aml *initialize_route(Aml *route, const char *link_name,
+                             Aml *lnk_idx, int idx)
+{
+    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
+    Aml *pkg = build_prt_entry(link_name);
+
+    aml_append(if_ctx, aml_store(pkg, route));
+
+    return if_ctx;
+}
+
+/*
+ * build_prt - Define interrupt rounting rules
+ *
+ * Returns an array of 128 routes, one for each device,
+ * based on device location.
+ * The main goal is to equaly distribute the interrupts
+ * over the 4 existing ACPI links (works only for i440fx).
+ * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
+ *
+ */
+Aml *build_prt(bool is_pci0_prt)
+{
+    Aml *method, *while_ctx, *pin, *res;
+
+    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
+    res = aml_local(0);
+    pin = aml_local(1);
+    aml_append(method, aml_store(aml_package(128), res));
+    aml_append(method, aml_store(aml_int(0), pin));
+
+    /* while (pin < 128) */
+    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
+    {
+        Aml *slot = aml_local(2);
+        Aml *lnk_idx = aml_local(3);
+        Aml *route = aml_local(4);
+
+        /* slot = pin >> 2 */
+        aml_append(while_ctx,
+                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
+        /* lnk_idx = (slot + pin) & 3 */
+        aml_append(while_ctx,
+            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
+                      lnk_idx));
+
+        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
+        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
+        if (is_pci0_prt) {
+            Aml *if_device_1, *if_pin_4, *else_pin_4;
+
+            /* device 1 is the power-management device, needs SCI */
+            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
+            {
+                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
+                {
+                    aml_append(if_pin_4,
+                        aml_store(build_prt_entry("LNKS"), route));
+                }
+                aml_append(if_device_1, if_pin_4);
+                else_pin_4 = aml_else();
+                {
+                    aml_append(else_pin_4,
+                        aml_store(build_prt_entry("LNKA"), route));
+                }
+                aml_append(if_device_1, else_pin_4);
+            }
+            aml_append(while_ctx, if_device_1);
+        } else {
+            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
+        }
+        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
+        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
+
+        /* route[0] = 0x[slot]FFFF */
+        aml_append(while_ctx,
+            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
+                             NULL),
+                      aml_index(route, aml_int(0))));
+        /* route[1] = pin & 3 */
+        aml_append(while_ctx,
+            aml_store(aml_and(pin, aml_int(3), NULL),
+                      aml_index(route, aml_int(1))));
+        /* res[pin] = route */
+        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
+        /* pin++ */
+        aml_append(while_ctx, aml_increment(pin));
+    }
+    aml_append(method, while_ctx);
+    /* return res*/
+    aml_append(method, aml_return(res));
+
+    return method;
+}
+
 /* Build rsdt table */
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 261363e20c..c9b4916ba7 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -545,7 +545,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 }
 
 static void
-build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
+virt_build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
     AcpiTableMcfg *mcfg;
     const MemMapEntry *memmap = vms->memmap;
@@ -790,7 +790,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
     build_gtdt(tables_blob, tables->linker, vms);
 
     acpi_add_table(table_offsets, tables_blob);
-    build_mcfg(tables_blob, tables->linker, vms);
+    virt_build_mcfg(tables_blob, tables->linker, vms);
 
     acpi_add_table(table_offsets, tables_blob);
     build_spcr(tables_blob, tables->linker, vms);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index cfc2444d0d..996d8a11dc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -27,7 +27,6 @@
 #include "qemu-common.h"
 #include "qemu/bitmap.h"
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
 #include "qom/cpu.h"
 #include "target/i386/cpu.h"
 #include "hw/misc/pvpanic.h"
@@ -53,7 +52,6 @@
 #include "hw/acpi/piix4.h"
 #include "hw/acpi/pcihp.h"
 #include "hw/i386/ich9.h"
-#include "hw/pci/pci_bus.h"
 #include "hw/pci-host/q35.h"
 #include "hw/i386/x86-iommu.h"
 
@@ -86,11 +84,6 @@
 /* Default IOAPIC ID */
 #define ACPI_BUILD_IOAPIC_ID 0x0
 
-typedef struct AcpiMcfgInfo {
-    uint64_t mcfg_base;
-    uint32_t mcfg_size;
-} AcpiMcfgInfo;
-
 typedef struct AcpiPmInfo {
     bool s3_disabled;
     bool s4_disabled;
@@ -567,403 +560,6 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
     qobject_unref(bsel);
 }
 
-/**
- * build_prt_entry:
- * @link_name: link name for PCI route entry
- *
- * build AML package containing a PCI route entry for @link_name
- */
-static Aml *build_prt_entry(const char *link_name)
-{
-    Aml *a_zero = aml_int(0);
-    Aml *pkg = aml_package(4);
-    aml_append(pkg, a_zero);
-    aml_append(pkg, a_zero);
-    aml_append(pkg, aml_name("%s", link_name));
-    aml_append(pkg, a_zero);
-    return pkg;
-}
-
-/*
- * initialize_route - Initialize the interrupt routing rule
- * through a specific LINK:
- *  if (lnk_idx == idx)
- *      route using link 'link_name'
- */
-static Aml *initialize_route(Aml *route, const char *link_name,
-                             Aml *lnk_idx, int idx)
-{
-    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
-    Aml *pkg = build_prt_entry(link_name);
-
-    aml_append(if_ctx, aml_store(pkg, route));
-
-    return if_ctx;
-}
-
-/*
- * build_prt - Define interrupt rounting rules
- *
- * Returns an array of 128 routes, one for each device,
- * based on device location.
- * The main goal is to equaly distribute the interrupts
- * over the 4 existing ACPI links (works only for i440fx).
- * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
- *
- */
-static Aml *build_prt(bool is_pci0_prt)
-{
-    Aml *method, *while_ctx, *pin, *res;
-
-    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
-    res = aml_local(0);
-    pin = aml_local(1);
-    aml_append(method, aml_store(aml_package(128), res));
-    aml_append(method, aml_store(aml_int(0), pin));
-
-    /* while (pin < 128) */
-    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
-    {
-        Aml *slot = aml_local(2);
-        Aml *lnk_idx = aml_local(3);
-        Aml *route = aml_local(4);
-
-        /* slot = pin >> 2 */
-        aml_append(while_ctx,
-                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
-        /* lnk_idx = (slot + pin) & 3 */
-        aml_append(while_ctx,
-            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
-                      lnk_idx));
-
-        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
-        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
-        if (is_pci0_prt) {
-            Aml *if_device_1, *if_pin_4, *else_pin_4;
-
-            /* device 1 is the power-management device, needs SCI */
-            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
-            {
-                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
-                {
-                    aml_append(if_pin_4,
-                        aml_store(build_prt_entry("LNKS"), route));
-                }
-                aml_append(if_device_1, if_pin_4);
-                else_pin_4 = aml_else();
-                {
-                    aml_append(else_pin_4,
-                        aml_store(build_prt_entry("LNKA"), route));
-                }
-                aml_append(if_device_1, else_pin_4);
-            }
-            aml_append(while_ctx, if_device_1);
-        } else {
-            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
-        }
-        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
-        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
-
-        /* route[0] = 0x[slot]FFFF */
-        aml_append(while_ctx,
-            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
-                             NULL),
-                      aml_index(route, aml_int(0))));
-        /* route[1] = pin & 3 */
-        aml_append(while_ctx,
-            aml_store(aml_and(pin, aml_int(3), NULL),
-                      aml_index(route, aml_int(1))));
-        /* res[pin] = route */
-        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
-        /* pin++ */
-        aml_append(while_ctx, aml_increment(pin));
-    }
-    aml_append(method, while_ctx);
-    /* return res*/
-    aml_append(method, aml_return(res));
-
-    return method;
-}
-
-typedef struct CrsRangeEntry {
-    uint64_t base;
-    uint64_t limit;
-} CrsRangeEntry;
-
-static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
-{
-    CrsRangeEntry *entry;
-
-    entry = g_malloc(sizeof(*entry));
-    entry->base = base;
-    entry->limit = limit;
-
-    g_ptr_array_add(ranges, entry);
-}
-
-static void crs_range_free(gpointer data)
-{
-    CrsRangeEntry *entry = (CrsRangeEntry *)data;
-    g_free(entry);
-}
-
-typedef struct CrsRangeSet {
-    GPtrArray *io_ranges;
-    GPtrArray *mem_ranges;
-    GPtrArray *mem_64bit_ranges;
- } CrsRangeSet;
-
-static void crs_range_set_init(CrsRangeSet *range_set)
-{
-    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
-    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
-    range_set->mem_64bit_ranges =
-            g_ptr_array_new_with_free_func(crs_range_free);
-}
-
-static void crs_range_set_free(CrsRangeSet *range_set)
-{
-    g_ptr_array_free(range_set->io_ranges, true);
-    g_ptr_array_free(range_set->mem_ranges, true);
-    g_ptr_array_free(range_set->mem_64bit_ranges, true);
-}
-
-static gint crs_range_compare(gconstpointer a, gconstpointer b)
-{
-     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
-     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
-
-     return (int64_t)entry_a->base - (int64_t)entry_b->base;
-}
-
-/*
- * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
- * interval, computes the 'free' ranges from the same interval.
- * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
- * will return { [base - a1], [a2 - b1], [b2 - limit] }.
- */
-static void crs_replace_with_free_ranges(GPtrArray *ranges,
-                                         uint64_t start, uint64_t end)
-{
-    GPtrArray *free_ranges = g_ptr_array_new();
-    uint64_t free_base = start;
-    int i;
-
-    g_ptr_array_sort(ranges, crs_range_compare);
-    for (i = 0; i < ranges->len; i++) {
-        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
-
-        if (free_base < used->base) {
-            crs_range_insert(free_ranges, free_base, used->base - 1);
-        }
-
-        free_base = used->limit + 1;
-    }
-
-    if (free_base < end) {
-        crs_range_insert(free_ranges, free_base, end);
-    }
-
-    g_ptr_array_set_size(ranges, 0);
-    for (i = 0; i < free_ranges->len; i++) {
-        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
-    }
-
-    g_ptr_array_free(free_ranges, true);
-}
-
-/*
- * crs_range_merge - merges adjacent ranges in the given array.
- * Array elements are deleted and replaced with the merged ranges.
- */
-static void crs_range_merge(GPtrArray *range)
-{
-    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
-    CrsRangeEntry *entry;
-    uint64_t range_base, range_limit;
-    int i;
-
-    if (!range->len) {
-        return;
-    }
-
-    g_ptr_array_sort(range, crs_range_compare);
-
-    entry = g_ptr_array_index(range, 0);
-    range_base = entry->base;
-    range_limit = entry->limit;
-    for (i = 1; i < range->len; i++) {
-        entry = g_ptr_array_index(range, i);
-        if (entry->base - 1 == range_limit) {
-            range_limit = entry->limit;
-        } else {
-            crs_range_insert(tmp, range_base, range_limit);
-            range_base = entry->base;
-            range_limit = entry->limit;
-        }
-    }
-    crs_range_insert(tmp, range_base, range_limit);
-
-    g_ptr_array_set_size(range, 0);
-    for (i = 0; i < tmp->len; i++) {
-        entry = g_ptr_array_index(tmp, i);
-        crs_range_insert(range, entry->base, entry->limit);
-    }
-    g_ptr_array_free(tmp, true);
-}
-
-static Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
-{
-    Aml *crs = aml_resource_template();
-    CrsRangeSet temp_range_set;
-    CrsRangeEntry *entry;
-    uint8_t max_bus = pci_bus_num(host->bus);
-    uint8_t type;
-    int devfn;
-    int i;
-
-    crs_range_set_init(&temp_range_set);
-    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
-        uint64_t range_base, range_limit;
-        PCIDevice *dev = host->bus->devices[devfn];
-
-        if (!dev) {
-            continue;
-        }
-
-        for (i = 0; i < PCI_NUM_REGIONS; i++) {
-            PCIIORegion *r = &dev->io_regions[i];
-
-            range_base = r->addr;
-            range_limit = r->addr + r->size - 1;
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (!range_base || range_base > range_limit) {
-                continue;
-            }
-
-            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
-                crs_range_insert(temp_range_set.io_ranges,
-                                 range_base, range_limit);
-            } else { /* "memory" */
-                crs_range_insert(temp_range_set.mem_ranges,
-                                 range_base, range_limit);
-            }
-        }
-
-        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
-        if (type == PCI_HEADER_TYPE_BRIDGE) {
-            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
-            if (subordinate > max_bus) {
-                max_bus = subordinate;
-            }
-
-            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
-            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                crs_range_insert(temp_range_set.io_ranges,
-                                 range_base, range_limit);
-            }
-
-            range_base =
-                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
-            range_limit =
-                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                uint64_t length = range_limit - range_base + 1;
-                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
-                    crs_range_insert(temp_range_set.mem_ranges,
-                                     range_base, range_limit);
-                } else {
-                    crs_range_insert(temp_range_set.mem_64bit_ranges,
-                                     range_base, range_limit);
-                }
-            }
-
-            range_base =
-                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
-            range_limit =
-                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
-
-            /*
-             * Work-around for old bioses
-             * that do not support multiple root buses
-             */
-            if (range_base && range_base <= range_limit) {
-                uint64_t length = range_limit - range_base + 1;
-                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
-                    crs_range_insert(temp_range_set.mem_ranges,
-                                     range_base, range_limit);
-                } else {
-                    crs_range_insert(temp_range_set.mem_64bit_ranges,
-                                     range_base, range_limit);
-                }
-            }
-        }
-    }
-
-    crs_range_merge(temp_range_set.io_ranges);
-    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
-        aml_append(crs,
-                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                               AML_POS_DECODE, AML_ENTIRE_RANGE,
-                               0, entry->base, entry->limit, 0,
-                               entry->limit - entry->base + 1));
-        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
-    }
-
-    crs_range_merge(temp_range_set.mem_ranges);
-    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
-        aml_append(crs,
-                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
-                                    AML_READ_WRITE,
-                                    0, entry->base, entry->limit, 0,
-                                    entry->limit - entry->base + 1));
-        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
-    }
-
-    crs_range_merge(temp_range_set.mem_64bit_ranges);
-    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
-        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
-        aml_append(crs,
-                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
-                                    AML_READ_WRITE,
-                                    0, entry->base, entry->limit, 0,
-                                    entry->limit - entry->base + 1));
-        crs_range_insert(range_set->mem_64bit_ranges,
-                         entry->base, entry->limit);
-    }
-
-    crs_range_set_free(&temp_range_set);
-
-    aml_append(crs,
-        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
-                            0,
-                            pci_bus_num(host->bus),
-                            max_bus,
-                            0,
-                            max_bus - pci_bus_num(host->bus) + 1));
-
-    return crs;
-}
-
 static void build_hpet_aml(Aml *table)
 {
     Aml *crs;
@@ -1334,37 +930,6 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
     return dev;
  }
 
-static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
-{
-    Aml *dev;
-    Aml *crs;
-    Aml *method;
-    uint32_t irqs;
-
-    dev = aml_device("%s", name);
-    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
-    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
-
-    crs = aml_resource_template();
-    irqs = gsi;
-    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
-                                  AML_SHARED, &irqs, 1));
-    aml_append(dev, aml_name_decl("_PRS", crs));
-
-    aml_append(dev, aml_name_decl("_CRS", crs));
-
-    /*
-     * _DIS can be no-op because the interrupt cannot be disabled.
-     */
-    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
-    aml_append(dev, method);
-
-    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
-    aml_append(dev, method);
-
-    return dev;
-}
-
 /* _CRS method - get current settings */
 static Aml *build_iqcr_method(bool is_piix4)
 {
@@ -1728,54 +1293,6 @@ static void build_piix4_pci_hotplug(Aml *table)
     aml_append(table, scope);
 }
 
-static Aml *build_q35_osc_method(void)
-{
-    Aml *if_ctx;
-    Aml *if_ctx2;
-    Aml *else_ctx;
-    Aml *method;
-    Aml *a_cwd1 = aml_name("CDW1");
-    Aml *a_ctrl = aml_local(0);
-
-    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
-    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
-
-    if_ctx = aml_if(aml_equal(
-        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
-
-    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
-
-    /*
-     * Always allow native PME, AER (no dependencies)
-     * Allow SHPC (PCI bridges can have SHPC controller)
-     */
-    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
-    /* Unknown revision */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
-    /* Capabilities bits were masked */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    /* Update DWORD3 in the buffer */
-    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
-    aml_append(method, if_ctx);
-
-    else_ctx = aml_else();
-    /* Unrecognized UUID */
-    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
-    aml_append(method, else_ctx);
-
-    aml_append(method, aml_return(aml_arg(3)));
-    return method;
-}
-
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
@@ -1818,7 +1335,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
         aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
         aml_append(dev, aml_name_decl("_UID", aml_int(1)));
-        aml_append(dev, build_q35_osc_method());
+        aml_append(dev, build_osc_method());
         aml_append(sb_scope, dev);
         aml_append(dsdt, sb_scope);
 
@@ -1883,7 +1400,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
             if (pci_bus_is_express(bus)) {
-                aml_append(dev, build_q35_osc_method());
+                aml_append(dev, build_osc_method());
             }
 
             if (numa_node != NUMA_NODE_UNASSIGNED) {
@@ -2370,35 +1887,6 @@ build_srat(GArray *table_data, BIOSLinker *linker,
                  table_data->len - srat_start, 1, NULL, NULL);
 }
 
-static void
-build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
-{
-    AcpiTableMcfg *mcfg;
-    const char *sig;
-    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
-
-    mcfg = acpi_data_push(table_data, len);
-    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
-    /* Only a single allocation so no need to play with segments */
-    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
-    mcfg->allocation[0].start_bus_number = 0;
-    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
-
-    /* MCFG is used for ECAM which can be enabled or disabled by guest.
-     * To avoid table size changes (which create migration issues),
-     * always create the table even if there are no allocations,
-     * but set the signature to a reserved value in this case.
-     * ACPI spec requires OSPMs to ignore such tables.
-     */
-    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
-        /* Reserved signature: ignored by OSPM */
-        sig = "QEMU";
-    } else {
-        sig = "MCFG";
-    }
-    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
-}
-
 /*
  * VT-d spec 8.1 DMA Remapping Reporting Structure
  * (version Oct. 2014 or later)
@@ -2626,7 +2114,7 @@ void acpi_build(AcpiBuildTables *tables,
     }
     if (acpi_get_mcfg(&mcfg)) {
         acpi_add_table(table_offsets, tables_blob);
-        build_mcfg_q35(tables_blob, tables->linker, &mcfg);
+        build_mcfg(tables_blob, tables->linker, &mcfg);
     }
     if (x86_iommu_get_default()) {
         IommuType IOMMUType = x86_iommu_get_type();
-- 
2.19.1



* [Qemu-devel] [PATCH v5 08/24] hw: acpi: Factorize _OSC AML across architectures
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong

From: Yang Zhong <yang.zhong@intel.com>

The _OSC AML method is almost identical between the i386 Q35 and arm virt
machine types. We can make it slightly more generic and share it across
all PCIe architectures.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
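A minimal usage sketch of the generalized helper, based on the hunks
below (the build_pcie_host_device_example() wrapper and the "PCI0"
device name are illustrative only, not part of this patch):

    static Aml *build_pcie_host_device_example(void)
    {
        Aml *dev = aml_device("PCI0");

        /* the shared _OSC stores the CDW2/CDW3 request words here */
        aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
        aml_append(dev, aml_name_decl("CTRL", aml_int(0)));

        /* Q35-like machine: grant every PCIe control bit */
        aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));

        return dev;
    }

An arm/virt-like machine without an SHPC controller would instead pass
ACPI_OSC_CTRL_PCI_ALL & ~ACPI_OSC_CTRL_SHPC_NATIVE_HP, as the
hw/arm/virt-acpi-build.c hunk below does.
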
 include/hw/acpi/acpi-defs.h | 14 +++++++
 include/hw/acpi/aml-build.h |  2 +-
 hw/acpi/aml-build.c         | 84 +++++++++++++++++++------------------
 hw/arm/virt-acpi-build.c    | 45 ++------------------
 hw/i386/acpi-build.c        |  6 ++-
 5 files changed, 66 insertions(+), 85 deletions(-)

diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index af8e023968..6e1726e0a2 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -652,4 +652,18 @@ struct AcpiIortRC {
 } QEMU_PACKED;
 typedef struct AcpiIortRC AcpiIortRC;
 
+/* _OSC */
+
+#define ACPI_OSC_CTRL_PCIE_NATIVE_HP (1 << 0)
+#define ACPI_OSC_CTRL_SHPC_NATIVE_HP (1 << 1)
+#define ACPI_OSC_CTRL_PCIE_PM_EVT    (1 << 2)
+#define ACPI_OSC_CTRL_PCIE_AER       (1 << 3)
+#define ACPI_OSC_CTRL_PCIE_CAP_CTRL  (1 << 4)
+#define ACPI_OSC_CTRL_PCI_ALL \
+    (ACPI_OSC_CTRL_PCIE_NATIVE_HP |             \
+     ACPI_OSC_CTRL_SHPC_NATIVE_HP |             \
+     ACPI_OSC_CTRL_PCIE_PM_EVT |                \
+     ACPI_OSC_CTRL_PCIE_AER |                   \
+     ACPI_OSC_CTRL_PCIE_CAP_CTRL)
+
 #endif
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 4f678c45a5..c27c0935ae 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -405,7 +405,7 @@ void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
-Aml *build_osc_method(void);
+Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index d3242c6b31..2b9a636e75 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1869,51 +1869,55 @@ Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
     return crs;
 }
 
-Aml *build_osc_method(void)
+/*
+ * ctrl_mask is the _OSC capabilities buffer control field mask.
+ */
+Aml *build_osc_method(uint32_t ctrl_mask)
 {
-    Aml *if_ctx;
-    Aml *if_ctx2;
-    Aml *else_ctx;
-    Aml *method;
-    Aml *a_cwd1 = aml_name("CDW1");
-    Aml *a_ctrl = aml_local(0);
+    Aml *ifctx, *ifctx1, *elsectx, *method, *UUID;
 
     method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
-    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
-
-    if_ctx = aml_if(aml_equal(
-        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
-    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
-
-    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
-
-    /*
-     * Always allow native PME, AER (no dependencies)
-     * Allow SHPC (PCI bridges can have SHPC controller)
+    aml_append(method,
+        aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
+
+    /* PCI Firmware Specification 3.0
+     * 4.5.1. _OSC Interface for PCI Host Bridge Devices
+     * The _OSC interface for a PCI/PCI-X/PCI Express hierarchy is
+     * identified by the Universal Unique IDentifier (UUID)
+     * 33DB4D5B-1FF7-401C-9657-7441C03DD766
      */
-    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
-    /* Unknown revision */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
-    /* Capabilities bits were masked */
-    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
-    aml_append(if_ctx, if_ctx2);
-
-    /* Update DWORD3 in the buffer */
-    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
-    aml_append(method, if_ctx);
-
-    else_ctx = aml_else();
-    /* Unrecognized UUID */
-    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
-    aml_append(method, else_ctx);
+    UUID = aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766");
+    ifctx = aml_if(aml_equal(aml_arg(0), UUID));
+    aml_append(ifctx,
+        aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
+    aml_append(ifctx,
+        aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
+    aml_append(ifctx, aml_store(aml_name("CDW2"), aml_name("SUPP")));
+    aml_append(ifctx, aml_store(aml_name("CDW3"), aml_name("CTRL")));
+    aml_append(ifctx, aml_store(aml_and(aml_name("CTRL"),
+                                        aml_int(ctrl_mask), NULL),
+                                aml_name("CTRL")));
+
+    ifctx1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
+    aml_append(ifctx1, aml_store(aml_or(aml_name("CDW1"), aml_int(0x08), NULL),
+                                 aml_name("CDW1")));
+    aml_append(ifctx, ifctx1);
+
+    ifctx1 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), aml_name("CTRL"))));
+    aml_append(ifctx1, aml_store(aml_or(aml_name("CDW1"), aml_int(0x10), NULL),
+                                 aml_name("CDW1")));
+    aml_append(ifctx, ifctx1);
+
+    aml_append(ifctx, aml_store(aml_name("CTRL"), aml_name("CDW3")));
+    aml_append(ifctx, aml_return(aml_arg(3)));
+    aml_append(method, ifctx);
+
+    elsectx = aml_else();
+    aml_append(elsectx, aml_store(aml_or(aml_name("CDW1"), aml_int(4), NULL),
+                                  aml_name("CDW1")));
+    aml_append(elsectx, aml_return(aml_arg(3)));
+    aml_append(method, elsectx);
 
-    aml_append(method, aml_return(aml_arg(3)));
     return method;
 }
 
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index c9b4916ba7..b5e165543a 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -153,7 +153,7 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
                               uint32_t irq, bool use_highmem, bool highmem_ecam)
 {
     int ecam_id = VIRT_ECAM_ID(highmem_ecam);
-    Aml *method, *crs, *ifctx, *UUID, *ifctx1, *elsectx, *buf;
+    Aml *method, *crs, *ifctx, *UUID, *ifctx1, *buf;
     int i, bus_no;
     hwaddr base_mmio = memmap[VIRT_PCIE_MMIO].base;
     hwaddr size_mmio = memmap[VIRT_PCIE_MMIO].size;
@@ -247,47 +247,8 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     /* Declare an _OSC (OS Control Handoff) method */
     aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
     aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
-    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
-    aml_append(method,
-        aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
-
-    /* PCI Firmware Specification 3.0
-     * 4.5.1. _OSC Interface for PCI Host Bridge Devices
-     * The _OSC interface for a PCI/PCI-X/PCI Express hierarchy is
-     * identified by the Universal Unique IDentifier (UUID)
-     * 33DB4D5B-1FF7-401C-9657-7441C03DD766
-     */
-    UUID = aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766");
-    ifctx = aml_if(aml_equal(aml_arg(0), UUID));
-    aml_append(ifctx,
-        aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
-    aml_append(ifctx,
-        aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
-    aml_append(ifctx, aml_store(aml_name("CDW2"), aml_name("SUPP")));
-    aml_append(ifctx, aml_store(aml_name("CDW3"), aml_name("CTRL")));
-    aml_append(ifctx, aml_store(aml_and(aml_name("CTRL"), aml_int(0x1D), NULL),
-                                aml_name("CTRL")));
-
-    ifctx1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
-    aml_append(ifctx1, aml_store(aml_or(aml_name("CDW1"), aml_int(0x08), NULL),
-                                 aml_name("CDW1")));
-    aml_append(ifctx, ifctx1);
-
-    ifctx1 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), aml_name("CTRL"))));
-    aml_append(ifctx1, aml_store(aml_or(aml_name("CDW1"), aml_int(0x10), NULL),
-                                 aml_name("CDW1")));
-    aml_append(ifctx, ifctx1);
-
-    aml_append(ifctx, aml_store(aml_name("CTRL"), aml_name("CDW3")));
-    aml_append(ifctx, aml_return(aml_arg(3)));
-    aml_append(method, ifctx);
-
-    elsectx = aml_else();
-    aml_append(elsectx, aml_store(aml_or(aml_name("CDW1"), aml_int(4), NULL),
-                                  aml_name("CDW1")));
-    aml_append(elsectx, aml_return(aml_arg(3)));
-    aml_append(method, elsectx);
-    aml_append(dev, method);
+    aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL &
+                                     ~ACPI_OSC_CTRL_SHPC_NATIVE_HP));
 
     method = aml_method("_DSM", 4, AML_NOTSERIALIZED);
 
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 996d8a11dc..bd147a6bd2 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1335,7 +1335,9 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
         aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
         aml_append(dev, aml_name_decl("_UID", aml_int(1)));
-        aml_append(dev, build_osc_method());
+        aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+        aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+        aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
         aml_append(sb_scope, dev);
         aml_append(dsdt, sb_scope);
 
@@ -1400,7 +1402,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
             if (pci_bus_is_express(bus)) {
-                aml_append(dev, build_osc_method());
+                aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
             }
 
             if (numa_node != NUMA_NODE_UNASSIGNED) {
-- 
2.19.1

* [Qemu-devel] [PATCH v5 09/24] hw: i386: Move PCI host definitions to pci_host.h
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

The PCI hole properties are not pc or i386 specific. They belong in the
PCI host header instead.

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
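For illustration (hypothetical helper, not part of this patch): with the
property names in pci_host.h, any code holding a PCI host bridge object
can query the hole bounds through QOM without including hw/i386/pc.h:

    #include "qemu/osdep.h"
    #include "qom/object.h"
    #include "hw/pci/pci_host.h"

    /* read the 32-bit PCI hole start from the host bridge properties */
    static uint64_t pci_hole_start_example(Object *pci_host)
    {
        return object_property_get_uint(pci_host,
                                        PCI_HOST_PROP_PCI_HOLE_START,
                                        NULL);
    }
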
 include/hw/i386/pc.h      | 5 -----
 include/hw/pci/pci_host.h | 6 ++++++
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index fed136fcdd..bbbdb33ea3 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -182,11 +182,6 @@ void pc_acpi_init(const char *default_dsdt);
 
 void pc_guest_info_init(PCMachineState *pcms);
 
-#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
-#define PCI_HOST_PROP_PCI_HOLE_END     "pci-hole-end"
-#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
-#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
-#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
 #define PCI_HOST_BELOW_4G_MEM_SIZE     "below-4g-mem-size"
 #define PCI_HOST_ABOVE_4G_MEM_SIZE     "above-4g-mem-size"
 
diff --git a/include/hw/pci/pci_host.h b/include/hw/pci/pci_host.h
index ba31595fc7..e343f4d9ca 100644
--- a/include/hw/pci/pci_host.h
+++ b/include/hw/pci/pci_host.h
@@ -38,6 +38,12 @@
 #define PCI_HOST_BRIDGE_GET_CLASS(obj) \
      OBJECT_GET_CLASS(PCIHostBridgeClass, (obj), TYPE_PCI_HOST_BRIDGE)
 
+#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
+#define PCI_HOST_PROP_PCI_HOLE_END     "pci-hole-end"
+#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
+#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
+#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
+
 struct PCIHostState {
     SysBusDevice busdev;
 
-- 
2.19.1

* [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

This is going to be needed by the hardware-reduced implementation, so
let's export it.
Once the ACPI builder methods and getters are implemented, the
acpi_get_pci_host() implementation will become hardware agnostic.

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
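As a rough sketch of the intended use (the build_dsdt_generic() name and
its body are hypothetical, not part of this series), a machine-independent
build path could fetch the host bridge and holes like this:

    #include "qemu/osdep.h"
    #include "qemu/range.h"
    #include "hw/pci/pci.h"
    #include "hw/pci/pci_host.h"
    #include "hw/acpi/aml-build.h"

    static void build_dsdt_generic(Aml *dsdt)
    {
        Object *pci_host = acpi_get_pci_host();
        Range pci_hole, pci_hole64;
        PCIBus *bus = NULL;

        if (pci_host) {
            bus = PCI_HOST_BRIDGE(pci_host)->bus;
            acpi_get_pci_holes(&pci_hole, &pci_hole64);
        }

        /* ... pass bus, &pci_hole and &pci_hole64 to the AML builders ... */
    }
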
 include/hw/acpi/aml-build.h |  2 ++
 hw/acpi/aml-build.c         | 43 +++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c        | 47 ++-----------------------------------
 3 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index c27c0935ae..fde2785b9a 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
              const char *oem_id, const char *oem_table_id);
 void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
+Object *acpi_get_pci_host(void);
+void acpi_get_pci_holes(Range *hole, Range *hole64);
 /* Align AML blob size to a multiple of 'align' */
 void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 2b9a636e75..b8e32f15f7 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
     g_array_free(tables->vmgenid, mfre);
 }
 
+/*
+ * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
+ */
+Object *acpi_get_pci_host(void)
+{
+    PCIHostState *host;
+
+    host = OBJECT_CHECK(PCIHostState,
+                        object_resolve_path("/machine/i440fx", NULL),
+                        TYPE_PCI_HOST_BRIDGE);
+    if (!host) {
+        host = OBJECT_CHECK(PCIHostState,
+                            object_resolve_path("/machine/q35", NULL),
+                            TYPE_PCI_HOST_BRIDGE);
+    }
+
+    return OBJECT(host);
+}
+
+
+void acpi_get_pci_holes(Range *hole, Range *hole64)
+{
+    Object *pci_host;
+
+    pci_host = acpi_get_pci_host();
+    g_assert(pci_host);
+
+    range_set_bounds1(hole,
+                      object_property_get_uint(pci_host,
+                                               PCI_HOST_PROP_PCI_HOLE_START,
+                                               NULL),
+                      object_property_get_uint(pci_host,
+                                               PCI_HOST_PROP_PCI_HOLE_END,
+                                               NULL));
+    range_set_bounds1(hole64,
+                      object_property_get_uint(pci_host,
+                                               PCI_HOST_PROP_PCI_HOLE64_START,
+                                               NULL),
+                      object_property_get_uint(pci_host,
+                                               PCI_HOST_PROP_PCI_HOLE64_END,
+                                               NULL));
+}
+
 static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
 {
     CrsRangeEntry *entry;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bd147a6bd2..a5f5f83500 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -232,49 +232,6 @@ static void acpi_get_misc_info(AcpiMiscInfo *info)
     info->applesmc_io_base = applesmc_port();
 }
 
-/*
- * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
- * On i386 arch we only have two pci hosts, so we can look only for them.
- */
-static Object *acpi_get_i386_pci_host(void)
-{
-    PCIHostState *host;
-
-    host = OBJECT_CHECK(PCIHostState,
-                        object_resolve_path("/machine/i440fx", NULL),
-                        TYPE_PCI_HOST_BRIDGE);
-    if (!host) {
-        host = OBJECT_CHECK(PCIHostState,
-                            object_resolve_path("/machine/q35", NULL),
-                            TYPE_PCI_HOST_BRIDGE);
-    }
-
-    return OBJECT(host);
-}
-
-static void acpi_get_pci_holes(Range *hole, Range *hole64)
-{
-    Object *pci_host;
-
-    pci_host = acpi_get_i386_pci_host();
-    g_assert(pci_host);
-
-    range_set_bounds1(hole,
-                      object_property_get_uint(pci_host,
-                                               PCI_HOST_PROP_PCI_HOLE_START,
-                                               NULL),
-                      object_property_get_uint(pci_host,
-                                               PCI_HOST_PROP_PCI_HOLE_END,
-                                               NULL));
-    range_set_bounds1(hole64,
-                      object_property_get_uint(pci_host,
-                                               PCI_HOST_PROP_PCI_HOLE64_START,
-                                               NULL),
-                      object_property_get_uint(pci_host,
-                                               PCI_HOST_PROP_PCI_HOLE64_END,
-                                               NULL));
-}
-
 /* FACS */
 static void
 build_facs(GArray *table_data, BIOSLinker *linker)
@@ -1634,7 +1591,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         Object *pci_host;
         PCIBus *bus = NULL;
 
-        pci_host = acpi_get_i386_pci_host();
+        pci_host = acpi_get_pci_host();
         if (pci_host) {
             bus = PCI_HOST_BRIDGE(pci_host)->bus;
         }
@@ -2008,7 +1965,7 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
     Object *pci_host;
     QObject *o;
 
-    pci_host = acpi_get_i386_pci_host();
+    pci_host = acpi_get_pci_host();
     g_assert(pci_host);
 
     o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
-- 
2.19.1

* [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong, Rob Bradford

From: Yang Zhong <yang.zhong@intel.com>

The AML build routines for the PCI host bridge and the corresponding
DSDT addition are neither x86 nor PC machine type specific.
We can move them to the architecture-agnostic hw/acpi folder, and by
carrying all the needed information through a new AcpiPciBus structure,
we can make them PC machine type independent.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
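A minimal sketch of the intended call site (the add_pci_bus_example()
wrapper is hypothetical, not part of this patch): a machine type fills an
AcpiPciBus with its root bus and PCI hole ranges and lets the shared code
emit the PCI0 device and its _CRS:

    static void add_pci_bus_example(Aml *dsdt, PCIBus *bus,
                                    Range *pci_hole, Range *pci_hole64)
    {
        AcpiPciBus acpi_pci_host = {
            .pci_bus    = bus,
            .pci_hole   = pci_hole,
            .pci_hole64 = pci_hole64,
        };

        acpi_dsdt_add_pci_bus(dsdt, &acpi_pci_host);
    }
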
 include/hw/acpi/aml-build.h |   8 ++
 hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c        | 115 ++------------------------
 3 files changed, 173 insertions(+), 107 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index fde2785b9a..1861e37ebf 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
     uint32_t mcfg_size;
 } AcpiMcfgInfo;
 
+typedef struct AcpiPciBus {
+    PCIBus *pci_bus;
+    Range *pci_hole;
+    Range *pci_hole64;
+} AcpiPciBus;
+
 typedef struct CrsRangeEntry {
     uint64_t base;
     uint64_t limit;
@@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
+void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
 void crs_range_set_init(CrsRangeSet *range_set);
 Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
 void crs_replace_with_free_ranges(GPtrArray *ranges,
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index b8e32f15f7..869ed70db3 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -29,6 +29,19 @@
 #include "hw/pci/pci_bus.h"
 #include "qemu/range.h"
 #include "hw/pci/pci_bridge.h"
+#include "hw/i386/pc.h"
+#include "sysemu/tpm.h"
+#include "hw/acpi/tpm.h"
+
+#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
+#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
+#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
+#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
+#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
+#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
+#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
+#define IO_0_LEN                           0xcf8
+#define VGA_MEM_LEN                        0x20000
 
 static GArray *build_alloc_array(void)
 {
@@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
     return method;
 }
 
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
+{
+    CrsRangeEntry *entry;
+    Aml *scope, *dev, *crs;
+    CrsRangeSet crs_range_set;
+    Range *pci_hole = NULL;
+    Range *pci_hole64 = NULL;
+    PCIBus *bus = NULL;
+    int root_bus_limit = 0xFF;
+    int i;
+
+    bus = pci_host->pci_bus;
+    assert(bus);
+    pci_hole = pci_host->pci_hole;
+    pci_hole64 = pci_host->pci_hole64;
+
+    crs_range_set_init(&crs_range_set);
+    QLIST_FOREACH(bus, &bus->child, sibling) {
+        uint8_t bus_num = pci_bus_num(bus);
+        uint8_t numa_node = pci_bus_numa_node(bus);
+
+        /* look only for expander root buses */
+        if (!pci_bus_is_root(bus)) {
+            continue;
+        }
+
+        if (bus_num < root_bus_limit) {
+            root_bus_limit = bus_num - 1;
+        }
+
+        scope = aml_scope("\\_SB");
+        dev = aml_device("PC%.02X", bus_num);
+        aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
+        aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
+        aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
+        if (pci_bus_is_express(bus)) {
+            aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+            aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+            aml_append(dev, build_osc_method(0x1F));
+        }
+        if (numa_node != NUMA_NODE_UNASSIGNED) {
+            aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
+        }
+
+        aml_append(dev, build_prt(false));
+        crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
+        aml_append(dev, aml_name_decl("_CRS", crs));
+        aml_append(scope, dev);
+        aml_append(table, scope);
+    }
+    scope = aml_scope("\\_SB.PCI0");
+    /* build PCI0._CRS */
+    crs = aml_resource_template();
+    /* set the pcie bus num */
+    aml_append(crs,
+        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
+                            0x0000, 0x0, root_bus_limit,
+                            0x0000, root_bus_limit + 1));
+    aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
+                           PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
+    /* set the io region 0 in pci host bridge */
+    aml_append(crs,
+        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                    AML_POS_DECODE, AML_ENTIRE_RANGE,
+                    0x0000, PCI_HOST_BRIDGE_IO_0_MIN_ADDR,
+                    PCI_HOST_BRIDGE_IO_0_MAX_ADDR, 0x0000, IO_0_LEN));
+
+    /* set the io region 1 in pci host bridge */
+    crs_replace_with_free_ranges(crs_range_set.io_ranges,
+                                 PCI_HOST_BRIDGE_IO_1_MIN_ADDR,
+                                 PCI_HOST_BRIDGE_IO_1_MAX_ADDR);
+    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
+        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
+        aml_append(crs,
+            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                        AML_POS_DECODE, AML_ENTIRE_RANGE,
+                        0x0000, entry->base, entry->limit,
+                        0x0000, entry->limit - entry->base + 1));
+    }
+
+    /* set the vga mem region(0) in pci host bridge */
+    aml_append(crs,
+        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
+                         AML_CACHEABLE, AML_READ_WRITE,
+                         0, PCI_VGA_MEM_BASE_ADDR, PCI_VGA_MEM_MAX_ADDR,
+                         0, VGA_MEM_LEN));
+
+    /* set the mem region 1 in pci host bridge */
+    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
+                                 range_lob(pci_hole),
+                                 range_upb(pci_hole));
+    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
+        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
+        aml_append(crs,
+            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
+                             AML_NON_CACHEABLE, AML_READ_WRITE,
+                             0, entry->base, entry->limit,
+                             0, entry->limit - entry->base + 1));
+    }
+
+    /* set the mem region 2 in pci host bridge */
+    if (!range_is_empty(pci_hole64)) {
+        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
+                                     range_lob(pci_hole64),
+                                     range_upb(pci_hole64));
+        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
+            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
+            aml_append(crs,
+                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                        AML_MAX_FIXED,
+                                        AML_CACHEABLE, AML_READ_WRITE,
+                                        0, entry->base, entry->limit,
+                                        0, entry->limit - entry->base + 1));
+        }
+    }
+
+    if (TPM_IS_TIS(tpm_find())) {
+        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
+                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
+    }
+
+    aml_append(scope, aml_name_decl("_CRS", crs));
+    crs_range_set_free(&crs_range_set);
+    return scope;
+}
+
+void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host)
+{
+    Aml *dev, *pci_scope;
+
+    dev = aml_device("\\_SB.PCI0");
+    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
+    aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
+    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
+    aml_append(dev, aml_name_decl("_UID", aml_int(1)));
+    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+    aml_append(dev, build_osc_method(0x1F));
+    aml_append(dsdt, dev);
+
+    pci_scope = build_pci_host_bridge(dsdt, pci_host);
+    aml_append(dsdt, pci_scope);
+}
+
 /* Build rsdt table */
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a5f5f83500..14e2624d14 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1253,16 +1253,11 @@ static void build_piix4_pci_hotplug(Aml *table)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
-           Range *pci_hole, Range *pci_hole64,
+           AcpiPciBus *pci_host,
            MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    CrsRangeEntry *entry;
     Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
-    CrsRangeSet crs_range_set;
     uint32_t nr_mem = machine->ram_slots;
-    int root_bus_limit = 0xFF;
-    PCIBus *bus = NULL;
-    int i;
 
     dsdt = init_aml_allocator();
 
@@ -1337,104 +1332,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
     }
     aml_append(dsdt, scope);
 
-    crs_range_set_init(&crs_range_set);
-    bus = PC_MACHINE(machine)->bus;
-    if (bus) {
-        QLIST_FOREACH(bus, &bus->child, sibling) {
-            uint8_t bus_num = pci_bus_num(bus);
-            uint8_t numa_node = pci_bus_numa_node(bus);
-
-            /* look only for expander root buses */
-            if (!pci_bus_is_root(bus)) {
-                continue;
-            }
-
-            if (bus_num < root_bus_limit) {
-                root_bus_limit = bus_num - 1;
-            }
-
-            scope = aml_scope("\\_SB");
-            dev = aml_device("PC%.02X", bus_num);
-            aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
-            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
-            aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
-            if (pci_bus_is_express(bus)) {
-                aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
-            }
-
-            if (numa_node != NUMA_NODE_UNASSIGNED) {
-                aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
-            }
-
-            aml_append(dev, build_prt(false));
-            crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
-            aml_append(dev, aml_name_decl("_CRS", crs));
-            aml_append(scope, dev);
-            aml_append(dsdt, scope);
-        }
-    }
-
-    scope = aml_scope("\\_SB.PCI0");
-    /* build PCI0._CRS */
-    crs = aml_resource_template();
-    aml_append(crs,
-        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
-                            0x0000, 0x0, root_bus_limit,
-                            0x0000, root_bus_limit + 1));
-    aml_append(crs, aml_io(AML_DECODE16, 0x0CF8, 0x0CF8, 0x01, 0x08));
-
-    aml_append(crs,
-        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                    AML_POS_DECODE, AML_ENTIRE_RANGE,
-                    0x0000, 0x0000, 0x0CF7, 0x0000, 0x0CF8));
-
-    crs_replace_with_free_ranges(crs_range_set.io_ranges, 0x0D00, 0xFFFF);
-    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
-        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
-        aml_append(crs,
-            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                        AML_POS_DECODE, AML_ENTIRE_RANGE,
-                        0x0000, entry->base, entry->limit,
-                        0x0000, entry->limit - entry->base + 1));
-    }
-
-    aml_append(crs,
-        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
-                         AML_CACHEABLE, AML_READ_WRITE,
-                         0, 0x000A0000, 0x000BFFFF, 0, 0x00020000));
-
-    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
-                                 range_lob(pci_hole),
-                                 range_upb(pci_hole));
-    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
-        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
-        aml_append(crs,
-            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
-                             AML_NON_CACHEABLE, AML_READ_WRITE,
-                             0, entry->base, entry->limit,
-                             0, entry->limit - entry->base + 1));
-    }
-
-    if (!range_is_empty(pci_hole64)) {
-        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
-                                     range_lob(pci_hole64),
-                                     range_upb(pci_hole64));
-        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
-            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
-            aml_append(crs,
-                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                        AML_MAX_FIXED,
-                                        AML_CACHEABLE, AML_READ_WRITE,
-                                        0, entry->base, entry->limit,
-                                        0, entry->limit - entry->base + 1));
-        }
-    }
-
-    if (TPM_IS_TIS(tpm_find())) {
-        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
-                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
-    }
-    aml_append(scope, aml_name_decl("_CRS", crs));
+    scope = build_pci_host_bridge(dsdt, pci_host);
 
     /* reserve GPE0 block resources */
     dev = aml_device("GPE0");
@@ -1454,8 +1352,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
     aml_append(dev, aml_name_decl("_CRS", crs));
     aml_append(scope, dev);
 
-    crs_range_set_free(&crs_range_set);
-
     /* reserve PCIHP resources */
     if (pm->pcihp_io_len) {
         dev = aml_device("PHPR");
@@ -2012,6 +1908,11 @@ void acpi_build(AcpiBuildTables *tables,
                              64 /* Ensure FACS is aligned */,
                              false /* high memory */);
 
+    AcpiPciBus pci_host = {
+        .pci_bus    = PC_MACHINE(machine)->bus,
+        .pci_hole   = &pci_hole,
+        .pci_hole64 = &pci_hole64,
+    };
     /*
      * FACS is pointed to by FADT.
      * We place it first since it's the only table that has alignment
@@ -2023,7 +1924,7 @@ void acpi_build(AcpiBuildTables *tables,
     /* DSDT is pointed to by FADT */
     dsdt = tables_blob->len;
     build_dsdt(tables_blob, tables->linker, &pm, &misc,
-               &pci_hole, &pci_hole64, machine, acpi_conf);
+               &pci_host, machine, acpi_conf);
 
     /* Count the size of the DSDT and SSDT, we will need it for legacy
      * sizing of ACPI tables.
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
@ 2018-11-05  1:40   ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Rob Bradford, Michael S. Tsirkin, Shannon Zhao, Igor Mammedov,
	qemu-arm, Marcel Apfelbaum, Paolo Bonzini, Anthony Perard,
	xen-devel, Richard Henderson

From: Yang Zhong <yang.zhong@intel.com>

The AML build routines for the PCI host bridge and the corresponding
DSDT addition are neither x86 nor PC machine type specific.
We can move them to the architecture-agnostic hw/acpi folder and, by
carrying all the needed information through a new AcpiPciBus structure,
make them independent of the PC machine type.
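
A minimal usage sketch (not part of this patch): a machine type that
already knows its root PCI bus and PCI hole ranges could drive the
exported helper like this. The function and variable names are
illustrative only; the calls and the AcpiPciBus layout are the ones
introduced above.

    #include "qemu/osdep.h"
    #include "hw/acpi/aml-build.h"

    /* Hypothetical caller; 'bus', 'hole' and 'hole64' are assumed to
     * come from the machine type's own memory/PCI setup code. */
    static void virt_dsdt_add_pci(Aml *dsdt, PCIBus *bus,
                                  Range *hole, Range *hole64)
    {
        AcpiPciBus pci_host = {
            .pci_bus    = bus,
            .pci_hole   = hole,
            .pci_hole64 = hole64,
        };

        /* Declares \_SB.PCI0 and appends the host bridge _CRS scope. */
        acpi_dsdt_add_pci_bus(dsdt, &pci_host);
    }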

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/aml-build.h |   8 ++
 hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c        | 115 ++------------------------
 3 files changed, 173 insertions(+), 107 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index fde2785b9a..1861e37ebf 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
     uint32_t mcfg_size;
 } AcpiMcfgInfo;
 
+typedef struct AcpiPciBus {
+    PCIBus *pci_bus;
+    Range *pci_hole;
+    Range *pci_hole64;
+} AcpiPciBus;
+
 typedef struct CrsRangeEntry {
     uint64_t base;
     uint64_t limit;
@@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
+void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
 void crs_range_set_init(CrsRangeSet *range_set);
 Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
 void crs_replace_with_free_ranges(GPtrArray *ranges,
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index b8e32f15f7..869ed70db3 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -29,6 +29,19 @@
 #include "hw/pci/pci_bus.h"
 #include "qemu/range.h"
 #include "hw/pci/pci_bridge.h"
+#include "hw/i386/pc.h"
+#include "sysemu/tpm.h"
+#include "hw/acpi/tpm.h"
+
+#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
+#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
+#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
+#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
+#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
+#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
+#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
+#define IO_0_LEN                           0xcf8
+#define VGA_MEM_LEN                        0x20000
 
 static GArray *build_alloc_array(void)
 {
@@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
     return method;
 }
 
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
+{
+    CrsRangeEntry *entry;
+    Aml *scope, *dev, *crs;
+    CrsRangeSet crs_range_set;
+    Range *pci_hole = NULL;
+    Range *pci_hole64 = NULL;
+    PCIBus *bus = NULL;
+    int root_bus_limit = 0xFF;
+    int i;
+
+    bus = pci_host->pci_bus;
+    assert(bus);
+    pci_hole = pci_host->pci_hole;
+    pci_hole64 = pci_host->pci_hole64;
+
+    crs_range_set_init(&crs_range_set);
+    QLIST_FOREACH(bus, &bus->child, sibling) {
+        uint8_t bus_num = pci_bus_num(bus);
+        uint8_t numa_node = pci_bus_numa_node(bus);
+
+        /* look only for expander root buses */
+        if (!pci_bus_is_root(bus)) {
+            continue;
+        }
+
+        if (bus_num < root_bus_limit) {
+            root_bus_limit = bus_num - 1;
+        }
+
+        scope = aml_scope("\\_SB");
+        dev = aml_device("PC%.02X", bus_num);
+        aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
+        aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
+        aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
+        if (pci_bus_is_express(bus)) {
+            aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+            aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+            aml_append(dev, build_osc_method(0x1F));
+        }
+        if (numa_node != NUMA_NODE_UNASSIGNED) {
+            aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
+        }
+
+        aml_append(dev, build_prt(false));
+        crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
+        aml_append(dev, aml_name_decl("_CRS", crs));
+        aml_append(scope, dev);
+        aml_append(table, scope);
+    }
+    scope = aml_scope("\\_SB.PCI0");
+    /* build PCI0._CRS */
+    crs = aml_resource_template();
+    /* set the pcie bus num */
+    aml_append(crs,
+        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
+                            0x0000, 0x0, root_bus_limit,
+                            0x0000, root_bus_limit + 1));
+    aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
+                           PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
+    /* set the io region 0 in pci host bridge */
+    aml_append(crs,
+        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                    AML_POS_DECODE, AML_ENTIRE_RANGE,
+                    0x0000, PCI_HOST_BRIDGE_IO_0_MIN_ADDR,
+                    PCI_HOST_BRIDGE_IO_0_MAX_ADDR, 0x0000, IO_0_LEN));
+
+    /* set the io region 1 in pci host bridge */
+    crs_replace_with_free_ranges(crs_range_set.io_ranges,
+                                 PCI_HOST_BRIDGE_IO_1_MIN_ADDR,
+                                 PCI_HOST_BRIDGE_IO_1_MAX_ADDR);
+    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
+        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
+        aml_append(crs,
+            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
+                        AML_POS_DECODE, AML_ENTIRE_RANGE,
+                        0x0000, entry->base, entry->limit,
+                        0x0000, entry->limit - entry->base + 1));
+    }
+
+    /* set the vga mem region(0) in pci host bridge */
+    aml_append(crs,
+        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
+                         AML_CACHEABLE, AML_READ_WRITE,
+                         0, PCI_VGA_MEM_BASE_ADDR, PCI_VGA_MEM_MAX_ADDR,
+                         0, VGA_MEM_LEN));
+
+    /* set the mem region 1 in pci host bridge */
+    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
+                                 range_lob(pci_hole),
+                                 range_upb(pci_hole));
+    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
+        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
+        aml_append(crs,
+            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
+                             AML_NON_CACHEABLE, AML_READ_WRITE,
+                             0, entry->base, entry->limit,
+                             0, entry->limit - entry->base + 1));
+    }
+
+    /* set the mem region 2 in pci host bridge */
+    if (!range_is_empty(pci_hole64)) {
+        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
+                                     range_lob(pci_hole64),
+                                     range_upb(pci_hole64));
+        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
+            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
+            aml_append(crs,
+                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
+                                        AML_MAX_FIXED,
+                                        AML_CACHEABLE, AML_READ_WRITE,
+                                        0, entry->base, entry->limit,
+                                        0, entry->limit - entry->base + 1));
+        }
+    }
+
+    if (TPM_IS_TIS(tpm_find())) {
+        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
+                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
+    }
+
+    aml_append(scope, aml_name_decl("_CRS", crs));
+    crs_range_set_free(&crs_range_set);
+    return scope;
+}
+
+void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host)
+{
+    Aml *dev, *pci_scope;
+
+    dev = aml_device("\\_SB.PCI0");
+    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
+    aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
+    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
+    aml_append(dev, aml_name_decl("_UID", aml_int(1)));
+    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+    aml_append(dev, build_osc_method(0x1F));
+    aml_append(dsdt, dev);
+
+    pci_scope = build_pci_host_bridge(dsdt, pci_host);
+    aml_append(dsdt, pci_scope);
+}
+
 /* Build rsdt table */
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a5f5f83500..14e2624d14 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1253,16 +1253,11 @@ static void build_piix4_pci_hotplug(Aml *table)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
-           Range *pci_hole, Range *pci_hole64,
+           AcpiPciBus *pci_host,
            MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-    CrsRangeEntry *entry;
     Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
-    CrsRangeSet crs_range_set;
     uint32_t nr_mem = machine->ram_slots;
-    int root_bus_limit = 0xFF;
-    PCIBus *bus = NULL;
-    int i;
 
     dsdt = init_aml_allocator();
 
@@ -1337,104 +1332,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
     }
     aml_append(dsdt, scope);
 
-    crs_range_set_init(&crs_range_set);
-    bus = PC_MACHINE(machine)->bus;
-    if (bus) {
-        QLIST_FOREACH(bus, &bus->child, sibling) {
-            uint8_t bus_num = pci_bus_num(bus);
-            uint8_t numa_node = pci_bus_numa_node(bus);
-
-            /* look only for expander root buses */
-            if (!pci_bus_is_root(bus)) {
-                continue;
-            }
-
-            if (bus_num < root_bus_limit) {
-                root_bus_limit = bus_num - 1;
-            }
-
-            scope = aml_scope("\\_SB");
-            dev = aml_device("PC%.02X", bus_num);
-            aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
-            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
-            aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
-            if (pci_bus_is_express(bus)) {
-                aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
-            }
-
-            if (numa_node != NUMA_NODE_UNASSIGNED) {
-                aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
-            }
-
-            aml_append(dev, build_prt(false));
-            crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
-            aml_append(dev, aml_name_decl("_CRS", crs));
-            aml_append(scope, dev);
-            aml_append(dsdt, scope);
-        }
-    }
-
-    scope = aml_scope("\\_SB.PCI0");
-    /* build PCI0._CRS */
-    crs = aml_resource_template();
-    aml_append(crs,
-        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
-                            0x0000, 0x0, root_bus_limit,
-                            0x0000, root_bus_limit + 1));
-    aml_append(crs, aml_io(AML_DECODE16, 0x0CF8, 0x0CF8, 0x01, 0x08));
-
-    aml_append(crs,
-        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                    AML_POS_DECODE, AML_ENTIRE_RANGE,
-                    0x0000, 0x0000, 0x0CF7, 0x0000, 0x0CF8));
-
-    crs_replace_with_free_ranges(crs_range_set.io_ranges, 0x0D00, 0xFFFF);
-    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
-        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
-        aml_append(crs,
-            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
-                        AML_POS_DECODE, AML_ENTIRE_RANGE,
-                        0x0000, entry->base, entry->limit,
-                        0x0000, entry->limit - entry->base + 1));
-    }
-
-    aml_append(crs,
-        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
-                         AML_CACHEABLE, AML_READ_WRITE,
-                         0, 0x000A0000, 0x000BFFFF, 0, 0x00020000));
-
-    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
-                                 range_lob(pci_hole),
-                                 range_upb(pci_hole));
-    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
-        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
-        aml_append(crs,
-            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
-                             AML_NON_CACHEABLE, AML_READ_WRITE,
-                             0, entry->base, entry->limit,
-                             0, entry->limit - entry->base + 1));
-    }
-
-    if (!range_is_empty(pci_hole64)) {
-        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
-                                     range_lob(pci_hole64),
-                                     range_upb(pci_hole64));
-        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
-            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
-            aml_append(crs,
-                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
-                                        AML_MAX_FIXED,
-                                        AML_CACHEABLE, AML_READ_WRITE,
-                                        0, entry->base, entry->limit,
-                                        0, entry->limit - entry->base + 1));
-        }
-    }
-
-    if (TPM_IS_TIS(tpm_find())) {
-        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
-                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
-    }
-    aml_append(scope, aml_name_decl("_CRS", crs));
+    scope = build_pci_host_bridge(dsdt, pci_host);
 
     /* reserve GPE0 block resources */
     dev = aml_device("GPE0");
@@ -1454,8 +1352,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
     aml_append(dev, aml_name_decl("_CRS", crs));
     aml_append(scope, dev);
 
-    crs_range_set_free(&crs_range_set);
-
     /* reserve PCIHP resources */
     if (pm->pcihp_io_len) {
         dev = aml_device("PHPR");
@@ -2012,6 +1908,11 @@ void acpi_build(AcpiBuildTables *tables,
                              64 /* Ensure FACS is aligned */,
                              false /* high memory */);
 
+    AcpiPciBus pci_host = {
+        .pci_bus    = PC_MACHINE(machine)->bus,
+        .pci_hole   = &pci_hole,
+        .pci_hole64 = &pci_hole64,
+    };
     /*
      * FACS is pointed to by FADT.
      * We place it first since it's the only table that has alignment
@@ -2023,7 +1924,7 @@ void acpi_build(AcpiBuildTables *tables,
     /* DSDT is pointed to by FADT */
     dsdt = tables_blob->len;
     build_dsdt(tables_blob, tables->linker, &pm, &misc,
-               &pci_hole, &pci_hole64, machine, acpi_conf);
+               &pci_host, machine, acpi_conf);
 
     /* Count the size of the DSDT and SSDT, we will need it for legacy
      * sizing of ACPI tables.
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong

From: Yang Zhong <yang.zhong@intel.com>

The ACPI MCFG getter is not x86-specific and could be called from
anywhere within the generic ACPI API, so let's export it.
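
A short sketch of the intended caller side (illustrative only, relying
just on the prototypes visible in this series): a generic table builder
can probe for MCFG support and emit the table when the host bridge
exposes it.

    #include "qemu/osdep.h"
    #include "hw/acpi/aml-build.h"

    /* Sketch: 'table_offsets', 'table_data' and 'linker' come from the
     * surrounding acpi_build() implementation. */
    static void add_mcfg_if_present(GArray *table_offsets,
                                    GArray *table_data, BIOSLinker *linker)
    {
        AcpiMcfgInfo mcfg;

        if (acpi_get_mcfg(&mcfg)) {
            acpi_add_table(table_offsets, table_data);
            build_mcfg(table_data, linker, &mcfg);
        }
    }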

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 include/hw/acpi/aml-build.h |  1 +
 hw/acpi/aml-build.c         | 24 ++++++++++++++++++++++++
 hw/i386/acpi-build.c        | 22 ----------------------
 3 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 1861e37ebf..64ea371656 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -408,6 +408,7 @@ void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
 Object *acpi_get_pci_host(void);
 void acpi_get_pci_holes(Range *hole, Range *hole64);
+bool acpi_get_mcfg(AcpiMcfgInfo *mcfg);
 /* Align AML blob size to a multiple of 'align' */
 void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 869ed70db3..2c5446ab23 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -32,6 +32,8 @@
 #include "hw/i386/pc.h"
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
+#include "qom/qom-qobject.h"
+#include "qapi/qmp/qnum.h"
 
 #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
 #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
@@ -1657,6 +1659,28 @@ void acpi_get_pci_holes(Range *hole, Range *hole64)
                                                NULL));
 }
 
+bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
+{
+    Object *pci_host;
+    QObject *o;
+
+    pci_host = acpi_get_pci_host();
+    g_assert(pci_host);
+
+    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
+    if (!o) {
+        return false;
+    }
+    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
+    qobject_unref(o);
+
+    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
+    assert(o);
+    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
+    qobject_unref(o);
+    return true;
+}
+
 static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
 {
     CrsRangeEntry *entry;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 14e2624d14..d8bba16776 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1856,28 +1856,6 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
                  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
-{
-    Object *pci_host;
-    QObject *o;
-
-    pci_host = acpi_get_pci_host();
-    g_assert(pci_host);
-
-    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
-    if (!o) {
-        return false;
-    }
-    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
-    qobject_unref(o);
-
-    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
-    assert(o);
-    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
-    qobject_unref(o);
-    return true;
-}
-
 static
 void acpi_build(AcpiBuildTables *tables,
                 MachineState *machine, AcpiConfiguration *acpi_conf)
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 13/24] hw: acpi: Do not create hotplug method when handler is not defined
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

CPU and memory ACPI hotplug are not necessarily handled through SCI
events. For example, with hardware-reduced ACPI, the GED device will
manage ACPI hotplug entirely.
As a consequence, we make the generation of the CPU- and memory-specific
event handler AML optional: the code is only added when the event
handler method name is not NULL.
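
Caller-side sketch (assumption: the trailing parameters of
build_memory_hotplug_aml() are the container resource path and the event
handler method name, as in the current tree; 'acpi_has_ged' and the GPE
method name are illustrative):

    /* A legacy PC keeps its GPE-based scan method, while a GED-based
     * machine passes NULL so no \_GPE handler AML is generated at all. */
    const char *ev = acpi_has_ged ? NULL : "\\_GPE._E03";

    build_memory_hotplug_aml(dsdt, machine->ram_slots, "\\_SB.PCI0", ev);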

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/acpi/cpu.c            |  8 +++++---
 hw/acpi/memory_hotplug.c | 11 +++++++----
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index f10b190019..cd41377b5a 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -569,9 +569,11 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
     aml_append(sb_scope, cpus_dev);
     aml_append(table, sb_scope);
 
-    method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
-    aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
-    aml_append(table, method);
+    if (event_handler_method) {
+        method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
+        aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
+        aml_append(table, method);
+    }
 
     g_free(cphp_res_path);
 }
diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
index 8c7c1013f3..db2c4df961 100644
--- a/hw/acpi/memory_hotplug.c
+++ b/hw/acpi/memory_hotplug.c
@@ -715,10 +715,13 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
     }
     aml_append(table, dev_container);
 
-    method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
-    aml_append(method,
-        aml_call0(MEMORY_DEVICES_CONTAINER "." MEMORY_SLOT_SCAN_METHOD));
-    aml_append(table, method);
+    if (event_handler_method) {
+        method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
+        aml_append(method,
+                   aml_call0(MEMORY_DEVICES_CONTAINER "."
+                             MEMORY_SLOT_SCAN_METHOD));
+        aml_append(table, method);
+    }
 
     g_free(mhp_res_path);
 }
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 14/24] hw: i386: Make the hotpluggable memory size property more generic
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

This property is currently defined under i386/pc even though it only
describes a region size that is eventually fetched by the ACPI AML
build code.

We can make it more generic and shareable across machine types by moving
it to memory-device.h instead.
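
For illustration (everything except the property macro is hypothetical),
any machine can then query the size through the generic property name:

    #include "qemu/osdep.h"
    #include "qom/object.h"
    #include "hw/boards.h"
    #include "hw/mem/memory-device.h"

    /* Sketch: read the hotpluggable device memory region size generically. */
    static int64_t device_memory_region_size(MachineState *machine)
    {
        return object_property_get_int(OBJECT(machine),
                                       MEMORY_DEVICE_REGION_SIZE, NULL);
    }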

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/i386/pc.h           | 1 -
 include/hw/mem/memory-device.h | 2 ++
 hw/i386/acpi-build.c           | 2 +-
 hw/i386/pc.c                   | 3 ++-
 4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index bbbdb33ea3..44cb6bf3f3 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -62,7 +62,6 @@ struct PCMachineState {
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
-#define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
 #define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
 #define PC_MACHINE_VMPORT           "vmport"
 #define PC_MACHINE_SMM              "smm"
diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
index e904e194d5..d9a4fc7c3e 100644
--- a/include/hw/mem/memory-device.h
+++ b/include/hw/mem/memory-device.h
@@ -97,6 +97,8 @@ typedef struct MemoryDeviceClass {
                              MemoryDeviceInfo *info);
 } MemoryDeviceClass;
 
+#define MEMORY_DEVICE_REGION_SIZE "memory-device-region-size"
+
 MemoryDeviceInfoList *qmp_memory_device_list(void);
 uint64_t get_plugged_memory_size(void);
 void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index d8bba16776..1ef1a38441 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1628,7 +1628,7 @@ build_srat(GArray *table_data, BIOSLinker *linker,
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
     ram_addr_t hotplugabble_address_space_size =
-        object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
+        object_property_get_int(OBJECT(machine), MEMORY_DEVICE_REGION_SIZE,
                                 NULL);
 
     srat_start = table_data->len;
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 090f969933..c9ffc8cff6 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -67,6 +67,7 @@
 #include "hw/boards.h"
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
+#include "hw/mem/memory-device.h"
 #include "qapi/error.h"
 #include "qapi/qapi-visit-common.h"
 #include "qapi/visitor.h"
@@ -2443,7 +2444,7 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
     nc->nmi_monitor_handler = x86_nmi;
     mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
 
-    object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
+    object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
         pc_machine_get_device_memory_region_size, NULL,
         NULL, NULL, &error_abort);
 
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

This is the standard way of building the SRAT on x86 platforms, but
future machine types could decide to define their own custom SRAT build
method through the ACPI builder methods.
Moreover, we will also need to reach build_srat() from outside of
acpi-build in order to use it as the ACPI builder SRAT build method.
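
Sketch of such a delegation (the wrapper name is hypothetical; the
build_srat() prototype is the one exported here):

    #include "qemu/osdep.h"
    #include "hw/i386/acpi-build.h"

    /* Sketch: an x86 ACPI builder implementation simply forwards its
     * SRAT hook to the now-exported default. */
    static void pc_acpi_srat_build(GArray *table_data, BIOSLinker *linker,
                                   MachineState *ms, AcpiConfiguration *conf)
    {
        build_srat(table_data, linker, ms, conf);
    }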

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/i386/acpi-build.h | 5 +++++
 hw/i386/acpi-build.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
index 065a1d8250..d73c41fe8f 100644
--- a/hw/i386/acpi-build.h
+++ b/hw/i386/acpi-build.h
@@ -4,6 +4,11 @@
 
 #include "hw/acpi/acpi.h"
 
+/* ACPI SRAT (Static Resource Affinity Table) build method for x86 */
+void
+build_srat(GArray *table_data, BIOSLinker *linker,
+           MachineState *machine, AcpiConfiguration *acpi_conf);
+
 void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
 
 #endif
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1ef1a38441..673c5dfafc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1615,7 +1615,7 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray *tcpalog)
 #define HOLE_640K_START  (640 * KiB)
 #define HOLE_640K_END   (1 * MiB)
 
-static void
+void
 build_srat(GArray *table_data, BIOSLinker *linker,
            MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 16/24] hw: acpi: Fix memory hotplug AML generation error
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong

From: Yang Zhong <yang.zhong@intel.com>

When using the generated memory hotplug AML, the iasl
compiler would give the following error:

dsdt.dsl 266: Return (MOST (_UID, Arg0, Arg1, Arg2))
Error 6080 - Called method returns no value ^

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 hw/acpi/memory_hotplug.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
index db2c4df961..893fc2bd27 100644
--- a/hw/acpi/memory_hotplug.c
+++ b/hw/acpi/memory_hotplug.c
@@ -686,15 +686,15 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
 
             method = aml_method("_OST", 3, AML_NOTSERIALIZED);
             s = MEMORY_SLOT_OST_METHOD;
-            aml_append(method, aml_return(aml_call4(
-                s, aml_name("_UID"), aml_arg(0), aml_arg(1), aml_arg(2)
-            )));
+            aml_append(method,
+                       aml_call4(s, aml_name("_UID"), aml_arg(0),
+                                 aml_arg(1), aml_arg(2)));
             aml_append(dev, method);
 
             method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
             s = MEMORY_SLOT_EJECT_METHOD;
-            aml_append(method, aml_return(aml_call2(
-                       s, aml_name("_UID"), aml_arg(0))));
+            aml_append(method,
+                       aml_call2(s, aml_name("_UID"), aml_arg(0)));
             aml_append(dev, method);
 
             aml_append(dev_container, dev);
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 17/24] hw: acpi: Export the PCI hotplug API
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Sebastien Boeuf, Jing Liu

From: Sebastien Boeuf <sebastien.boeuf@intel.com>

The ACPI PCI device hotplug APIs are neither x86 nor machine type
specific. In order for future machine types to be able to re-use that
code, we export them through the architecture-agnostic hw/acpi folder.
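
Usage sketch (hypothetical caller; only the two prototypes exported
below are relied upon): a non-PC machine type would wire the hotplug
AML into its own PCI0 scope roughly as follows.

    #include "qemu/osdep.h"
    #include "hw/acpi/aml-build.h"

    /* Sketch: 'scope' is the machine's \_SB.PCI0 scope, 'bus' its root
     * PCI bus; bridge hotplug support is left disabled here. */
    static void virt_acpi_add_pci_hotplug(Aml *scope, PCIBus *bus)
    {
        /* PCST/SEJ/BNMR IO regions plus the PCEJ ejection method. */
        build_acpi_pcihp(scope);
        /* Per-slot devices, with _SUN/_EJ0 for hotpluggable slots. */
        build_append_pci_bus_devices(scope, bus, false);
    }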

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
---
 include/hw/acpi/aml-build.h |   3 +
 hw/acpi/aml-build.c         | 194 ++++++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c        | 192 +----------------------------------
 3 files changed, 199 insertions(+), 190 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 64ea371656..6b0a9735c5 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -418,6 +418,9 @@ Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
+void build_acpi_pcihp(Aml *scope);
+void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
+                                  bool pcihp_bridge_en);
 void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
 Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
 void crs_range_set_init(CrsRangeSet *range_set);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 2c5446ab23..6112cc2149 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -34,6 +34,7 @@
 #include "hw/acpi/tpm.h"
 #include "qom/qom-qobject.h"
 #include "qapi/qmp/qnum.h"
+#include "hw/acpi/pcihp.h"
 
 #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
 #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
@@ -2305,6 +2306,199 @@ Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
     return scope;
 }
 
+void build_acpi_pcihp(Aml *scope)
+{
+    Aml *field;
+    Aml *method;
+
+    aml_append(scope,
+        aml_operation_region("PCST", AML_SYSTEM_IO, aml_int(0xae00), 0x08));
+    field = aml_field("PCST", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+    aml_append(field, aml_named_field("PCIU", 32));
+    aml_append(field, aml_named_field("PCID", 32));
+    aml_append(scope, field);
+
+    aml_append(scope,
+        aml_operation_region("SEJ", AML_SYSTEM_IO, aml_int(0xae08), 0x04));
+    field = aml_field("SEJ", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+    aml_append(field, aml_named_field("B0EJ", 32));
+    aml_append(scope, field);
+
+    aml_append(scope,
+        aml_operation_region("BNMR", AML_SYSTEM_IO, aml_int(0xae10), 0x04));
+    field = aml_field("BNMR", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+    aml_append(field, aml_named_field("BNUM", 32));
+    aml_append(scope, field);
+
+    aml_append(scope, aml_mutex("BLCK", 0));
+
+    method = aml_method("PCEJ", 2, AML_NOTSERIALIZED);
+    aml_append(method, aml_acquire(aml_name("BLCK"), 0xFFFF));
+    aml_append(method, aml_store(aml_arg(0), aml_name("BNUM")));
+    aml_append(method,
+        aml_store(aml_shiftleft(aml_int(1), aml_arg(1)), aml_name("B0EJ")));
+    aml_append(method, aml_release(aml_name("BLCK")));
+    aml_append(method, aml_return(aml_int(0)));
+    aml_append(scope, method);
+}
+
+static void build_append_pcihp_notify_entry(Aml *method, int slot)
+{
+    Aml *if_ctx;
+    int32_t devfn = PCI_DEVFN(slot, 0);
+
+    if_ctx = aml_if(aml_and(aml_arg(0), aml_int(0x1U << slot), NULL));
+    aml_append(if_ctx, aml_notify(aml_name("S%.02X", devfn), aml_arg(1)));
+    aml_append(method, if_ctx);
+}
+
+void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
+                                  bool pcihp_bridge_en)
+{
+    Aml *dev, *notify_method = NULL, *method;
+    QObject *bsel;
+    PCIBus *sec;
+    int i;
+
+    bsel = object_property_get_qobject(OBJECT(bus), ACPI_PCIHP_PROP_BSEL, NULL);
+    if (bsel) {
+        uint64_t bsel_val = qnum_get_uint(qobject_to(QNum, bsel));
+
+        aml_append(parent_scope, aml_name_decl("BSEL", aml_int(bsel_val)));
+        notify_method = aml_method("DVNT", 2, AML_NOTSERIALIZED);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(bus->devices); i += PCI_FUNC_MAX) {
+        DeviceClass *dc;
+        PCIDeviceClass *pc;
+        PCIDevice *pdev = bus->devices[i];
+        int slot = PCI_SLOT(i);
+        bool hotplug_enabled_dev;
+        bool bridge_in_acpi;
+
+        if (!pdev) {
+            if (bsel) { /* add hotplug slots for non present devices */
+                dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
+                aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
+                aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
+                method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
+                aml_append(method,
+                    aml_call2("PCEJ", aml_name("BSEL"), aml_name("_SUN"))
+                );
+                aml_append(dev, method);
+                aml_append(parent_scope, dev);
+
+                build_append_pcihp_notify_entry(notify_method, slot);
+            }
+            continue;
+        }
+
+        pc = PCI_DEVICE_GET_CLASS(pdev);
+        dc = DEVICE_GET_CLASS(pdev);
+
+        /* When hotplug for bridges is enabled, bridges are
+         * described in ACPI separately (see build_pci_bus_end).
+         * In this case they aren't themselves hot-pluggable.
+         * Hotplugged bridges *are* hot-pluggable.
+         */
+        bridge_in_acpi = pc->is_bridge && pcihp_bridge_en &&
+            !DEVICE(pdev)->hotplugged;
+
+        hotplug_enabled_dev = bsel && dc->hotpluggable && !bridge_in_acpi;
+
+        if (pc->class_id == PCI_CLASS_BRIDGE_ISA) {
+            continue;
+        }
+
+        /* start to compose PCI slot descriptor */
+        dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
+        aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
+
+        if (pc->class_id == PCI_CLASS_DISPLAY_VGA) {
+            /* add VGA specific AML methods */
+            int s3d;
+
+            if (object_dynamic_cast(OBJECT(pdev), "qxl-vga")) {
+                s3d = 3;
+            } else {
+                s3d = 0;
+            }
+
+            method = aml_method("_S1D", 0, AML_NOTSERIALIZED);
+            aml_append(method, aml_return(aml_int(0)));
+            aml_append(dev, method);
+
+            method = aml_method("_S2D", 0, AML_NOTSERIALIZED);
+            aml_append(method, aml_return(aml_int(0)));
+            aml_append(dev, method);
+
+            method = aml_method("_S3D", 0, AML_NOTSERIALIZED);
+            aml_append(method, aml_return(aml_int(s3d)));
+            aml_append(dev, method);
+        } else if (hotplug_enabled_dev) {
+            /* add _SUN/_EJ0 to make slot hotpluggable  */
+            aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
+
+            method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
+            aml_append(method,
+                aml_call2("PCEJ", aml_name("BSEL"), aml_name("_SUN"))
+            );
+            aml_append(dev, method);
+
+            if (bsel) {
+                build_append_pcihp_notify_entry(notify_method, slot);
+            }
+        } else if (bridge_in_acpi) {
+            /*
+             * device is coldplugged bridge,
+             * add child device descriptions into its scope
+             */
+            PCIBus *sec_bus = pci_bridge_get_sec_bus(PCI_BRIDGE(pdev));
+
+            build_append_pci_bus_devices(dev, sec_bus, pcihp_bridge_en);
+        }
+        /* slot descriptor has been composed, add it into parent context */
+        aml_append(parent_scope, dev);
+    }
+
+    if (bsel) {
+        aml_append(parent_scope, notify_method);
+    }
+
+    /* Append PCNT method to notify about events on local and child buses.
+     * Add unconditionally for root since DSDT expects it.
+     */
+    method = aml_method("PCNT", 0, AML_NOTSERIALIZED);
+
+    /* If bus supports hotplug select it and notify about local events */
+    if (bsel) {
+        uint64_t bsel_val = qnum_get_uint(qobject_to(QNum, bsel));
+
+        aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
+        aml_append(method,
+            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check */)
+        );
+        aml_append(method,
+            aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request */)
+        );
+    }
+
+    /* Notify about child bus events in any case */
+    if (pcihp_bridge_en) {
+        QLIST_FOREACH(sec, &bus->child, sibling) {
+            int32_t devfn = sec->parent_dev->devfn;
+
+            if (pci_bus_is_root(sec) || pci_bus_is_express(sec)) {
+                continue;
+            }
+
+            aml_append(method, aml_name("^S%.02X.PCNT", devfn));
+        }
+    }
+    aml_append(parent_scope, method);
+    qobject_unref(bsel);
+}
+
 void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host)
 {
     Aml *dev, *pci_scope;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 673c5dfafc..bef5b23168 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -360,163 +360,6 @@ build_madt(GArray *table_data, BIOSLinker *linker,
                  table_data->len - madt_start, 1, NULL, NULL);
 }
 
-static void build_append_pcihp_notify_entry(Aml *method, int slot)
-{
-    Aml *if_ctx;
-    int32_t devfn = PCI_DEVFN(slot, 0);
-
-    if_ctx = aml_if(aml_and(aml_arg(0), aml_int(0x1U << slot), NULL));
-    aml_append(if_ctx, aml_notify(aml_name("S%.02X", devfn), aml_arg(1)));
-    aml_append(method, if_ctx);
-}
-
-static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
-                                         bool pcihp_bridge_en)
-{
-    Aml *dev, *notify_method = NULL, *method;
-    QObject *bsel;
-    PCIBus *sec;
-    int i;
-
-    bsel = object_property_get_qobject(OBJECT(bus), ACPI_PCIHP_PROP_BSEL, NULL);
-    if (bsel) {
-        uint64_t bsel_val = qnum_get_uint(qobject_to(QNum, bsel));
-
-        aml_append(parent_scope, aml_name_decl("BSEL", aml_int(bsel_val)));
-        notify_method = aml_method("DVNT", 2, AML_NOTSERIALIZED);
-    }
-
-    for (i = 0; i < ARRAY_SIZE(bus->devices); i += PCI_FUNC_MAX) {
-        DeviceClass *dc;
-        PCIDeviceClass *pc;
-        PCIDevice *pdev = bus->devices[i];
-        int slot = PCI_SLOT(i);
-        bool hotplug_enabled_dev;
-        bool bridge_in_acpi;
-
-        if (!pdev) {
-            if (bsel) { /* add hotplug slots for non present devices */
-                dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
-                aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
-                aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
-                method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
-                aml_append(method,
-                    aml_call2("PCEJ", aml_name("BSEL"), aml_name("_SUN"))
-                );
-                aml_append(dev, method);
-                aml_append(parent_scope, dev);
-
-                build_append_pcihp_notify_entry(notify_method, slot);
-            }
-            continue;
-        }
-
-        pc = PCI_DEVICE_GET_CLASS(pdev);
-        dc = DEVICE_GET_CLASS(pdev);
-
-        /* When hotplug for bridges is enabled, bridges are
-         * described in ACPI separately (see build_pci_bus_end).
-         * In this case they aren't themselves hot-pluggable.
-         * Hotplugged bridges *are* hot-pluggable.
-         */
-        bridge_in_acpi = pc->is_bridge && pcihp_bridge_en &&
-            !DEVICE(pdev)->hotplugged;
-
-        hotplug_enabled_dev = bsel && dc->hotpluggable && !bridge_in_acpi;
-
-        if (pc->class_id == PCI_CLASS_BRIDGE_ISA) {
-            continue;
-        }
-
-        /* start to compose PCI slot descriptor */
-        dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
-        aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
-
-        if (pc->class_id == PCI_CLASS_DISPLAY_VGA) {
-            /* add VGA specific AML methods */
-            int s3d;
-
-            if (object_dynamic_cast(OBJECT(pdev), "qxl-vga")) {
-                s3d = 3;
-            } else {
-                s3d = 0;
-            }
-
-            method = aml_method("_S1D", 0, AML_NOTSERIALIZED);
-            aml_append(method, aml_return(aml_int(0)));
-            aml_append(dev, method);
-
-            method = aml_method("_S2D", 0, AML_NOTSERIALIZED);
-            aml_append(method, aml_return(aml_int(0)));
-            aml_append(dev, method);
-
-            method = aml_method("_S3D", 0, AML_NOTSERIALIZED);
-            aml_append(method, aml_return(aml_int(s3d)));
-            aml_append(dev, method);
-        } else if (hotplug_enabled_dev) {
-            /* add _SUN/_EJ0 to make slot hotpluggable  */
-            aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
-
-            method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
-            aml_append(method,
-                aml_call2("PCEJ", aml_name("BSEL"), aml_name("_SUN"))
-            );
-            aml_append(dev, method);
-
-            if (bsel) {
-                build_append_pcihp_notify_entry(notify_method, slot);
-            }
-        } else if (bridge_in_acpi) {
-            /*
-             * device is coldplugged bridge,
-             * add child device descriptions into its scope
-             */
-            PCIBus *sec_bus = pci_bridge_get_sec_bus(PCI_BRIDGE(pdev));
-
-            build_append_pci_bus_devices(dev, sec_bus, pcihp_bridge_en);
-        }
-        /* slot descriptor has been composed, add it into parent context */
-        aml_append(parent_scope, dev);
-    }
-
-    if (bsel) {
-        aml_append(parent_scope, notify_method);
-    }
-
-    /* Append PCNT method to notify about events on local and child buses.
-     * Add unconditionally for root since DSDT expects it.
-     */
-    method = aml_method("PCNT", 0, AML_NOTSERIALIZED);
-
-    /* If bus supports hotplug select it and notify about local events */
-    if (bsel) {
-        uint64_t bsel_val = qnum_get_uint(qobject_to(QNum, bsel));
-
-        aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
-        aml_append(method,
-            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check */)
-        );
-        aml_append(method,
-            aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request */)
-        );
-    }
-
-    /* Notify about child bus events in any case */
-    if (pcihp_bridge_en) {
-        QLIST_FOREACH(sec, &bus->child, sibling) {
-            int32_t devfn = sec->parent_dev->devfn;
-
-            if (pci_bus_is_root(sec) || pci_bus_is_express(sec)) {
-                continue;
-            }
-
-            aml_append(method, aml_name("^S%.02X.PCNT", devfn));
-        }
-    }
-    aml_append(parent_scope, method);
-    qobject_unref(bsel);
-}
-
 static void build_hpet_aml(Aml *table)
 {
     Aml *crs;
@@ -1212,41 +1055,10 @@ static void build_piix4_isa_bridge(Aml *table)
 static void build_piix4_pci_hotplug(Aml *table)
 {
     Aml *scope;
-    Aml *field;
-    Aml *method;
-
-    scope =  aml_scope("_SB.PCI0");
-
-    aml_append(scope,
-        aml_operation_region("PCST", AML_SYSTEM_IO, aml_int(0xae00), 0x08));
-    field = aml_field("PCST", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
-    aml_append(field, aml_named_field("PCIU", 32));
-    aml_append(field, aml_named_field("PCID", 32));
-    aml_append(scope, field);
 
-    aml_append(scope,
-        aml_operation_region("SEJ", AML_SYSTEM_IO, aml_int(0xae08), 0x04));
-    field = aml_field("SEJ", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
-    aml_append(field, aml_named_field("B0EJ", 32));
-    aml_append(scope, field);
-
-    aml_append(scope,
-        aml_operation_region("BNMR", AML_SYSTEM_IO, aml_int(0xae10), 0x04));
-    field = aml_field("BNMR", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
-    aml_append(field, aml_named_field("BNUM", 32));
-    aml_append(scope, field);
-
-    aml_append(scope, aml_mutex("BLCK", 0));
-
-    method = aml_method("PCEJ", 2, AML_NOTSERIALIZED);
-    aml_append(method, aml_acquire(aml_name("BLCK"), 0xFFFF));
-    aml_append(method, aml_store(aml_arg(0), aml_name("BNUM")));
-    aml_append(method,
-        aml_store(aml_shiftleft(aml_int(1), aml_arg(1)), aml_name("B0EJ")));
-    aml_append(method, aml_release(aml_name("BLCK")));
-    aml_append(method, aml_return(aml_int(0)));
-    aml_append(scope, method);
+    scope = aml_scope("_SB.PCI0");
 
+    build_acpi_pcihp(scope);
     aml_append(table, scope);
 }
 
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 18/24] hw: i386: Export the MADT build method
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

It is going to be used by the PC machine type as the MADT table build
method and thus needs to be exported outside of acpi-build.c.

Also, now that the generic build_madt() API is exported, we have to
rename the ARM static one in order to avoid build-time conflicts.
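
As a hedged illustration (not part of this patch), another i386 machine type
could then pull the table in with a couple of lines; the foo_ helper and its
parameters mirror the usual table-build loop and are assumptions:

    #include "hw/i386/acpi.h"

    /* Hypothetical helper inside another machine type's ACPI build loop. */
    static void foo_add_madt(GArray *table_offsets, GArray *tables_blob,
                             BIOSLinker *linker, MachineState *ms,
                             AcpiConfiguration *acpi_conf)
    {
        acpi_add_table(table_offsets, tables_blob);
        build_madt(tables_blob, linker, ms, acpi_conf);
    }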

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/i386/acpi.h   | 28 ++++++++++++++++++++++++++++
 hw/arm/virt-acpi-build.c |  4 ++--
 hw/i386/acpi-build.c     |  4 ++--
 3 files changed, 32 insertions(+), 4 deletions(-)
 create mode 100644 include/hw/i386/acpi.h

diff --git a/include/hw/i386/acpi.h b/include/hw/i386/acpi.h
new file mode 100644
index 0000000000..b7a887111d
--- /dev/null
+++ b/include/hw/i386/acpi.h
@@ -0,0 +1,28 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_I386_ACPI_H
+#define HW_I386_ACPI_H
+
+#include "hw/acpi/acpi.h"
+#include "hw/acpi/bios-linker-loader.h"
+
+/* ACPI MADT (Multiple APIC Description Table) build method */
+void build_madt(GArray *table_data, BIOSLinker *linker,
+                MachineState *ms, AcpiConfiguration *conf);
+
+#endif
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index b5e165543a..b0354c5f03 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -564,7 +564,7 @@ build_gtdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 
 /* MADT */
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
+virt_build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
     VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
     int madt_start = table_data->len;
@@ -745,7 +745,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
     build_fadt_rev5(tables_blob, tables->linker, vms, dsdt);
 
     acpi_add_table(table_offsets, tables_blob);
-    build_madt(tables_blob, tables->linker, vms);
+    virt_build_madt(tables_blob, tables->linker, vms);
 
     acpi_add_table(table_offsets, tables_blob);
     build_gtdt(tables_blob, tables->linker, vms);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bef5b23168..4b1d8fbe3f 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -35,7 +35,6 @@
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu.h"
 #include "hw/nvram/fw_cfg.h"
-#include "hw/acpi/bios-linker-loader.h"
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
 #include "hw/block/fdc.h"
@@ -60,6 +59,7 @@
 #include "qom/qom-qobject.h"
 #include "hw/i386/amd_iommu.h"
 #include "hw/i386/intel_iommu.h"
+#include "hw/i386/acpi.h"
 
 #include "hw/acpi/ipmi.h"
 
@@ -279,7 +279,7 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
     }
 }
 
-static void
+void
 build_madt(GArray *table_data, BIOSLinker *linker,
            MachineState *ms, AcpiConfiguration *acpi_conf)
 {
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Sebastien Boeuf, Jing Liu

From: Sebastien Boeuf <sebastien.boeuf@intel.com>

Instead of using the machine type specific method find_i440fx() to
retrieve the PCI bus, this commit relies on the fact that the PCI bus
is already known to the AcpiPciHpState structure.

When that structure is initialized through the acpi_pcihp_init() call,
it saves the PCI bus, so there is no need to invoke a special lookup
function later on.

Since find_i440fx() was only used there, this patch also removes the
function itself from the entire codebase.
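
For reference, a minimal sketch of the flow this change relies on; the caller
shown is approximate rather than a literal quote of the PIIX4 code:

    /* At realize time the platform already hands its root bus over. */
    acpi_pcihp_init(OBJECT(pm), &pm->acpi_pci_hotplug, root_bus, io_space,
                    pm->use_acpi_pci_hotplug);
    /* acpi_pcihp_init() stores root_bus in s->root, which is what
     * acpi_set_pci_info(s) now walks on reset instead of find_i440fx(). */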

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
---
 include/hw/i386/pc.h  |  1 -
 hw/acpi/pcihp.c       | 10 ++++------
 hw/pci-host/piix.c    |  8 --------
 stubs/pci-host-piix.c |  6 ------
 stubs/Makefile.objs   |  1 -
 5 files changed, 4 insertions(+), 22 deletions(-)
 delete mode 100644 stubs/pci-host-piix.c

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 44cb6bf3f3..8e5f1464eb 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
                     MemoryRegion *pci_memory,
                     MemoryRegion *ram_memory);
 
-PCIBus *find_i440fx(void);
 /* piix4.c */
 extern PCIDevice *piix4_dev;
 int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
index 80d42e12ff..254b2e50ab 100644
--- a/hw/acpi/pcihp.c
+++ b/hw/acpi/pcihp.c
@@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void *opaque)
     return bsel_alloc;
 }
 
-static void acpi_set_pci_info(void)
+static void acpi_set_pci_info(AcpiPciHpState *s)
 {
     static bool bsel_is_set;
-    PCIBus *bus;
     unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
 
     if (bsel_is_set) {
@@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
     }
     bsel_is_set = true;
 
-    bus = find_i440fx(); /* TODO: Q35 support */
-    if (bus) {
+    if (s->root) {
         /* Scan all PCI buses. Set property to enable acpi based hotplug. */
-        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
+        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL, &bsel_alloc);
     }
 }
 
@@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState *s)
 
 void acpi_pcihp_reset(AcpiPciHpState *s)
 {
-    acpi_set_pci_info();
+    acpi_set_pci_info(s);
     acpi_pcihp_update(s);
 }
 
diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index 47293a3915..658460264b 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
     return b;
 }
 
-PCIBus *find_i440fx(void)
-{
-    PCIHostState *s = OBJECT_CHECK(PCIHostState,
-                                   object_resolve_path("/machine/i440fx", NULL),
-                                   TYPE_PCI_HOST_BRIDGE);
-    return s ? s->bus : NULL;
-}
-
 /* PIIX3 PCI to ISA bridge */
 static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
 {
diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
deleted file mode 100644
index 6ed81b1f21..0000000000
--- a/stubs/pci-host-piix.c
+++ /dev/null
@@ -1,6 +0,0 @@
-#include "qemu/osdep.h"
-#include "hw/i386/pc.h"
-PCIBus *find_i440fx(void)
-{
-    return NULL;
-}
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 5dd0aeeec6..725f78bedc 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
 stub-obj-y += vmgenid.o
 stub-obj-y += xen-common.o
 stub-obj-y += xen-hvm.o
-stub-obj-y += pci-host-piix.o
 stub-obj-y += ram-block.o
 stub-obj-y += ramfb.o
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

In order to decouple ACPI APIs from specific machine types, we are
creating an ACPI builder interface that each ACPI platform can choose to
implement.
This way, a new machine type can reuse the high-level ACPI APIs and
define some custom table build methods, without having to duplicate most
of the existing implementation only to add small variations to it.
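
As a hedged sketch (not part of this patch), a machine type would implement
the interface roughly as below; the foo_ names are hypothetical and only the
MADT and configuration callbacks are wired up:

    static void foo_machine_class_init(ObjectClass *oc, void *data)
    {
        AcpiBuilderMethods *abm = ACPI_BUILDER_METHODS(oc);

        /* Reuse the generic MADT builder exported earlier in the series. */
        abm->madt = build_madt;
        /* Hypothetical getter returning this machine's AcpiConfiguration. */
        abm->configuration = foo_acpi_configuration;
        /* rsdp/mcfg/srat/slit would be filled in the same way when needed. */
    }

    static const TypeInfo foo_machine_info = {
        .name       = "foo-machine",
        .parent     = TYPE_MACHINE,
        .class_init = foo_machine_class_init,
        .interfaces = (InterfaceInfo[]) {
            { TYPE_ACPI_BUILDER },
            { }
        },
    };

Generic ACPI code can then dispatch through acpi_builder_madt(),
acpi_builder_configuration() and friends without knowing the machine type.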

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/builder.h | 100 ++++++++++++++++++++++++++++++++++++++
 hw/acpi/builder.c         |  97 ++++++++++++++++++++++++++++++++++++
 hw/acpi/Makefile.objs     |   1 +
 3 files changed, 198 insertions(+)
 create mode 100644 include/hw/acpi/builder.h
 create mode 100644 hw/acpi/builder.c

diff --git a/include/hw/acpi/builder.h b/include/hw/acpi/builder.h
new file mode 100644
index 0000000000..a63b88ffe9
--- /dev/null
+++ b/include/hw/acpi/builder.h
@@ -0,0 +1,100 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ACPI_BUILDER_H
+#define ACPI_BUILDER_H
+
+#include "qemu/osdep.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "qom/object.h"
+
+#define TYPE_ACPI_BUILDER "acpi-builder"
+
+#define ACPI_BUILDER_METHODS(klass) \
+     OBJECT_CLASS_CHECK(AcpiBuilderMethods, (klass), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER_GET_METHODS(obj) \
+     OBJECT_GET_CLASS(AcpiBuilderMethods, (obj), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER(obj)                                       \
+     INTERFACE_CHECK(AcpiBuilder, (obj), TYPE_ACPI_BUILDER)
+
+typedef struct AcpiConfiguration AcpiConfiguration;
+typedef struct AcpiBuildState AcpiBuildState;
+typedef struct AcpiMcfgInfo AcpiMcfgInfo;
+
+typedef struct AcpiBuilder {
+    /* <private> */
+    Object Parent;
+} AcpiBuilder;
+
+/**
+ * AcpiBuilderMethods:
+ *
+ * Interface to be implemented by a machine type that needs to provide
+ * custom ACPI tables build method.
+ *
+ * @parent: Opaque parent interface.
+ * @rsdp: ACPI RSDP (Root System Description Pointer) table build callback.
+ * @madt: ACPI MADT (Multiple APIC Description Table) table build callback.
+ * @mcfg: ACPI MCFG table build callback.
+ * @srat: ACPI SRAT (System/Static Resource Affinity Table)
+ *        table build callback.
+ * @slit: ACPI SLIT (System Locality System Information Table)
+ *        table build callback.
+ * @configuration: ACPI configuration getter.
+ *                 This is used to query the machine instance for its
+ *                 AcpiConfiguration pointer.
+ */
+typedef struct AcpiBuilderMethods {
+    /* <private> */
+    InterfaceClass parent;
+
+    /* <public> */
+    void (*rsdp)(GArray *table_data, BIOSLinker *linker,
+                 unsigned rsdt_tbl_offset);
+    void (*madt)(GArray *table_data, BIOSLinker *linker,
+                 MachineState *ms, AcpiConfiguration *conf);
+    void (*mcfg)(GArray *table_data, BIOSLinker *linker,
+                 AcpiMcfgInfo *info);
+    void (*srat)(GArray *table_data, BIOSLinker *linker,
+                 MachineState *machine, AcpiConfiguration *conf);
+    void (*slit)(GArray *table_data, BIOSLinker *linker);
+
+    AcpiConfiguration *(*configuration)(AcpiBuilder *builder);
+} AcpiBuilderMethods;
+
+void acpi_builder_rsdp(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       unsigned rsdt_tbl_offset);
+
+void acpi_builder_madt(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *ms, AcpiConfiguration *conf);
+
+void acpi_builder_mcfg(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       AcpiMcfgInfo *info);
+
+void acpi_builder_srat(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *machine, AcpiConfiguration *conf);
+
+void acpi_builder_slit(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker);
+
+AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder);
+
+#endif
diff --git a/hw/acpi/builder.c b/hw/acpi/builder.c
new file mode 100644
index 0000000000..c29a614793
--- /dev/null
+++ b/hw/acpi/builder.c
@@ -0,0 +1,97 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/module.h"
+#include "qom/object.h"
+#include "hw/acpi/builder.h"
+
+void acpi_builder_rsdp(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       unsigned rsdt_tbl_offset)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->rsdp) {
+        abm->rsdp(table_data, linker, rsdt_tbl_offset);
+    }
+}
+
+void acpi_builder_madt(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *ms, AcpiConfiguration *conf)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->madt) {
+        abm->madt(table_data, linker, ms, conf);
+    }
+}
+
+void acpi_builder_mcfg(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       AcpiMcfgInfo *info)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->mcfg) {
+        abm->mcfg(table_data, linker, info);
+    }
+}
+
+void acpi_builder_srat(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *machine, AcpiConfiguration *conf)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->srat) {
+        abm->srat(table_data, linker, machine, conf);
+    }
+}
+
+void acpi_builder_slit(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->slit) {
+        abm->slit(table_data, linker);
+    }
+}
+
+AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+    if (abm && abm->configuration) {
+        return abm->configuration(builder);
+    }
+    return NULL;
+}
+
+static const TypeInfo acpi_builder_info = {
+    .name          = TYPE_ACPI_BUILDER,
+    .parent        = TYPE_INTERFACE,
+    .class_size    = sizeof(AcpiBuilderMethods),
+};
+
+static void acpi_builder_register_type(void)
+{
+    type_register_static(&acpi_builder_info);
+}
+
+type_init(acpi_builder_register_type)
diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
index 11c35bcb44..2f383adc6f 100644
--- a/hw/acpi/Makefile.objs
+++ b/hw/acpi/Makefile.objs
@@ -11,6 +11,7 @@ common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
 common-obj-y += acpi_interface.o
 common-obj-y += bios-linker-loader.o
 common-obj-y += aml-build.o
+common-obj-y += builder.o
 
 common-obj-$(CONFIG_IPMI) += ipmi.o
 common-obj-$(call lnot,$(CONFIG_IPMI)) += ipmi-stub.o
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface
@ 2018-11-05  1:40   ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, Igor Mammedov, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

In order to decouple ACPI APIs from specific machine types, we are
creating an ACPI builder interface that each ACPI platform can choose to
implement.
This way, a new machine type can reuse the high-level ACPI APIs and
define some custom table build methods, without having to duplicate most
of the existing implementation only to add small variations to it.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/acpi/builder.h | 100 ++++++++++++++++++++++++++++++++++++++
 hw/acpi/builder.c         |  97 ++++++++++++++++++++++++++++++++++++
 hw/acpi/Makefile.objs     |   1 +
 3 files changed, 198 insertions(+)
 create mode 100644 include/hw/acpi/builder.h
 create mode 100644 hw/acpi/builder.c

diff --git a/include/hw/acpi/builder.h b/include/hw/acpi/builder.h
new file mode 100644
index 0000000000..a63b88ffe9
--- /dev/null
+++ b/include/hw/acpi/builder.h
@@ -0,0 +1,100 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ACPI_BUILDER_H
+#define ACPI_BUILDER_H
+
+#include "qemu/osdep.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "qom/object.h"
+
+#define TYPE_ACPI_BUILDER "acpi-builder"
+
+#define ACPI_BUILDER_METHODS(klass) \
+     OBJECT_CLASS_CHECK(AcpiBuilderMethods, (klass), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER_GET_METHODS(obj) \
+     OBJECT_GET_CLASS(AcpiBuilderMethods, (obj), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER(obj)                                       \
+     INTERFACE_CHECK(AcpiBuilder, (obj), TYPE_ACPI_BUILDER)
+
+typedef struct AcpiConfiguration AcpiConfiguration;
+typedef struct AcpiBuildState AcpiBuildState;
+typedef struct AcpiMcfgInfo AcpiMcfgInfo;
+
+typedef struct AcpiBuilder {
+    /* <private> */
+    Object Parent;
+} AcpiBuilder;
+
+/**
+ * AcpiBuildMethods:
+ *
+ * Interface to be implemented by a machine type that needs to provide
+ * custom ACPI tables build method.
+ *
+ * @parent: Opaque parent interface.
+ * @rsdp: ACPI RSDP (Root System Description Pointer) table build callback.
+ * @madt: ACPI MADT (Multiple APIC Description Table) table build callback.
+ * @mcfg: ACPI MCFG table build callback.
+ * @srat: ACPI SRAT (System/Static Resource Affinity Table)
+ *        table build callback.
+ * @slit: ACPI SLIT (System Locality System Information Table)
+ *        table build callback.
+ * @configuration: ACPI configuration getter.
+ *                 This is used to query the machine instance for its
+ *                 AcpiConfiguration pointer.
+ */
+typedef struct AcpiBuilderMethods {
+    /* <private> */
+    InterfaceClass parent;
+
+    /* <public> */
+    void (*rsdp)(GArray *table_data, BIOSLinker *linker,
+                 unsigned rsdt_tbl_offset);
+    void (*madt)(GArray *table_data, BIOSLinker *linker,
+                 MachineState *ms, AcpiConfiguration *conf);
+    void (*mcfg)(GArray *table_data, BIOSLinker *linker,
+                 AcpiMcfgInfo *info);
+    void (*srat)(GArray *table_data, BIOSLinker *linker,
+                 MachineState *machine, AcpiConfiguration *conf);
+    void (*slit)(GArray *table_data, BIOSLinker *linker);
+
+    AcpiConfiguration *(*configuration)(AcpiBuilder *builder);
+} AcpiBuilderMethods;
+
+void acpi_builder_rsdp(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       unsigned rsdt_tbl_offset);
+
+void acpi_builder_madt(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *ms, AcpiConfiguration *conf);
+
+void acpi_builder_mcfg(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       AcpiMcfgInfo *info);
+
+void acpi_builder_srat(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *machine, AcpiConfiguration *conf);
+
+void acpi_builder_slit(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker);
+
+AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder);
+
+#endif
diff --git a/hw/acpi/builder.c b/hw/acpi/builder.c
new file mode 100644
index 0000000000..c29a614793
--- /dev/null
+++ b/hw/acpi/builder.c
@@ -0,0 +1,97 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/module.h"
+#include "qom/object.h"
+#include "hw/acpi/builder.h"
+
+void acpi_builder_rsdp(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       unsigned rsdt_tbl_offset)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->rsdp) {
+        abm->rsdp(table_data, linker, rsdt_tbl_offset);
+    }
+}
+
+void acpi_builder_madt(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *ms, AcpiConfiguration *conf)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->madt) {
+        abm->madt(table_data, linker, ms, conf);
+    }
+}
+
+void acpi_builder_mcfg(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       AcpiMcfgInfo *info)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->mcfg) {
+        abm->mcfg(table_data, linker, info);
+    }
+}
+
+void acpi_builder_srat(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker,
+                       MachineState *machine, AcpiConfiguration *conf)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->srat) {
+        abm->srat(table_data, linker, machine, conf);
+    }
+}
+
+void acpi_builder_slit(AcpiBuilder *builder,
+                       GArray *table_data, BIOSLinker *linker)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+
+    if (abm && abm->slit) {
+        abm->slit(table_data, linker);
+    }
+}
+
+AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder)
+{
+    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
+    if (abm && abm->configuration) {
+        return abm->configuration(builder);
+    }
+    return NULL;
+}
+
+static const TypeInfo acpi_builder_info = {
+    .name          = TYPE_ACPI_BUILDER,
+    .parent        = TYPE_INTERFACE,
+    .class_size    = sizeof(AcpiBuilderMethods),
+};
+
+static void acpi_builder_register_type(void)
+{
+    type_register_static(&acpi_builder_info);
+}
+
+type_init(acpi_builder_register_type)
diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
index 11c35bcb44..2f383adc6f 100644
--- a/hw/acpi/Makefile.objs
+++ b/hw/acpi/Makefile.objs
@@ -11,6 +11,7 @@ common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
 common-obj-y += acpi_interface.o
 common-obj-y += bios-linker-loader.o
 common-obj-y += aml-build.o
+common-obj-y += builder.o
 
 common-obj-$(CONFIG_IPMI) += ipmi.o
 common-obj-$(call lnot,$(CONFIG_IPMI)) += ipmi-stub.o
-- 
2.19.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* [Qemu-devel] [PATCH v5 21/24] hw: i386: Implement the ACPI builder interface for PC
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

All PC machine type derivatives will use the same ACPI table build
methods. With this change in place, however, any new x86 machine type will
be able to re-use the acpi-build API and customize parts of it by defining
its own ACPI table build methods.
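
For illustration only (not part of this patch), a hypothetical machine type
would hook into the interface from its class_init in much the same way as
the PC code below; every "myvirt" name in this sketch is made up:

    /* Sketch: "myvirt" and its build helpers are hypothetical names. */
    static void myvirt_machine_class_init(ObjectClass *oc, void *data)
    {
        AcpiBuilderMethods *abm = ACPI_BUILDER_METHODS(oc);

        /* Reuse whichever generic builders fit this machine... */
        abm->madt = build_madt;
        abm->srat = build_srat;
        abm->slit = build_slit;
        /* ...and override only what differs, e.g. an XSDT-based RSDP. */
        abm->rsdp = build_myvirt_rsdp;
        abm->configuration = myvirt_acpi_configuration;
    }

    /* The machine's TypeInfo would also need { TYPE_ACPI_BUILDER } in its
     * .interfaces list, exactly as done for the PC machine type below. */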

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/i386/acpi-build.c | 14 +++++++++-----
 hw/i386/pc.c         | 19 +++++++++++++++++++
 2 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 4b1d8fbe3f..93d89b96f1 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -34,6 +34,7 @@
 #include "hw/acpi/acpi-defs.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu.h"
+#include "hw/acpi/builder.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
@@ -1683,6 +1684,7 @@ void acpi_build(AcpiBuildTables *tables,
     GArray *tables_blob = tables->table_data;
     AcpiSlicOem slic_oem = { .id = NULL, .table_id = NULL };
     Object *vmgenid_dev;
+    AcpiBuilder *ab = ACPI_BUILDER(machine);
 
     acpi_get_pm_info(&pm);
     acpi_get_misc_info(&misc);
@@ -1732,7 +1734,8 @@ void acpi_build(AcpiBuildTables *tables,
     aml_len += tables_blob->len - fadt;
 
     acpi_add_table(table_offsets, tables_blob);
-    build_madt(tables_blob, tables->linker, machine, acpi_conf);
+    acpi_builder_madt(ab, tables_blob, tables->linker,
+                      machine, acpi_conf);
 
     vmgenid_dev = find_vmgenid_dev();
     if (vmgenid_dev) {
@@ -1756,15 +1759,16 @@ void acpi_build(AcpiBuildTables *tables,
     }
     if (acpi_conf->numa_nodes) {
         acpi_add_table(table_offsets, tables_blob);
-        build_srat(tables_blob, tables->linker, machine, acpi_conf);
+        acpi_builder_srat(ab, tables_blob, tables->linker,
+                          machine, acpi_conf);
         if (have_numa_distance) {
             acpi_add_table(table_offsets, tables_blob);
-            build_slit(tables_blob, tables->linker);
+            acpi_builder_slit(ab, tables_blob, tables->linker);
         }
     }
     if (acpi_get_mcfg(&mcfg)) {
         acpi_add_table(table_offsets, tables_blob);
-        build_mcfg(tables_blob, tables->linker, &mcfg);
+        acpi_builder_mcfg(ab, tables_blob, tables->linker, &mcfg);
     }
     if (x86_iommu_get_default()) {
         IommuType IOMMUType = x86_iommu_get_type();
@@ -1795,7 +1799,7 @@ void acpi_build(AcpiBuildTables *tables,
                slic_oem.id, slic_oem.table_id);
 
     /* RSDP is in FSEG memory, so allocate it separately */
-    build_rsdp_rsdt(tables->rsdp, tables->linker, rsdt);
+    acpi_builder_rsdp(ab, tables->rsdp, tables->linker, rsdt);
 
     /* We'll expose it all to Guest so we want to reduce
      * chance of size changes.
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c9ffc8cff6..53a3036066 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -64,6 +64,7 @@
 #include "qemu/option.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu_hotplug.h"
+#include "hw/acpi/builder.h"
 #include "hw/boards.h"
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
@@ -75,6 +76,7 @@
 #include "hw/nmi.h"
 #include "hw/i386/intel_iommu.h"
 #include "hw/net/ne2000-isa.h"
+#include "hw/i386/acpi.h"
 
 /* debug PC/ISA interrupts */
 //#define DEBUG_IRQ
@@ -2404,12 +2406,20 @@ static void x86_nmi(NMIState *n, int cpu_index, Error **errp)
     }
 }
 
+static AcpiConfiguration *pc_acpi_configuration(AcpiBuilder *builder)
+{
+    PCMachineState *pcms = PC_MACHINE(builder);
+
+    return &pcms->acpi_configuration;
+}
+
 static void pc_machine_class_init(ObjectClass *oc, void *data)
 {
     MachineClass *mc = MACHINE_CLASS(oc);
     PCMachineClass *pcmc = PC_MACHINE_CLASS(oc);
     HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(oc);
     NMIClass *nc = NMI_CLASS(oc);
+    AcpiBuilderMethods *abm = ACPI_BUILDER_METHODS(oc);
 
     pcmc->pci_enabled = true;
     pcmc->has_acpi_build = true;
@@ -2444,6 +2454,14 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
     nc->nmi_monitor_handler = x86_nmi;
     mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
 
+    /* ACPI building methods */
+    abm->madt = build_madt;
+    abm->rsdp = build_rsdp_rsdt;
+    abm->mcfg = build_mcfg;
+    abm->srat = build_srat;
+    abm->slit = build_slit;
+    abm->configuration = pc_acpi_configuration;
+
     object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
         pc_machine_get_device_memory_region_size, NULL,
         NULL, NULL, &error_abort);
@@ -2495,6 +2513,7 @@ static const TypeInfo pc_machine_info = {
     .interfaces = (InterfaceInfo[]) {
          { TYPE_HOTPLUG_HANDLER },
          { TYPE_NMI },
+         { TYPE_ACPI_BUILDER },
          { }
     },
 };
-- 
2.19.1

* [Qemu-devel] [PATCH v5 22/24] hw: pci-host: piix: Return PCI host pointer instead of PCI bus
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

For building the MCFG table, we need to track a given machine
type's PCI host pointer, and we cannot get it from the PCI bus pointer
alone. As the piix initialization code currently returns a PCI bus
pointer, modify it to return a PCI host pointer instead.
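
To make the caller-side effect concrete, here is a simplified sketch of the
pc_init1() change in the hunk below (not additional code on top of it):

    struct PCIHostState *pci_host;
    PCIBus *pci_bus;

    pci_host = i440fx_init(host_type, pci_type,
                           &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
                           system_memory, system_io, machine->ram_size,
                           acpi_conf->below_4g_mem_size,
                           acpi_conf->above_4g_mem_size,
                           pci_memory, ram_memory);
    pci_bus = pci_host->bus;    /* the bus is still reachable as before */
    /* ...and pci_host itself can later be handed to the ACPI code for
     * MCFG generation (see the following patches). */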

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 include/hw/i386/pc.h | 21 +++++++++++----------
 hw/i386/pc_piix.c    | 18 +++++++++++-------
 hw/pci-host/piix.c   | 24 ++++++++++++------------
 3 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 8e5f1464eb..b6b79e146d 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -244,16 +244,17 @@ typedef struct PCII440FXState PCII440FXState;
  */
 #define RCR_IOPORT 0xcf9
 
-PCIBus *i440fx_init(const char *host_type, const char *pci_type,
-                    PCII440FXState **pi440fx_state, int *piix_devfn,
-                    ISABus **isa_bus, qemu_irq *pic,
-                    MemoryRegion *address_space_mem,
-                    MemoryRegion *address_space_io,
-                    ram_addr_t ram_size,
-                    ram_addr_t below_4g_mem_size,
-                    ram_addr_t above_4g_mem_size,
-                    MemoryRegion *pci_memory,
-                    MemoryRegion *ram_memory);
+struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
+                                 PCII440FXState **pi440fx_state,
+                                 int *piix_devfn,
+                                 ISABus **isa_bus, qemu_irq *pic,
+                                 MemoryRegion *address_space_mem,
+                                 MemoryRegion *address_space_io,
+                                 ram_addr_t ram_size,
+                                 ram_addr_t below_4g_mem_size,
+                                 ram_addr_t above_4g_mem_size,
+                                 MemoryRegion *pci_memory,
+                                 MemoryRegion *ram_memory);
 
 /* piix4.c */
 extern PCIDevice *piix4_dev;
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 0620d10715..f5b139a3eb 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -32,6 +32,7 @@
 #include "hw/display/ramfb.h"
 #include "hw/smbios/smbios.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_host.h"
 #include "hw/pci/pci_ids.h"
 #include "hw/usb.h"
 #include "net/net.h"
@@ -75,6 +76,7 @@ static void pc_init1(MachineState *machine,
     MemoryRegion *system_memory = get_system_memory();
     MemoryRegion *system_io = get_system_io();
     int i;
+    struct PCIHostState *pci_host;
     PCIBus *pci_bus;
     ISABus *isa_bus;
     PCII440FXState *i440fx_state;
@@ -196,15 +198,17 @@ static void pc_init1(MachineState *machine,
     }
 
     if (pcmc->pci_enabled) {
-        pci_bus = i440fx_init(host_type,
-                              pci_type,
-                              &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
-                              system_memory, system_io, machine->ram_size,
-                              acpi_conf->below_4g_mem_size,
-                              acpi_conf->above_4g_mem_size,
-                              pci_memory, ram_memory);
+        pci_host = i440fx_init(host_type,
+                               pci_type,
+                               &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
+                               system_memory, system_io, machine->ram_size,
+                               acpi_conf->below_4g_mem_size,
+                               acpi_conf->above_4g_mem_size,
+                               pci_memory, ram_memory);
+        pci_bus = pci_host->bus;
         pcms->bus = pci_bus;
     } else {
+        pci_host = NULL;
         pci_bus = NULL;
         i440fx_state = NULL;
         isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index 658460264b..4a412db44c 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -342,17 +342,17 @@ static void i440fx_realize(PCIDevice *dev, Error **errp)
     }
 }
 
-PCIBus *i440fx_init(const char *host_type, const char *pci_type,
-                    PCII440FXState **pi440fx_state,
-                    int *piix3_devfn,
-                    ISABus **isa_bus, qemu_irq *pic,
-                    MemoryRegion *address_space_mem,
-                    MemoryRegion *address_space_io,
-                    ram_addr_t ram_size,
-                    ram_addr_t below_4g_mem_size,
-                    ram_addr_t above_4g_mem_size,
-                    MemoryRegion *pci_address_space,
-                    MemoryRegion *ram_memory)
+struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
+                                 PCII440FXState **pi440fx_state,
+                                 int *piix3_devfn,
+                                 ISABus **isa_bus, qemu_irq *pic,
+                                 MemoryRegion *address_space_mem,
+                                 MemoryRegion *address_space_io,
+                                 ram_addr_t ram_size,
+                                 ram_addr_t below_4g_mem_size,
+                                 ram_addr_t above_4g_mem_size,
+                                 MemoryRegion *pci_address_space,
+                                 MemoryRegion *ram_memory)
 {
     DeviceState *dev;
     PCIBus *b;
@@ -442,7 +442,7 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
 
     i440fx_update_memory_mappings(f);
 
-    return b;
+    return s;
 }
 
 /* PIIX3 PCI to ISA bridge */
-- 
2.19.1

* [Qemu-devel] [PATCH v5 23/24] hw: i386: Set ACPI configuration PCI host pointer
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

For both the PC and Q35 machine types, we can set the ACPI configuration's
PCI host pointer at PCI host bridge creation time.

Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
---
 hw/i386/pc_piix.c | 1 +
 hw/i386/pc_q35.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index f5b139a3eb..f1f0de3585 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -216,6 +216,7 @@ static void pc_init1(MachineState *machine,
         no_hpet = 1;
     }
     isa_bus_irqs(isa_bus, pcms->gsi);
+    acpi_conf->pci_host = pci_host;
 
     if (kvm_pic_in_kernel()) {
         i8259 = kvm_i8259_init(isa_bus);
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index cdde4a4beb..a8772e29a5 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -188,6 +188,7 @@ static void pc_q35_init(MachineState *machine)
     qdev_init_nofail(DEVICE(q35_host));
     phb = PCI_HOST_BRIDGE(q35_host);
     host_bus = phb->bus;
+    acpi_conf->pci_host = phb;
     /* create ISA bus */
     lpc = pci_create_simple_multifunction(host_bus, PCI_DEVFN(ICH9_LPC_DEV,
                                           ICH9_LPC_FUNC), true,
-- 
2.19.1

* [Qemu-devel] [PATCH v5 24/24] hw: i386: Refactor PCI host getter
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-05  1:40   ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-05  1:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost, Yang Zhong

From: Yang Zhong <yang.zhong@intel.com>

Now that the ACPI builder methods are added, we can reach the ACPI
configuration pointer from the MachineState pointer. From there we can
get to the PCI host pointer and return it.

This makes the PCI host getter an architecture-agnostic ACPI function.
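
As an illustrative sketch of the consumer side (not part of this patch;
the "MCFG_BASE" property name is an assumption about how a caller might
query the host bridge object):

    Object *pci_host = acpi_get_pci_host();
    uint64_t mcfg_base = 0;

    if (pci_host) {
        /* e.g. read MCFG parameters directly off the host bridge object */
        mcfg_base = object_property_get_uint(pci_host, "MCFG_BASE", NULL);
    }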

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 hw/acpi/aml-build.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 6112cc2149..b532817fb5 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -22,6 +22,8 @@
 #include "qemu/osdep.h"
 #include <glib/gprintf.h>
 #include "hw/acpi/aml-build.h"
+#include "hw/acpi/builder.h"
+#include "hw/mem/memory-device.h"
 #include "qemu/bswap.h"
 #include "qemu/bitops.h"
 #include "sysemu/numa.h"
@@ -1617,23 +1619,15 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
     g_array_free(tables->vmgenid, mfre);
 }
 
-/*
- * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
- */
 Object *acpi_get_pci_host(void)
 {
-    PCIHostState *host;
+    MachineState *ms = MACHINE(qdev_get_machine());
+    AcpiBuilder *ab = ACPI_BUILDER(ms);
+    AcpiConfiguration *acpi_conf;
 
-    host = OBJECT_CHECK(PCIHostState,
-                        object_resolve_path("/machine/i440fx", NULL),
-                        TYPE_PCI_HOST_BRIDGE);
-    if (!host) {
-        host = OBJECT_CHECK(PCIHostState,
-                            object_resolve_path("/machine/q35", NULL),
-                            TYPE_PCI_HOST_BRIDGE);
-    }
+    acpi_conf = acpi_builder_configuration(ab);
 
-    return OBJECT(host);
+    return OBJECT(acpi_conf->pci_host);
 }
 
 
-- 
2.19.1

* Re: [Qemu-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-06 10:23     ` Paolo Bonzini
  -1 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-06 10:23 UTC (permalink / raw)
  To: Samuel Ortiz, qemu-devel
  Cc: Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

On 05/11/2018 02:40, Samuel Ortiz wrote:
>  /* RSDP */
> -static GArray *
> +static void
>  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
>  {
>      AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> @@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
>      bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
>          (char *)rsdp - rsdp_table->data, sizeof *rsdp,
>          (char *)&rsdp->checksum - rsdp_table->data);
> -
> -    return rsdp_table;
>  }
>  

Better than v4. :)

Paolo

* Re: [Qemu-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void
  2018-11-06 10:23     ` Paolo Bonzini
@ 2018-11-06 10:43       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-06 10:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel,
	Michael S. Tsirkin, Igor Mammedov, qemu-arm, Peter Maydell,
	Eduardo Habkost

On Tue, Nov 06, 2018 at 11:23:39AM +0100, Paolo Bonzini wrote:
> On 05/11/2018 02:40, Samuel Ortiz wrote:
> >  /* RSDP */
> > -static GArray *
> > +static void
> >  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
> >  {
> >      AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > @@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
> >      bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> >          (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> >          (char *)&rsdp->checksum - rsdp_table->data);
> > -
> > -    return rsdp_table;
> >  }
> >  
> 
> Better than v4. :)
Right, I followed Philippe's advice and it does make things clearer :)

Cheers,
Samuel.

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-08 14:16     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-08 14:16 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon,  5 Nov 2018 02:40:28 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> Description Table). The RSDT only allows for 32-bit addresses and has thus
> been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> no longer RSDTs, although RSDTs are still supported for backward
> compatibility.
> 
> Since version 2.0, RSDPs should add an extended checksum, a complete table
> length and a version field to the table.

This patch re-implements what the arm/virt board already does
and fixes a checksum bug in the latter, while at the same time
adding code without a user (within the patch).

I'd suggest redo it a way similar to FADT refactoring
  patch 1: fix checksum bug in virt/arm
  patch 2: update reference tables in test
  patch 3: introduce AcpiRsdpData similar to commit 937d1b587
             (both arm and x86) which stores all data in host byte order
  patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
           ... move out to aml-build.c
  patch 5: reuse the generalized arm build_rsdp() for x86, dropping the x86-specific one,
      amending it to generate the rev1 variant selected by the revision field in AcpiRsdpData
      (commit dd1b2037a)

  'make check V=1' shouldn't observe any ACPI tables changes after patch 2.
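
For readers following the checksum discussion, here is a reference sketch
of the ACPI 2.0+ RSDP layout that the quoted build_rsdp_xsdt() in the patch
below operates on (field names are illustrative, taken from the ACPI spec
rather than from this series; it is also roughly the data an AcpiRsdpData
struct would carry in host byte order). The legacy checksum covers only the
first 20 bytes, i.e. up to and including the RSDT address, while the
extended checksum covers the full 36-byte table:

    /* ACPI 2.0+ RSDP layout, for reference only. */
    struct rsdp_v2_sketch {
        char     signature[8];           /* "RSD PTR "                      */
        uint8_t  checksum;               /* covers bytes 0..19 (rev-1 part) */
        char     oem_id[6];
        uint8_t  revision;               /* 0 = ACPI 1.0, 2 = ACPI 2.0+     */
        uint32_t rsdt_physical_address;
        /* fields below are only valid for revision >= 2 */
        uint32_t length;                 /* full table length: 36 bytes     */
        uint64_t xsdt_physical_address;
        uint8_t  extended_checksum;      /* covers all 36 bytes             */
        uint8_t  reserved[3];
    } QEMU_PACKED;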

> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/acpi/aml-build.h |  3 +++
>  hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index c9bcb32d81..3580d0ce90 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -393,6 +393,9 @@ void
>  build_rsdp(GArray *table_data,
>             BIOSLinker *linker, unsigned rsdt_tbl_offset);
>  void
> +build_rsdp_xsdt(GArray *table_data,
> +                BIOSLinker *linker, unsigned xsdt_tbl_offset);
> +void
>  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
>             const char *oem_id, const char *oem_table_id);
>  void
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 51b608432f..a030d40674 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
>                   (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
>  }
>  
> +/* RSDP pointing at an XSDT */
> +void
> +build_rsdp_xsdt(GArray *rsdp_table,
> +                BIOSLinker *linker, unsigned xsdt_tbl_offset)
> +{
> +    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> +    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
> +    unsigned xsdt_pa_offset =
> +        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
> +    unsigned xsdt_offset =
> +        (char *)&rsdp->length - rsdp_table->data;
> +
> +    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
> +                             true /* fseg memory */);
> +
> +    memcpy(&rsdp->signature, "RSD PTR ", 8);
> +    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
> +    rsdp->length = cpu_to_le32(sizeof(*rsdp));
> +    /* version 2, we will use the XSDT pointer */
> +    rsdp->revision = 0x02;
> +
> +    /* Address to be filled by Guest linker */
> +    bios_linker_loader_add_pointer(linker,
> +        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
> +        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
> +
> +    /* Legacy checksum to be filled by Guest linker */
> +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> +        (char *)rsdp - rsdp_table->data, xsdt_offset,
> +        (char *)&rsdp->checksum - rsdp_table->data);
> +
> +    /* Extended checksum to be filled by Guest linker */
> +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> +        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> +        (char *)&rsdp->extended_checksum - rsdp_table->data);
> +}
> +
>  void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
>                         uint64_t len, int node, MemoryAffinityFlags flags)
>  {

* Re: [Qemu-devel] [PATCH v5 16/24] hw: acpi: Fix memory hotplug AML generation error
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-08 14:23     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-08 14:23 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost,
	Yang Zhong

On Mon,  5 Nov 2018 02:40:39 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> When using the generated memory hotplug AML, the iasl
> compiler would give the following error:
> 
> dsdt.dsl 266: Return (MOST (_UID, Arg0, Arg1, Arg2))
> Error 6080 - Called method returns no value ^
> 
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>

I suggest to put this patch at the beginning of the series
before reference tables in test are updated.

> ---
>  hw/acpi/memory_hotplug.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
> index db2c4df961..893fc2bd27 100644
> --- a/hw/acpi/memory_hotplug.c
> +++ b/hw/acpi/memory_hotplug.c
> @@ -686,15 +686,15 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
>  
>              method = aml_method("_OST", 3, AML_NOTSERIALIZED);
>              s = MEMORY_SLOT_OST_METHOD;
> -            aml_append(method, aml_return(aml_call4(
> -                s, aml_name("_UID"), aml_arg(0), aml_arg(1), aml_arg(2)
> -            )));
> +            aml_append(method,
> +                       aml_call4(s, aml_name("_UID"), aml_arg(0),
> +                                 aml_arg(1), aml_arg(2)));
>              aml_append(dev, method);
>  
>              method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
>              s = MEMORY_SLOT_EJECT_METHOD;
> -            aml_append(method, aml_return(aml_call2(
> -                       s, aml_name("_UID"), aml_arg(0))));
> +            aml_append(method,
> +                       aml_call2(s, aml_name("_UID"), aml_arg(0)));
>              aml_append(dev, method);
>  
>              aml_append(dev_container, dev);

* Re: [Qemu-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-08 14:24     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-08 14:24 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:26 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> For both x86 and ARM architectures, the internal RSDP build API can
> return void as the current return value is unused.
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  hw/arm/virt-acpi-build.c | 4 +---
>  hw/i386/acpi-build.c     | 4 +---
>  2 files changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index f28a2faa53..fc59cce769 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -367,7 +367,7 @@ static void acpi_dsdt_add_power_button(Aml *scope)
>  }
>  
>  /* RSDP */
> -static GArray *
> +static void
>  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
>  {
>      AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> @@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
>      bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
>          (char *)rsdp - rsdp_table->data, sizeof *rsdp,
>          (char *)&rsdp->checksum - rsdp_table->data);
> -
> -    return rsdp_table;
>  }
>  
>  static void
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 81d98fa34f..74419d0663 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -2513,7 +2513,7 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
>                   "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
>  }
>  
> -static GArray *
> +static void
>  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
>  {
>      AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> @@ -2535,8 +2535,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
>      bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
>          (char *)rsdp - rsdp_table->data, sizeof *rsdp,
>          (char *)&rsdp->checksum - rsdp_table->data);
> -
> -    return rsdp_table;
>  }
>  
>  static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-08 14:16     ` Igor Mammedov
@ 2018-11-08 14:36       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-08 14:36 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

Hi Igor,

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). The RSDT only allows 32-bit addresses and has thus
> > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs
> > and no longer at RSDTs, although RSDTs are still supported for backward
> > compatibility.
> > 
> > Since version 2.0, RSDPs should also carry an extended checksum, a
> > complete table length and a revision field.
> 
> This patch re-implements what the arm/virt board already does
> and at the same time fixes a checksum bug in the latter, without
> adding a user (within the patch).
> 
> I'd suggest redoing it in a way similar to the FADT refactoring:
>   patch 1: fix the checksum bug in arm/virt
>   patch 2: update the reference tables in the tests
I now see what you meant with the ACPI reference tables, thanks.
I'll follow your advice.

Cheers,
Samuel.
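
For readers following the commit message: both the RSDT and the XSDT are a
standard 36-byte ACPI table header followed by an array of physical
addresses of the other tables; the only difference is the entry width.
A minimal, self-contained sketch (the structures follow the ACPI spec
wording and are not the QEMU definitions):

    #include <stdint.h>
    #include <stdio.h>

    /* Common 36-byte header shared by all ACPI system description tables. */
    struct acpi_table_header {
        char     signature[4];     /* "RSDT" or "XSDT" */
        uint32_t length;           /* header + entries, in bytes */
        uint8_t  revision;
        uint8_t  checksum;         /* whole table sums to 0 mod 256 */
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    /* RSDT: 32-bit physical addresses, limited to the first 4 GiB. */
    struct rsdt {
        struct acpi_table_header header;
        uint32_t entry[];
    } __attribute__((packed));

    /* XSDT: same idea, but 64-bit physical addresses. */
    struct xsdt {
        struct acpi_table_header header;
        uint64_t entry[];
    } __attribute__((packed));

    int main(void)
    {
        /* The header is 36 bytes; a table pointing at N other tables is
         * 36 + 4*N bytes as an RSDT and 36 + 8*N bytes as an XSDT. */
        printf("header=%zu rsdt(4 entries)=%zu xsdt(4 entries)=%zu\n",
               sizeof(struct acpi_table_header),
               sizeof(struct rsdt) + 4 * sizeof(uint32_t),
               sizeof(struct xsdt) + 4 * sizeof(uint64_t));
        return 0;
    }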

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-08 14:16     ` Igor Mammedov
@ 2018-11-08 14:53       ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-08 14:53 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

On Thu, 8 Nov 2018 15:16:23 +0100
Igor Mammedov <imammedo@redhat.com> wrote:

[...]
>   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)

>            ... move out to aml-build.c
My mistake: generally, when we move something out, we should do it in a
separate patch, preferably without any changes to the moved code, so that
it is easier to review. So this should be a separate patch.

[...]

* Re: [Qemu-devel] [PATCH v5 13/24] hw: acpi: Do not create hotplug method when handler is not defined
  2018-11-05  1:40   ` Samuel Ortiz
  (?)
@ 2018-11-09  9:12   ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09  9:12 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:36 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> CPU and memory ACPI hotplug are not necessarily handled through SCI
> events. For example, with hardware-reduced ACPI, the GED device will
> manage ACPI hotplug entirely.
> As a consequence, we make the generation of the CPU and memory hotplug
> event handler AML optional. The code is only added when the event
> handler method name is not NULL.
This patch doesn't belong to this series; it should go along with the
GED device patch.

I suggest dropping it for now.

> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  hw/acpi/cpu.c            |  8 +++++---
>  hw/acpi/memory_hotplug.c | 11 +++++++----
>  2 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index f10b190019..cd41377b5a 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -569,9 +569,11 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>      aml_append(sb_scope, cpus_dev);
>      aml_append(table, sb_scope);
>  
> -    method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> -    aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
> -    aml_append(table, method);
> +    if (event_handler_method) {
> +        method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> +        aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
> +        aml_append(table, method);
> +    }
>  
>      g_free(cphp_res_path);
>  }
> diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
> index 8c7c1013f3..db2c4df961 100644
> --- a/hw/acpi/memory_hotplug.c
> +++ b/hw/acpi/memory_hotplug.c
> @@ -715,10 +715,13 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
>      }
>      aml_append(table, dev_container);
>  
> -    method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> -    aml_append(method,
> -        aml_call0(MEMORY_DEVICES_CONTAINER "." MEMORY_SLOT_SCAN_METHOD));
> -    aml_append(table, method);
> +    if (event_handler_method) {
> +        method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> +        aml_append(method,
> +                   aml_call0(MEMORY_DEVICES_CONTAINER "."
> +                             MEMORY_SLOT_SCAN_METHOD));
> +        aml_append(table, method);
> +    }
>  
>      g_free(mhp_res_path);
>  }

* Re: [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines
  2018-11-05  1:40   ` Samuel Ortiz
  (?)
@ 2018-11-09 13:37   ` Igor Mammedov
  2018-11-21 15:00       ` Samuel Ortiz
  -1 siblings, 1 reply; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09 13:37 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost,
	Yang Zhong

On Mon,  5 Nov 2018 02:40:30 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> Most of the AML build routines under acpi-build are not even
> architecture specific. They can be moved to the more generic hw/acpi
> folder where they could be shared across machine types and
> architectures.

I'd prefer that we don't pull PCI-specific headers into aml-build.
I suggest creating hw/acpi/pci.c and moving the generic PCI-related
code there, with a corresponding header that would export the API
(preferably without PCI dependencies in it).


Also, the patch is too big and does too much at once.
I'd suggest splitting it into smaller parts to make it more digestible:

1. split it into 3 parts:
    * MCFG
    * CRS
    * PRT
2. the MCFG code for x86 and ARM looks pretty much the same, with ARM
   open-coding the bus number calculation (see the ECAM sketch after
   this list) and missing the migration hack:
   * a patch to make the bus number calculation in ARM the same as on x86
   * a patch to bring in the migration hack (a dummy MCFG table in case
     ECAM is disabled). It's questionable whether we actually need it in
     generic code; we most likely need it for legacy machines that predate
     resizable MemoryRegions, but we probably don't need it for later
     machines, as the problem doesn't exist there. So it might be better
     to push the hack out of the generic code into a legacy caller and
     keep the generic MCFG code clean.
     (This patch might be better placed at the beginning of the series,
      as it might affect the ACPI test results and might need an update
      to the reference tables; I'm not really sure.)
   * at this point the ARM and x86 implementations would be the same, so
     a patch to move the MCFG build routine to a generic place and replace
     the x86/ARM versions with a single implementation
   * a patch to convert the MCFG build routine to the
     build_append_int_noprefix() API and drop the AcpiTableMcfg structure
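
To illustrate the bus-number arithmetic behind PCIE_MMCFG_BUS(mcfg_size - 1)
in the quoted build_mcfg() below: in ECAM each bus occupies
32 devices * 8 functions * 4 KiB = 1 MiB of the MMCONFIG window, so the
window size determines the last covered bus. A minimal, self-contained
sketch (the helper name and window sizes are examples, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* One PCI bus worth of ECAM space: 32 devices * 8 functions * 4 KiB. */
    #define ECAM_BYTES_PER_BUS (32u * 8u * 4096u)   /* 1 MiB */

    /* Bus number of the last byte covered by an MMCONFIG window of the
     * given size (bits 27:20 of an ECAM offset select the bus). */
    static uint8_t ecam_end_bus_number(uint64_t mcfg_size)
    {
        return (uint8_t)(((mcfg_size - 1) / ECAM_BYTES_PER_BUS) & 0xff);
    }

    int main(void)
    {
        /* e.g. a 256 MiB window covers buses 0..255, a 64 MiB one 0..63 */
        printf("256 MiB window -> end bus %u\n",
               ecam_end_bus_number(256ull << 20));
        printf(" 64 MiB window -> end bus %u\n",
               ecam_end_bus_number(64ull << 20));
        return 0;
    }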
     
 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> ---
>  include/hw/acpi/aml-build.h |  25 ++
>  hw/acpi/aml-build.c         | 498 ++++++++++++++++++++++++++++++++++
>  hw/arm/virt-acpi-build.c    |   4 +-
>  hw/i386/acpi-build.c        | 518 +-----------------------------------
>  4 files changed, 528 insertions(+), 517 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index a2ef8b6f31..4f678c45a5 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -3,6 +3,7 @@
>  
>  #include "hw/acpi/acpi-defs.h"
>  #include "hw/acpi/bios-linker-loader.h"
> +#include "hw/pci/pcie_host.h"
>  
>  /* Reserve RAM space for tables: add another order of magnitude. */
>  #define ACPI_BUILD_TABLE_MAX_SIZE         0x200000
> @@ -223,6 +224,21 @@ struct AcpiBuildTables {
>      BIOSLinker *linker;
>  } AcpiBuildTables;
>  
> +typedef struct AcpiMcfgInfo {
> +    uint64_t mcfg_base;
> +    uint32_t mcfg_size;
> +} AcpiMcfgInfo;
> +
> +typedef struct CrsRangeEntry {
> +    uint64_t base;
> +    uint64_t limit;
> +} CrsRangeEntry;
> +
> +typedef struct CrsRangeSet {
> +    GPtrArray *io_ranges;
> +    GPtrArray *mem_ranges;
> +    GPtrArray *mem_64bit_ranges;
> +} CrsRangeSet;
>  /**
I'd prefer not to put these into aml-build.h; it's mostly supposed to
host ACPI spec primitives, so I'd suggest moving these to acpi-defs.h or
to a PCI-specific ACPI header.

>   * init_aml_allocator:
>   *
> @@ -389,6 +405,15 @@ void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
>  void acpi_build_tables_init(AcpiBuildTables *tables);
>  void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
> +Aml *build_osc_method(void);
> +void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
> +Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
> +Aml *build_prt(bool is_pci0_prt);
> +void crs_range_set_init(CrsRangeSet *range_set);
> +Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
> +void crs_replace_with_free_ranges(GPtrArray *ranges,
> +                                  uint64_t start, uint64_t end);
> +void crs_range_set_free(CrsRangeSet *range_set);
>  void
>  build_rsdp_rsdt(GArray *table_data,
>                  BIOSLinker *linker, unsigned rsdt_tbl_offset);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 8c2388274c..d3242c6b31 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -25,6 +25,10 @@
>  #include "qemu/bswap.h"
>  #include "qemu/bitops.h"
>  #include "sysemu/numa.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
> +#include "qemu/range.h"
> +#include "hw/pci/pci_bridge.h"
>  
>  static GArray *build_alloc_array(void)
>  {
> @@ -1597,6 +1601,500 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
>      g_array_free(tables->vmgenid, mfre);
>  }
>  
> +static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
> +{
> +    CrsRangeEntry *entry;
> +
> +    entry = g_malloc(sizeof(*entry));
> +    entry->base = base;
> +    entry->limit = limit;
> +
> +    g_ptr_array_add(ranges, entry);
> +}
> +
> +static void crs_range_free(gpointer data)
> +{
> +    CrsRangeEntry *entry = (CrsRangeEntry *)data;
> +    g_free(entry);
> +}
> +
> +void crs_range_set_init(CrsRangeSet *range_set)
> +{
> +    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> +    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> +    range_set->mem_64bit_ranges =
> +            g_ptr_array_new_with_free_func(crs_range_free);
> +}
> +
> +void crs_range_set_free(CrsRangeSet *range_set)
> +{
> +    g_ptr_array_free(range_set->io_ranges, true);
> +    g_ptr_array_free(range_set->mem_ranges, true);
> +    g_ptr_array_free(range_set->mem_64bit_ranges, true);
> +}
> +
> +static gint crs_range_compare(gconstpointer a, gconstpointer b)
> +{
> +     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
> +     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
> +
> +     return (int64_t)entry_a->base - (int64_t)entry_b->base;
> +}
> +
> +/*
> + * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
> + * interval, computes the 'free' ranges from the same interval.
> + * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
> + * will return { [base - a1], [a2 - b1], [b2 - limit] }.
> + */
> +void crs_replace_with_free_ranges(GPtrArray *ranges,
> +                                         uint64_t start, uint64_t end)
> +{
> +    GPtrArray *free_ranges = g_ptr_array_new();
> +    uint64_t free_base = start;
> +    int i;
> +
> +    g_ptr_array_sort(ranges, crs_range_compare);
> +    for (i = 0; i < ranges->len; i++) {
> +        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
> +
> +        if (free_base < used->base) {
> +            crs_range_insert(free_ranges, free_base, used->base - 1);
> +        }
> +
> +        free_base = used->limit + 1;
> +    }
> +
> +    if (free_base < end) {
> +        crs_range_insert(free_ranges, free_base, end);
> +    }
> +
> +    g_ptr_array_set_size(ranges, 0);
> +    for (i = 0; i < free_ranges->len; i++) {
> +        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
> +    }
> +
> +    g_ptr_array_free(free_ranges, true);
> +}
> +
> +/*
> + * crs_range_merge - merges adjacent ranges in the given array.
> + * Array elements are deleted and replaced with the merged ranges.
> + */
> +static void crs_range_merge(GPtrArray *range)
> +{
> +    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
> +    CrsRangeEntry *entry;
> +    uint64_t range_base, range_limit;
> +    int i;
> +
> +    if (!range->len) {
> +        return;
> +    }
> +
> +    g_ptr_array_sort(range, crs_range_compare);
> +
> +    entry = g_ptr_array_index(range, 0);
> +    range_base = entry->base;
> +    range_limit = entry->limit;
> +    for (i = 1; i < range->len; i++) {
> +        entry = g_ptr_array_index(range, i);
> +        if (entry->base - 1 == range_limit) {
> +            range_limit = entry->limit;
> +        } else {
> +            crs_range_insert(tmp, range_base, range_limit);
> +            range_base = entry->base;
> +            range_limit = entry->limit;
> +        }
> +    }
> +    crs_range_insert(tmp, range_base, range_limit);
> +
> +    g_ptr_array_set_size(range, 0);
> +    for (i = 0; i < tmp->len; i++) {
> +        entry = g_ptr_array_index(tmp, i);
> +        crs_range_insert(range, entry->base, entry->limit);
> +    }
> +    g_ptr_array_free(tmp, true);
> +}
> +
> +Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
> +{
> +    Aml *crs = aml_resource_template();
> +    CrsRangeSet temp_range_set;
> +    CrsRangeEntry *entry;
> +    uint8_t max_bus = pci_bus_num(host->bus);
> +    uint8_t type;
> +    int devfn;
> +    int i;
> +
> +    crs_range_set_init(&temp_range_set);
> +    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
> +        uint64_t range_base, range_limit;
> +        PCIDevice *dev = host->bus->devices[devfn];
> +
> +        if (!dev) {
> +            continue;
> +        }
> +
> +        for (i = 0; i < PCI_NUM_REGIONS; i++) {
> +            PCIIORegion *r = &dev->io_regions[i];
> +
> +            range_base = r->addr;
> +            range_limit = r->addr + r->size - 1;
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (!range_base || range_base > range_limit) {
> +                continue;
> +            }
> +
> +            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
> +                crs_range_insert(temp_range_set.io_ranges,
> +                                 range_base, range_limit);
> +            } else { /* "memory" */
> +                crs_range_insert(temp_range_set.mem_ranges,
> +                                 range_base, range_limit);
> +            }
> +        }
> +
> +        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
> +        if (type == PCI_HEADER_TYPE_BRIDGE) {
> +            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
> +            if (subordinate > max_bus) {
> +                max_bus = subordinate;
> +            }
> +
> +            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
> +            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                crs_range_insert(temp_range_set.io_ranges,
> +                                 range_base, range_limit);
> +            }
> +
> +            range_base =
> +                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> +            range_limit =
> +                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                uint64_t length = range_limit - range_base + 1;
> +                if (range_limit <= UINT32_MAX && length <= UINT32_MAX)  {
> +                    crs_range_insert(temp_range_set.mem_ranges,
> +                                     range_base, range_limit);
> +                } else {
> +                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> +                                     range_base, range_limit);
> +                }
> +            }
> +
> +            range_base =
> +                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> +            range_limit =
> +                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                uint64_t length = range_limit - range_base + 1;
> +                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> +                    crs_range_insert(temp_range_set.mem_ranges,
> +                                     range_base, range_limit);
> +                } else {
> +                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> +                                     range_base, range_limit);
> +                }
> +            }
> +        }
> +    }
> +
> +    crs_range_merge(temp_range_set.io_ranges);
> +    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
> +        aml_append(crs,
> +                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                               AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                               0, entry->base, entry->limit, 0,
> +                               entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
> +    }
> +
> +    crs_range_merge(temp_range_set.mem_ranges);
> +    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
> +        aml_append(crs,
> +                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> +                                    AML_READ_WRITE,
> +                                    0, entry->base, entry->limit, 0,
> +                                    entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
> +    }
> +
> +    crs_range_merge(temp_range_set.mem_64bit_ranges);
> +    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
> +        aml_append(crs,
> +                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> +                                    AML_READ_WRITE,
> +                                    0, entry->base, entry->limit, 0,
> +                                    entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->mem_64bit_ranges,
> +                         entry->base, entry->limit);
> +    }
> +
> +    crs_range_set_free(&temp_range_set);
> +
> +    aml_append(crs,
> +        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> +                            0,
> +                            pci_bus_num(host->bus),
> +                            max_bus,
> +                            0,
> +                            max_bus - pci_bus_num(host->bus) + 1));
> +
> +    return crs;
> +}
> +
> +Aml *build_osc_method(void)
> +{
> +    Aml *if_ctx;
> +    Aml *if_ctx2;
> +    Aml *else_ctx;
> +    Aml *method;
> +    Aml *a_cwd1 = aml_name("CDW1");
> +    Aml *a_ctrl = aml_local(0);
> +
> +    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> +    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> +
> +    if_ctx = aml_if(aml_equal(
> +        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
> +    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> +    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> +
> +    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
> +
> +    /*
> +     * Always allow native PME, AER (no dependencies)
> +     * Allow SHPC (PCI bridges can have SHPC controller)
> +     */
> +    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
> +
> +    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
> +    /* Unknown revision */
> +    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
> +    aml_append(if_ctx, if_ctx2);
> +
> +    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> +    /* Capabilities bits were masked */
> +    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
> +    aml_append(if_ctx, if_ctx2);
> +
> +    /* Update DWORD3 in the buffer */
> +    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
> +    aml_append(method, if_ctx);
> +
> +    else_ctx = aml_else();
> +    /* Unrecognized UUID */
> +    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
> +    aml_append(method, else_ctx);
> +
> +    aml_append(method, aml_return(aml_arg(3)));
> +    return method;
> +}
> +
> +void
> +build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
> +{
> +    AcpiTableMcfg *mcfg;
> +    const char *sig;
> +    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
> +
> +    mcfg = acpi_data_push(table_data, len);
> +    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
> +    /* Only a single allocation so no need to play with segments */
> +    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
> +    mcfg->allocation[0].start_bus_number = 0;
> +    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
> +
> +    /* MCFG is used for ECAM which can be enabled or disabled by guest.
> +     * To avoid table size changes (which create migration issues),
> +     * always create the table even if there are no allocations,
> +     * but set the signature to a reserved value in this case.
> +     * ACPI spec requires OSPMs to ignore such tables.
> +     */
> +    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
> +        /* Reserved signature: ignored by OSPM */
> +        sig = "QEMU";
> +    } else {
> +        sig = "MCFG";
> +    }
> +    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
> +}
> +
> +Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
> +{
> +    Aml *dev;
> +    Aml *crs;
> +    Aml *method;
> +    uint32_t irqs;
> +
> +    dev = aml_device("%s", name);
> +    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
> +    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
> +
> +    crs = aml_resource_template();
> +    irqs = gsi;
> +    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
> +                                  AML_SHARED, &irqs, 1));
> +    aml_append(dev, aml_name_decl("_PRS", crs));
> +
> +    aml_append(dev, aml_name_decl("_CRS", crs));
> +
> +    /*
> +     * _DIS can be no-op because the interrupt cannot be disabled.
> +     */
> +    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
> +    aml_append(dev, method);
> +
> +    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
> +    aml_append(dev, method);
> +
> +    return dev;
> +}
> +
> +/**
> + * build_prt_entry:
> + * @link_name: link name for PCI route entry
> + *
> + * build AML package containing a PCI route entry for @link_name
> + */
> +static Aml *build_prt_entry(const char *link_name)
> +{
> +    Aml *a_zero = aml_int(0);
> +    Aml *pkg = aml_package(4);
> +    aml_append(pkg, a_zero);
> +    aml_append(pkg, a_zero);
> +    aml_append(pkg, aml_name("%s", link_name));
> +    aml_append(pkg, a_zero);
> +    return pkg;
> +}
> +
> +/*
> + * initialize_route - Initialize the interrupt routing rule
> + * through a specific LINK:
> + *  if (lnk_idx == idx)
> + *      route using link 'link_name'
> + */
> +static Aml *initialize_route(Aml *route, const char *link_name,
> +                             Aml *lnk_idx, int idx)
> +{
> +    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
> +    Aml *pkg = build_prt_entry(link_name);
> +
> +    aml_append(if_ctx, aml_store(pkg, route));
> +
> +    return if_ctx;
> +}
> +
> +/*
> + * build_prt - Define interrupt rounting rules
> + *
> + * Returns an array of 128 routes, one for each device,
> + * based on device location.
> + * The main goal is to equaly distribute the interrupts
> + * over the 4 existing ACPI links (works only for i440fx).
> + * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
> + *
> + */
> +Aml *build_prt(bool is_pci0_prt)
> +{
> +    Aml *method, *while_ctx, *pin, *res;
> +
> +    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
> +    res = aml_local(0);
> +    pin = aml_local(1);
> +    aml_append(method, aml_store(aml_package(128), res));
> +    aml_append(method, aml_store(aml_int(0), pin));
> +
> +    /* while (pin < 128) */
> +    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
> +    {
> +        Aml *slot = aml_local(2);
> +        Aml *lnk_idx = aml_local(3);
> +        Aml *route = aml_local(4);
> +
> +        /* slot = pin >> 2 */
> +        aml_append(while_ctx,
> +                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
> +        /* lnk_idx = (slot + pin) & 3 */
> +        aml_append(while_ctx,
> +            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
> +                      lnk_idx));
> +
> +        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
> +        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
> +        if (is_pci0_prt) {
> +            Aml *if_device_1, *if_pin_4, *else_pin_4;
> +
> +            /* device 1 is the power-management device, needs SCI */
> +            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
> +            {
> +                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
> +                {
> +                    aml_append(if_pin_4,
> +                        aml_store(build_prt_entry("LNKS"), route));
> +                }
> +                aml_append(if_device_1, if_pin_4);
> +                else_pin_4 = aml_else();
> +                {
> +                    aml_append(else_pin_4,
> +                        aml_store(build_prt_entry("LNKA"), route));
> +                }
> +                aml_append(if_device_1, else_pin_4);
> +            }
> +            aml_append(while_ctx, if_device_1);
> +        } else {
> +            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
> +        }
> +        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
> +        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
> +
> +        /* route[0] = 0x[slot]FFFF */
> +        aml_append(while_ctx,
> +            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
> +                             NULL),
> +                      aml_index(route, aml_int(0))));
> +        /* route[1] = pin & 3 */
> +        aml_append(while_ctx,
> +            aml_store(aml_and(pin, aml_int(3), NULL),
> +                      aml_index(route, aml_int(1))));
> +        /* res[pin] = route */
> +        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
> +        /* pin++ */
> +        aml_append(while_ctx, aml_increment(pin));
> +    }
> +    aml_append(method, while_ctx);
> +    /* return res*/
> +    aml_append(method, aml_return(res));
> +
> +    return method;
> +}
> +
>  /* Build rsdt table */
>  void
>  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index 261363e20c..c9b4916ba7 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -545,7 +545,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  }
>  
>  static void
> -build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> +virt_build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  {
>      AcpiTableMcfg *mcfg;
>      const MemMapEntry *memmap = vms->memmap;
> @@ -790,7 +790,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
>      build_gtdt(tables_blob, tables->linker, vms);
>  
>      acpi_add_table(table_offsets, tables_blob);
> -    build_mcfg(tables_blob, tables->linker, vms);
> +    virt_build_mcfg(tables_blob, tables->linker, vms);
>  
>      acpi_add_table(table_offsets, tables_blob);
>      build_spcr(tables_blob, tables->linker, vms);
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index cfc2444d0d..996d8a11dc 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -27,7 +27,6 @@
>  #include "qemu-common.h"
>  #include "qemu/bitmap.h"
>  #include "qemu/error-report.h"
> -#include "hw/pci/pci.h"
>  #include "qom/cpu.h"
>  #include "target/i386/cpu.h"
>  #include "hw/misc/pvpanic.h"
> @@ -53,7 +52,6 @@
>  #include "hw/acpi/piix4.h"
>  #include "hw/acpi/pcihp.h"
>  #include "hw/i386/ich9.h"
> -#include "hw/pci/pci_bus.h"
>  #include "hw/pci-host/q35.h"
>  #include "hw/i386/x86-iommu.h"
>  
> @@ -86,11 +84,6 @@
>  /* Default IOAPIC ID */
>  #define ACPI_BUILD_IOAPIC_ID 0x0
>  
> -typedef struct AcpiMcfgInfo {
> -    uint64_t mcfg_base;
> -    uint32_t mcfg_size;
> -} AcpiMcfgInfo;
> -
>  typedef struct AcpiPmInfo {
>      bool s3_disabled;
>      bool s4_disabled;
> @@ -567,403 +560,6 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
>      qobject_unref(bsel);
>  }
>  
> -/**
> - * build_prt_entry:
> - * @link_name: link name for PCI route entry
> - *
> - * build AML package containing a PCI route entry for @link_name
> - */
> -static Aml *build_prt_entry(const char *link_name)
> -{
> -    Aml *a_zero = aml_int(0);
> -    Aml *pkg = aml_package(4);
> -    aml_append(pkg, a_zero);
> -    aml_append(pkg, a_zero);
> -    aml_append(pkg, aml_name("%s", link_name));
> -    aml_append(pkg, a_zero);
> -    return pkg;
> -}
> -
> -/*
> - * initialize_route - Initialize the interrupt routing rule
> - * through a specific LINK:
> - *  if (lnk_idx == idx)
> - *      route using link 'link_name'
> - */
> -static Aml *initialize_route(Aml *route, const char *link_name,
> -                             Aml *lnk_idx, int idx)
> -{
> -    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
> -    Aml *pkg = build_prt_entry(link_name);
> -
> -    aml_append(if_ctx, aml_store(pkg, route));
> -
> -    return if_ctx;
> -}
> -
> -/*
> - * build_prt - Define interrupt rounting rules
> - *
> - * Returns an array of 128 routes, one for each device,
> - * based on device location.
> - * The main goal is to equaly distribute the interrupts
> - * over the 4 existing ACPI links (works only for i440fx).
> - * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
> - *
> - */
> -static Aml *build_prt(bool is_pci0_prt)
> -{
> -    Aml *method, *while_ctx, *pin, *res;
> -
> -    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
> -    res = aml_local(0);
> -    pin = aml_local(1);
> -    aml_append(method, aml_store(aml_package(128), res));
> -    aml_append(method, aml_store(aml_int(0), pin));
> -
> -    /* while (pin < 128) */
> -    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
> -    {
> -        Aml *slot = aml_local(2);
> -        Aml *lnk_idx = aml_local(3);
> -        Aml *route = aml_local(4);
> -
> -        /* slot = pin >> 2 */
> -        aml_append(while_ctx,
> -                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
> -        /* lnk_idx = (slot + pin) & 3 */
> -        aml_append(while_ctx,
> -            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
> -                      lnk_idx));
> -
> -        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
> -        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
> -        if (is_pci0_prt) {
> -            Aml *if_device_1, *if_pin_4, *else_pin_4;
> -
> -            /* device 1 is the power-management device, needs SCI */
> -            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
> -            {
> -                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
> -                {
> -                    aml_append(if_pin_4,
> -                        aml_store(build_prt_entry("LNKS"), route));
> -                }
> -                aml_append(if_device_1, if_pin_4);
> -                else_pin_4 = aml_else();
> -                {
> -                    aml_append(else_pin_4,
> -                        aml_store(build_prt_entry("LNKA"), route));
> -                }
> -                aml_append(if_device_1, else_pin_4);
> -            }
> -            aml_append(while_ctx, if_device_1);
> -        } else {
> -            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
> -        }
> -        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
> -        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
> -
> -        /* route[0] = 0x[slot]FFFF */
> -        aml_append(while_ctx,
> -            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
> -                             NULL),
> -                      aml_index(route, aml_int(0))));
> -        /* route[1] = pin & 3 */
> -        aml_append(while_ctx,
> -            aml_store(aml_and(pin, aml_int(3), NULL),
> -                      aml_index(route, aml_int(1))));
> -        /* res[pin] = route */
> -        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
> -        /* pin++ */
> -        aml_append(while_ctx, aml_increment(pin));
> -    }
> -    aml_append(method, while_ctx);
> -    /* return res*/
> -    aml_append(method, aml_return(res));
> -
> -    return method;
> -}
> -
> -typedef struct CrsRangeEntry {
> -    uint64_t base;
> -    uint64_t limit;
> -} CrsRangeEntry;
> -
> -static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
> -{
> -    CrsRangeEntry *entry;
> -
> -    entry = g_malloc(sizeof(*entry));
> -    entry->base = base;
> -    entry->limit = limit;
> -
> -    g_ptr_array_add(ranges, entry);
> -}
> -
> -static void crs_range_free(gpointer data)
> -{
> -    CrsRangeEntry *entry = (CrsRangeEntry *)data;
> -    g_free(entry);
> -}
> -
> -typedef struct CrsRangeSet {
> -    GPtrArray *io_ranges;
> -    GPtrArray *mem_ranges;
> -    GPtrArray *mem_64bit_ranges;
> - } CrsRangeSet;
> -
> -static void crs_range_set_init(CrsRangeSet *range_set)
> -{
> -    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> -    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> -    range_set->mem_64bit_ranges =
> -            g_ptr_array_new_with_free_func(crs_range_free);
> -}
> -
> -static void crs_range_set_free(CrsRangeSet *range_set)
> -{
> -    g_ptr_array_free(range_set->io_ranges, true);
> -    g_ptr_array_free(range_set->mem_ranges, true);
> -    g_ptr_array_free(range_set->mem_64bit_ranges, true);
> -}
> -
> -static gint crs_range_compare(gconstpointer a, gconstpointer b)
> -{
> -     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
> -     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
> -
> -     return (int64_t)entry_a->base - (int64_t)entry_b->base;
> -}
> -
> -/*
> - * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
> - * interval, computes the 'free' ranges from the same interval.
> - * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
> - * will return { [base - a1], [a2 - b1], [b2 - limit] }.
> - */
> -static void crs_replace_with_free_ranges(GPtrArray *ranges,
> -                                         uint64_t start, uint64_t end)
> -{
> -    GPtrArray *free_ranges = g_ptr_array_new();
> -    uint64_t free_base = start;
> -    int i;
> -
> -    g_ptr_array_sort(ranges, crs_range_compare);
> -    for (i = 0; i < ranges->len; i++) {
> -        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
> -
> -        if (free_base < used->base) {
> -            crs_range_insert(free_ranges, free_base, used->base - 1);
> -        }
> -
> -        free_base = used->limit + 1;
> -    }
> -
> -    if (free_base < end) {
> -        crs_range_insert(free_ranges, free_base, end);
> -    }
> -
> -    g_ptr_array_set_size(ranges, 0);
> -    for (i = 0; i < free_ranges->len; i++) {
> -        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
> -    }
> -
> -    g_ptr_array_free(free_ranges, true);
> -}
> -
> -/*
> - * crs_range_merge - merges adjacent ranges in the given array.
> - * Array elements are deleted and replaced with the merged ranges.
> - */
> -static void crs_range_merge(GPtrArray *range)
> -{
> -    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
> -    CrsRangeEntry *entry;
> -    uint64_t range_base, range_limit;
> -    int i;
> -
> -    if (!range->len) {
> -        return;
> -    }
> -
> -    g_ptr_array_sort(range, crs_range_compare);
> -
> -    entry = g_ptr_array_index(range, 0);
> -    range_base = entry->base;
> -    range_limit = entry->limit;
> -    for (i = 1; i < range->len; i++) {
> -        entry = g_ptr_array_index(range, i);
> -        if (entry->base - 1 == range_limit) {
> -            range_limit = entry->limit;
> -        } else {
> -            crs_range_insert(tmp, range_base, range_limit);
> -            range_base = entry->base;
> -            range_limit = entry->limit;
> -        }
> -    }
> -    crs_range_insert(tmp, range_base, range_limit);
> -
> -    g_ptr_array_set_size(range, 0);
> -    for (i = 0; i < tmp->len; i++) {
> -        entry = g_ptr_array_index(tmp, i);
> -        crs_range_insert(range, entry->base, entry->limit);
> -    }
> -    g_ptr_array_free(tmp, true);
> -}
> -
> -static Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
> -{
> -    Aml *crs = aml_resource_template();
> -    CrsRangeSet temp_range_set;
> -    CrsRangeEntry *entry;
> -    uint8_t max_bus = pci_bus_num(host->bus);
> -    uint8_t type;
> -    int devfn;
> -    int i;
> -
> -    crs_range_set_init(&temp_range_set);
> -    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
> -        uint64_t range_base, range_limit;
> -        PCIDevice *dev = host->bus->devices[devfn];
> -
> -        if (!dev) {
> -            continue;
> -        }
> -
> -        for (i = 0; i < PCI_NUM_REGIONS; i++) {
> -            PCIIORegion *r = &dev->io_regions[i];
> -
> -            range_base = r->addr;
> -            range_limit = r->addr + r->size - 1;
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (!range_base || range_base > range_limit) {
> -                continue;
> -            }
> -
> -            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
> -                crs_range_insert(temp_range_set.io_ranges,
> -                                 range_base, range_limit);
> -            } else { /* "memory" */
> -                crs_range_insert(temp_range_set.mem_ranges,
> -                                 range_base, range_limit);
> -            }
> -        }
> -
> -        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
> -        if (type == PCI_HEADER_TYPE_BRIDGE) {
> -            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
> -            if (subordinate > max_bus) {
> -                max_bus = subordinate;
> -            }
> -
> -            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
> -            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                crs_range_insert(temp_range_set.io_ranges,
> -                                 range_base, range_limit);
> -            }
> -
> -            range_base =
> -                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> -            range_limit =
> -                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                uint64_t length = range_limit - range_base + 1;
> -                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> -                    crs_range_insert(temp_range_set.mem_ranges,
> -                                     range_base, range_limit);
> -                } else {
> -                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> -                                     range_base, range_limit);
> -                }
> -            }
> -
> -            range_base =
> -                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> -            range_limit =
> -                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                uint64_t length = range_limit - range_base + 1;
> -                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> -                    crs_range_insert(temp_range_set.mem_ranges,
> -                                     range_base, range_limit);
> -                } else {
> -                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> -                                     range_base, range_limit);
> -                }
> -            }
> -        }
> -    }
> -
> -    crs_range_merge(temp_range_set.io_ranges);
> -    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
> -        aml_append(crs,
> -                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                               AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                               0, entry->base, entry->limit, 0,
> -                               entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
> -    }
> -
> -    crs_range_merge(temp_range_set.mem_ranges);
> -    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
> -        aml_append(crs,
> -                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> -                                    AML_READ_WRITE,
> -                                    0, entry->base, entry->limit, 0,
> -                                    entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
> -    }
> -
> -    crs_range_merge(temp_range_set.mem_64bit_ranges);
> -    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
> -        aml_append(crs,
> -                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> -                                    AML_READ_WRITE,
> -                                    0, entry->base, entry->limit, 0,
> -                                    entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->mem_64bit_ranges,
> -                         entry->base, entry->limit);
> -    }
> -
> -    crs_range_set_free(&temp_range_set);
> -
> -    aml_append(crs,
> -        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> -                            0,
> -                            pci_bus_num(host->bus),
> -                            max_bus,
> -                            0,
> -                            max_bus - pci_bus_num(host->bus) + 1));
> -
> -    return crs;
> -}
> -
>  static void build_hpet_aml(Aml *table)
>  {
>      Aml *crs;
> @@ -1334,37 +930,6 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
>      return dev;
>   }
>  
> -static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
> -{
> -    Aml *dev;
> -    Aml *crs;
> -    Aml *method;
> -    uint32_t irqs;
> -
> -    dev = aml_device("%s", name);
> -    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
> -    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
> -
> -    crs = aml_resource_template();
> -    irqs = gsi;
> -    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
> -                                  AML_SHARED, &irqs, 1));
> -    aml_append(dev, aml_name_decl("_PRS", crs));
> -
> -    aml_append(dev, aml_name_decl("_CRS", crs));
> -
> -    /*
> -     * _DIS can be no-op because the interrupt cannot be disabled.
> -     */
> -    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
> -    aml_append(dev, method);
> -
> -    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
> -    aml_append(dev, method);
> -
> -    return dev;
> -}
> -
>  /* _CRS method - get current settings */
>  static Aml *build_iqcr_method(bool is_piix4)
>  {
> @@ -1728,54 +1293,6 @@ static void build_piix4_pci_hotplug(Aml *table)
>      aml_append(table, scope);
>  }
>  
> -static Aml *build_q35_osc_method(void)
> -{
> -    Aml *if_ctx;
> -    Aml *if_ctx2;
> -    Aml *else_ctx;
> -    Aml *method;
> -    Aml *a_cwd1 = aml_name("CDW1");
> -    Aml *a_ctrl = aml_local(0);
> -
> -    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> -    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> -
> -    if_ctx = aml_if(aml_equal(
> -        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
> -    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> -    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> -
> -    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
> -
> -    /*
> -     * Always allow native PME, AER (no dependencies)
> -     * Allow SHPC (PCI bridges can have SHPC controller)
> -     */
> -    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
> -
> -    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
> -    /* Unknown revision */
> -    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
> -    aml_append(if_ctx, if_ctx2);
> -
> -    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> -    /* Capabilities bits were masked */
> -    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
> -    aml_append(if_ctx, if_ctx2);
> -
> -    /* Update DWORD3 in the buffer */
> -    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
> -    aml_append(method, if_ctx);
> -
> -    else_ctx = aml_else();
> -    /* Unrecognized UUID */
> -    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
> -    aml_append(method, else_ctx);
> -
> -    aml_append(method, aml_return(aml_arg(3)));
> -    return method;
> -}
> -
>  static void
>  build_dsdt(GArray *table_data, BIOSLinker *linker,
>             AcpiPmInfo *pm, AcpiMiscInfo *misc,
> @@ -1818,7 +1335,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>          aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
>          aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
>          aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> -        aml_append(dev, build_q35_osc_method());
> +        aml_append(dev, build_osc_method());
>          aml_append(sb_scope, dev);
>          aml_append(dsdt, sb_scope);
>  
> @@ -1883,7 +1400,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>              aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
>              aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>              if (pci_bus_is_express(bus)) {
> -                aml_append(dev, build_q35_osc_method());
> +                aml_append(dev, build_osc_method());
>              }
>  
>              if (numa_node != NUMA_NODE_UNASSIGNED) {
> @@ -2370,35 +1887,6 @@ build_srat(GArray *table_data, BIOSLinker *linker,
>                   table_data->len - srat_start, 1, NULL, NULL);
>  }
>  
> -static void
> -build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
> -{
> -    AcpiTableMcfg *mcfg;
> -    const char *sig;
> -    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
> -
> -    mcfg = acpi_data_push(table_data, len);
> -    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
> -    /* Only a single allocation so no need to play with segments */
> -    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
> -    mcfg->allocation[0].start_bus_number = 0;
> -    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
> -
> -    /* MCFG is used for ECAM which can be enabled or disabled by guest.
> -     * To avoid table size changes (which create migration issues),
> -     * always create the table even if there are no allocations,
> -     * but set the signature to a reserved value in this case.
> -     * ACPI spec requires OSPMs to ignore such tables.
> -     */
> -    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
> -        /* Reserved signature: ignored by OSPM */
> -        sig = "QEMU";
> -    } else {
> -        sig = "MCFG";
> -    }
> -    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
> -}
> -
>  /*
>   * VT-d spec 8.1 DMA Remapping Reporting Structure
>   * (version Oct. 2014 or later)
> @@ -2626,7 +2114,7 @@ void acpi_build(AcpiBuildTables *tables,
>      }
>      if (acpi_get_mcfg(&mcfg)) {
>          acpi_add_table(table_offsets, tables_blob);
> -        build_mcfg_q35(tables_blob, tables->linker, &mcfg);
> +        build_mcfg(tables_blob, tables->linker, &mcfg);
>      }
>      if (x86_iommu_get_default()) {
>          IommuType IOMMUType = x86_iommu_get_type();

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 07/24] hw: acpi: Generalize AML build routines
  2018-11-05  1:40   ` Samuel Ortiz
  (?)
  (?)
@ 2018-11-09 13:37   ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09 13:37 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Mon,  5 Nov 2018 02:40:30 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> Most of the AML build routines under acpi-build are not even
> architecture specific. They can be moved to the more generic hw/acpi
> folder where they could be shared across machine types and
> architectures.

I'd prefer that we don't pull PCI-specific headers into aml-build.
I suggest creating hw/acpi/pci.c and moving the generic PCI-related
code there, with a corresponding header that would export the API
(preferably without PCI dependencies in it), something along the
lines of the sketch below.
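
Just to illustrate the MCFG slice of it (file name and contents are a
hypothetical sketch, not a final proposal):

    /* include/hw/acpi/pci.h (hypothetical) */
    #ifndef HW_ACPI_PCI_H
    #define HW_ACPI_PCI_H

    #include "hw/acpi/bios-linker-loader.h"

    typedef struct AcpiMcfgInfo {
        uint64_t mcfg_base;
        uint32_t mcfg_size;
    } AcpiMcfgInfo;

    void build_mcfg(GArray *table_data, BIOSLinker *linker,
                    AcpiMcfgInfo *info);

    #endif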


Also, the patch is too big and does too much at once.
I'd suggest splitting it into smaller parts to make it more digestible:

1. split it into 3 parts:
    * MCFG
    * CRS
    * PRT
2. the MCFG code for x86 and ARM looks pretty much the same, with ARM
   open-coding the bus number calculation and missing the migration hack:
   * a patch to make the bus number calculation on ARM the same as on x86
   * a patch to bring in the migration hack (dummy MCFG table in case ECAM
     is disabled). It's questionable whether we actually need it in generic
     code; we most likely need it for legacy machines that predate resizable
     MemoryRegions, but we probably don't need it for later machines, as the
     problem doesn't exist there. So it might be better to push the hack out
     of the generic code into a legacy caller and keep the generic MCFG clean.
     (This patch might fit better at the beginning of the series, as it might
      affect the ACPI test results and need an update to the reference tables;
      I'm not really sure.)
   * at this point the ARM and x86 implementations would be the same, so
     a patch to move the MCFG build routine to a generic place and replace
     the x86/ARM versions with a single implementation
   * a patch to convert the MCFG build routine to the
     build_append_int_noprefix() API and drop the AcpiTableMcfg structure
     (rough sketch below)
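
For that last step, an untested sketch of a build_append_int_noprefix()
based builder (field sizes follow the MCFG layout in the ACPI spec; the
dummy-signature migration hack is assumed to live in the legacy caller
instead of here):

    void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
    {
        int mcfg_start = table_data->len;

        /* Table header, patched by build_header() below */
        acpi_data_push(table_data, sizeof(AcpiTableHeader));

        /* Reserved */
        build_append_int_noprefix(table_data, 0, 8);

        /* Configuration Space Base Address Allocation Structure */
        build_append_int_noprefix(table_data, info->mcfg_base, 8); /* Base Address */
        build_append_int_noprefix(table_data, 0, 2); /* PCI Segment Group Number */
        build_append_int_noprefix(table_data, 0, 1); /* Start Bus Number */
        build_append_int_noprefix(table_data,
                                  PCIE_MMCFG_BUS(info->mcfg_size - 1),
                                  1); /* End Bus Number */
        build_append_int_noprefix(table_data, 0, 4); /* Reserved */

        build_header(linker, table_data,
                     (void *)(table_data->data + mcfg_start), "MCFG",
                     table_data->len - mcfg_start, 1, NULL, NULL);
    }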
     
 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> ---
>  include/hw/acpi/aml-build.h |  25 ++
>  hw/acpi/aml-build.c         | 498 ++++++++++++++++++++++++++++++++++
>  hw/arm/virt-acpi-build.c    |   4 +-
>  hw/i386/acpi-build.c        | 518 +-----------------------------------
>  4 files changed, 528 insertions(+), 517 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index a2ef8b6f31..4f678c45a5 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -3,6 +3,7 @@
>  
>  #include "hw/acpi/acpi-defs.h"
>  #include "hw/acpi/bios-linker-loader.h"
> +#include "hw/pci/pcie_host.h"
>  
>  /* Reserve RAM space for tables: add another order of magnitude. */
>  #define ACPI_BUILD_TABLE_MAX_SIZE         0x200000
> @@ -223,6 +224,21 @@ struct AcpiBuildTables {
>      BIOSLinker *linker;
>  } AcpiBuildTables;
>  
> +typedef struct AcpiMcfgInfo {
> +    uint64_t mcfg_base;
> +    uint32_t mcfg_size;
> +} AcpiMcfgInfo;
> +
> +typedef struct CrsRangeEntry {
> +    uint64_t base;
> +    uint64_t limit;
> +} CrsRangeEntry;
> +
> +typedef struct CrsRangeSet {
> +    GPtrArray *io_ranges;
> +    GPtrArray *mem_ranges;
> +    GPtrArray *mem_64bit_ranges;
> +} CrsRangeSet;
>  /**
I'd prefer not to put these into aml-build.h; it is supposed to mostly host
ACPI spec primitives. So I'd suggest moving them to acpi-defs.h or to a
PCI-specific ACPI header, for example as sketched below.
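
For illustration (hypothetical placement), such a header could carry the
CrsRange* typedefs from this hunk together with the prototypes, and include
pci_host.h itself so that aml-build.h doesn't have to:

    /* hypothetical continuation of include/hw/acpi/pci.h */
    #include "hw/pci/pci_host.h"

    /* CrsRangeEntry / CrsRangeSet typedefs from this patch would move here */

    void crs_range_set_init(CrsRangeSet *range_set);
    void crs_range_set_free(CrsRangeSet *range_set);
    void crs_replace_with_free_ranges(GPtrArray *ranges,
                                      uint64_t start, uint64_t end);
    Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
    Aml *build_prt(bool is_pci0_prt);
    Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);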

>   * init_aml_allocator:
>   *
> @@ -389,6 +405,15 @@ void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
>  void acpi_build_tables_init(AcpiBuildTables *tables);
>  void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
> +Aml *build_osc_method(void);
> +void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
> +Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
> +Aml *build_prt(bool is_pci0_prt);
> +void crs_range_set_init(CrsRangeSet *range_set);
> +Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
> +void crs_replace_with_free_ranges(GPtrArray *ranges,
> +                                  uint64_t start, uint64_t end);
> +void crs_range_set_free(CrsRangeSet *range_set);
>  void
>  build_rsdp_rsdt(GArray *table_data,
>                  BIOSLinker *linker, unsigned rsdt_tbl_offset);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 8c2388274c..d3242c6b31 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -25,6 +25,10 @@
>  #include "qemu/bswap.h"
>  #include "qemu/bitops.h"
>  #include "sysemu/numa.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
> +#include "qemu/range.h"
> +#include "hw/pci/pci_bridge.h"
>  
>  static GArray *build_alloc_array(void)
>  {
> @@ -1597,6 +1601,500 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
>      g_array_free(tables->vmgenid, mfre);
>  }
>  
> +static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
> +{
> +    CrsRangeEntry *entry;
> +
> +    entry = g_malloc(sizeof(*entry));
> +    entry->base = base;
> +    entry->limit = limit;
> +
> +    g_ptr_array_add(ranges, entry);
> +}
> +
> +static void crs_range_free(gpointer data)
> +{
> +    CrsRangeEntry *entry = (CrsRangeEntry *)data;
> +    g_free(entry);
> +}
> +
> +void crs_range_set_init(CrsRangeSet *range_set)
> +{
> +    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> +    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> +    range_set->mem_64bit_ranges =
> +            g_ptr_array_new_with_free_func(crs_range_free);
> +}
> +
> +void crs_range_set_free(CrsRangeSet *range_set)
> +{
> +    g_ptr_array_free(range_set->io_ranges, true);
> +    g_ptr_array_free(range_set->mem_ranges, true);
> +    g_ptr_array_free(range_set->mem_64bit_ranges, true);
> +}
> +
> +static gint crs_range_compare(gconstpointer a, gconstpointer b)
> +{
> +     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
> +     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
> +
> +     return (int64_t)entry_a->base - (int64_t)entry_b->base;
> +}
> +
> +/*
> + * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
> + * interval, computes the 'free' ranges from the same interval.
> + * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
> + * will return { [base - a1], [a2 - b1], [b2 - limit] }.
> + */
> +void crs_replace_with_free_ranges(GPtrArray *ranges,
> +                                         uint64_t start, uint64_t end)
> +{
> +    GPtrArray *free_ranges = g_ptr_array_new();
> +    uint64_t free_base = start;
> +    int i;
> +
> +    g_ptr_array_sort(ranges, crs_range_compare);
> +    for (i = 0; i < ranges->len; i++) {
> +        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
> +
> +        if (free_base < used->base) {
> +            crs_range_insert(free_ranges, free_base, used->base - 1);
> +        }
> +
> +        free_base = used->limit + 1;
> +    }
> +
> +    if (free_base < end) {
> +        crs_range_insert(free_ranges, free_base, end);
> +    }
> +
> +    g_ptr_array_set_size(ranges, 0);
> +    for (i = 0; i < free_ranges->len; i++) {
> +        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
> +    }
> +
> +    g_ptr_array_free(free_ranges, true);
> +}
> +
> +/*
> + * crs_range_merge - merges adjacent ranges in the given array.
> + * Array elements are deleted and replaced with the merged ranges.
> + */
> +static void crs_range_merge(GPtrArray *range)
> +{
> +    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
> +    CrsRangeEntry *entry;
> +    uint64_t range_base, range_limit;
> +    int i;
> +
> +    if (!range->len) {
> +        return;
> +    }
> +
> +    g_ptr_array_sort(range, crs_range_compare);
> +
> +    entry = g_ptr_array_index(range, 0);
> +    range_base = entry->base;
> +    range_limit = entry->limit;
> +    for (i = 1; i < range->len; i++) {
> +        entry = g_ptr_array_index(range, i);
> +        if (entry->base - 1 == range_limit) {
> +            range_limit = entry->limit;
> +        } else {
> +            crs_range_insert(tmp, range_base, range_limit);
> +            range_base = entry->base;
> +            range_limit = entry->limit;
> +        }
> +    }
> +    crs_range_insert(tmp, range_base, range_limit);
> +
> +    g_ptr_array_set_size(range, 0);
> +    for (i = 0; i < tmp->len; i++) {
> +        entry = g_ptr_array_index(tmp, i);
> +        crs_range_insert(range, entry->base, entry->limit);
> +    }
> +    g_ptr_array_free(tmp, true);
> +}
> +
> +Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
> +{
> +    Aml *crs = aml_resource_template();
> +    CrsRangeSet temp_range_set;
> +    CrsRangeEntry *entry;
> +    uint8_t max_bus = pci_bus_num(host->bus);
> +    uint8_t type;
> +    int devfn;
> +    int i;
> +
> +    crs_range_set_init(&temp_range_set);
> +    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
> +        uint64_t range_base, range_limit;
> +        PCIDevice *dev = host->bus->devices[devfn];
> +
> +        if (!dev) {
> +            continue;
> +        }
> +
> +        for (i = 0; i < PCI_NUM_REGIONS; i++) {
> +            PCIIORegion *r = &dev->io_regions[i];
> +
> +            range_base = r->addr;
> +            range_limit = r->addr + r->size - 1;
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (!range_base || range_base > range_limit) {
> +                continue;
> +            }
> +
> +            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
> +                crs_range_insert(temp_range_set.io_ranges,
> +                                 range_base, range_limit);
> +            } else { /* "memory" */
> +                crs_range_insert(temp_range_set.mem_ranges,
> +                                 range_base, range_limit);
> +            }
> +        }
> +
> +        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
> +        if (type == PCI_HEADER_TYPE_BRIDGE) {
> +            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
> +            if (subordinate > max_bus) {
> +                max_bus = subordinate;
> +            }
> +
> +            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
> +            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                crs_range_insert(temp_range_set.io_ranges,
> +                                 range_base, range_limit);
> +            }
> +
> +            range_base =
> +                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> +            range_limit =
> +                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                uint64_t length = range_limit - range_base + 1;
> +                if (range_limit <= UINT32_MAX && length <= UINT32_MAX)  {
> +                    crs_range_insert(temp_range_set.mem_ranges,
> +                                     range_base, range_limit);
> +                } else {
> +                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> +                                     range_base, range_limit);
> +                }
> +            }
> +
> +            range_base =
> +                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> +            range_limit =
> +                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> +
> +            /*
> +             * Work-around for old bioses
> +             * that do not support multiple root buses
> +             */
> +            if (range_base && range_base <= range_limit) {
> +                uint64_t length = range_limit - range_base + 1;
> +                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> +                    crs_range_insert(temp_range_set.mem_ranges,
> +                                     range_base, range_limit);
> +                } else {
> +                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> +                                     range_base, range_limit);
> +                }
> +            }
> +        }
> +    }
> +
> +    crs_range_merge(temp_range_set.io_ranges);
> +    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
> +        aml_append(crs,
> +                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                               AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                               0, entry->base, entry->limit, 0,
> +                               entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
> +    }
> +
> +    crs_range_merge(temp_range_set.mem_ranges);
> +    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
> +        aml_append(crs,
> +                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> +                                    AML_READ_WRITE,
> +                                    0, entry->base, entry->limit, 0,
> +                                    entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
> +    }
> +
> +    crs_range_merge(temp_range_set.mem_64bit_ranges);
> +    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
> +        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
> +        aml_append(crs,
> +                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> +                                    AML_READ_WRITE,
> +                                    0, entry->base, entry->limit, 0,
> +                                    entry->limit - entry->base + 1));
> +        crs_range_insert(range_set->mem_64bit_ranges,
> +                         entry->base, entry->limit);
> +    }
> +
> +    crs_range_set_free(&temp_range_set);
> +
> +    aml_append(crs,
> +        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> +                            0,
> +                            pci_bus_num(host->bus),
> +                            max_bus,
> +                            0,
> +                            max_bus - pci_bus_num(host->bus) + 1));
> +
> +    return crs;
> +}
> +
> +Aml *build_osc_method(void)
> +{
> +    Aml *if_ctx;
> +    Aml *if_ctx2;
> +    Aml *else_ctx;
> +    Aml *method;
> +    Aml *a_cwd1 = aml_name("CDW1");
> +    Aml *a_ctrl = aml_local(0);
> +
> +    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> +    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> +
> +    if_ctx = aml_if(aml_equal(
> +        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
> +    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> +    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> +
> +    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
> +
> +    /*
> +     * Always allow native PME, AER (no dependencies)
> +     * Allow SHPC (PCI bridges can have SHPC controller)
> +     */
> +    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
> +
> +    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
> +    /* Unknown revision */
> +    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
> +    aml_append(if_ctx, if_ctx2);
> +
> +    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> +    /* Capabilities bits were masked */
> +    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
> +    aml_append(if_ctx, if_ctx2);
> +
> +    /* Update DWORD3 in the buffer */
> +    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
> +    aml_append(method, if_ctx);
> +
> +    else_ctx = aml_else();
> +    /* Unrecognized UUID */
> +    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
> +    aml_append(method, else_ctx);
> +
> +    aml_append(method, aml_return(aml_arg(3)));
> +    return method;
> +}
> +
> +void
> +build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
> +{
> +    AcpiTableMcfg *mcfg;
> +    const char *sig;
> +    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
> +
> +    mcfg = acpi_data_push(table_data, len);
> +    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
> +    /* Only a single allocation so no need to play with segments */
> +    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
> +    mcfg->allocation[0].start_bus_number = 0;
> +    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
> +
> +    /* MCFG is used for ECAM which can be enabled or disabled by guest.
> +     * To avoid table size changes (which create migration issues),
> +     * always create the table even if there are no allocations,
> +     * but set the signature to a reserved value in this case.
> +     * ACPI spec requires OSPMs to ignore such tables.
> +     */
> +    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
> +        /* Reserved signature: ignored by OSPM */
> +        sig = "QEMU";
> +    } else {
> +        sig = "MCFG";
> +    }
> +    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
> +}
> +
> +Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
> +{
> +    Aml *dev;
> +    Aml *crs;
> +    Aml *method;
> +    uint32_t irqs;
> +
> +    dev = aml_device("%s", name);
> +    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
> +    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
> +
> +    crs = aml_resource_template();
> +    irqs = gsi;
> +    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
> +                                  AML_SHARED, &irqs, 1));
> +    aml_append(dev, aml_name_decl("_PRS", crs));
> +
> +    aml_append(dev, aml_name_decl("_CRS", crs));
> +
> +    /*
> +     * _DIS can be no-op because the interrupt cannot be disabled.
> +     */
> +    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
> +    aml_append(dev, method);
> +
> +    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
> +    aml_append(dev, method);
> +
> +    return dev;
> +}
> +
> +/**
> + * build_prt_entry:
> + * @link_name: link name for PCI route entry
> + *
> + * build AML package containing a PCI route entry for @link_name
> + */
> +static Aml *build_prt_entry(const char *link_name)
> +{
> +    Aml *a_zero = aml_int(0);
> +    Aml *pkg = aml_package(4);
> +    aml_append(pkg, a_zero);
> +    aml_append(pkg, a_zero);
> +    aml_append(pkg, aml_name("%s", link_name));
> +    aml_append(pkg, a_zero);
> +    return pkg;
> +}
> +
> +/*
> + * initialize_route - Initialize the interrupt routing rule
> + * through a specific LINK:
> + *  if (lnk_idx == idx)
> + *      route using link 'link_name'
> + */
> +static Aml *initialize_route(Aml *route, const char *link_name,
> +                             Aml *lnk_idx, int idx)
> +{
> +    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
> +    Aml *pkg = build_prt_entry(link_name);
> +
> +    aml_append(if_ctx, aml_store(pkg, route));
> +
> +    return if_ctx;
> +}
> +
> +/*
> + * build_prt - Define interrupt rounting rules
> + *
> + * Returns an array of 128 routes, one for each device,
> + * based on device location.
> + * The main goal is to equaly distribute the interrupts
> + * over the 4 existing ACPI links (works only for i440fx).
> + * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
> + *
> + */
> +Aml *build_prt(bool is_pci0_prt)
> +{
> +    Aml *method, *while_ctx, *pin, *res;
> +
> +    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
> +    res = aml_local(0);
> +    pin = aml_local(1);
> +    aml_append(method, aml_store(aml_package(128), res));
> +    aml_append(method, aml_store(aml_int(0), pin));
> +
> +    /* while (pin < 128) */
> +    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
> +    {
> +        Aml *slot = aml_local(2);
> +        Aml *lnk_idx = aml_local(3);
> +        Aml *route = aml_local(4);
> +
> +        /* slot = pin >> 2 */
> +        aml_append(while_ctx,
> +                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
> +        /* lnk_idx = (slot + pin) & 3 */
> +        aml_append(while_ctx,
> +            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
> +                      lnk_idx));
> +
> +        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
> +        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
> +        if (is_pci0_prt) {
> +            Aml *if_device_1, *if_pin_4, *else_pin_4;
> +
> +            /* device 1 is the power-management device, needs SCI */
> +            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
> +            {
> +                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
> +                {
> +                    aml_append(if_pin_4,
> +                        aml_store(build_prt_entry("LNKS"), route));
> +                }
> +                aml_append(if_device_1, if_pin_4);
> +                else_pin_4 = aml_else();
> +                {
> +                    aml_append(else_pin_4,
> +                        aml_store(build_prt_entry("LNKA"), route));
> +                }
> +                aml_append(if_device_1, else_pin_4);
> +            }
> +            aml_append(while_ctx, if_device_1);
> +        } else {
> +            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
> +        }
> +        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
> +        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
> +
> +        /* route[0] = 0x[slot]FFFF */
> +        aml_append(while_ctx,
> +            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
> +                             NULL),
> +                      aml_index(route, aml_int(0))));
> +        /* route[1] = pin & 3 */
> +        aml_append(while_ctx,
> +            aml_store(aml_and(pin, aml_int(3), NULL),
> +                      aml_index(route, aml_int(1))));
> +        /* res[pin] = route */
> +        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
> +        /* pin++ */
> +        aml_append(while_ctx, aml_increment(pin));
> +    }
> +    aml_append(method, while_ctx);
> +    /* return res*/
> +    aml_append(method, aml_return(res));
> +
> +    return method;
> +}
> +
>  /* Build rsdt table */
>  void
>  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index 261363e20c..c9b4916ba7 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -545,7 +545,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  }
>  
>  static void
> -build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> +virt_build_mcfg(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  {
>      AcpiTableMcfg *mcfg;
>      const MemMapEntry *memmap = vms->memmap;
> @@ -790,7 +790,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
>      build_gtdt(tables_blob, tables->linker, vms);
>  
>      acpi_add_table(table_offsets, tables_blob);
> -    build_mcfg(tables_blob, tables->linker, vms);
> +    virt_build_mcfg(tables_blob, tables->linker, vms);
>  
>      acpi_add_table(table_offsets, tables_blob);
>      build_spcr(tables_blob, tables->linker, vms);
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index cfc2444d0d..996d8a11dc 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -27,7 +27,6 @@
>  #include "qemu-common.h"
>  #include "qemu/bitmap.h"
>  #include "qemu/error-report.h"
> -#include "hw/pci/pci.h"
>  #include "qom/cpu.h"
>  #include "target/i386/cpu.h"
>  #include "hw/misc/pvpanic.h"
> @@ -53,7 +52,6 @@
>  #include "hw/acpi/piix4.h"
>  #include "hw/acpi/pcihp.h"
>  #include "hw/i386/ich9.h"
> -#include "hw/pci/pci_bus.h"
>  #include "hw/pci-host/q35.h"
>  #include "hw/i386/x86-iommu.h"
>  
> @@ -86,11 +84,6 @@
>  /* Default IOAPIC ID */
>  #define ACPI_BUILD_IOAPIC_ID 0x0
>  
> -typedef struct AcpiMcfgInfo {
> -    uint64_t mcfg_base;
> -    uint32_t mcfg_size;
> -} AcpiMcfgInfo;
> -
>  typedef struct AcpiPmInfo {
>      bool s3_disabled;
>      bool s4_disabled;
> @@ -567,403 +560,6 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
>      qobject_unref(bsel);
>  }
>  
> -/**
> - * build_prt_entry:
> - * @link_name: link name for PCI route entry
> - *
> - * build AML package containing a PCI route entry for @link_name
> - */
> -static Aml *build_prt_entry(const char *link_name)
> -{
> -    Aml *a_zero = aml_int(0);
> -    Aml *pkg = aml_package(4);
> -    aml_append(pkg, a_zero);
> -    aml_append(pkg, a_zero);
> -    aml_append(pkg, aml_name("%s", link_name));
> -    aml_append(pkg, a_zero);
> -    return pkg;
> -}
> -
> -/*
> - * initialize_route - Initialize the interrupt routing rule
> - * through a specific LINK:
> - *  if (lnk_idx == idx)
> - *      route using link 'link_name'
> - */
> -static Aml *initialize_route(Aml *route, const char *link_name,
> -                             Aml *lnk_idx, int idx)
> -{
> -    Aml *if_ctx = aml_if(aml_equal(lnk_idx, aml_int(idx)));
> -    Aml *pkg = build_prt_entry(link_name);
> -
> -    aml_append(if_ctx, aml_store(pkg, route));
> -
> -    return if_ctx;
> -}
> -
> -/*
> - * build_prt - Define interrupt rounting rules
> - *
> - * Returns an array of 128 routes, one for each device,
> - * based on device location.
> - * The main goal is to equaly distribute the interrupts
> - * over the 4 existing ACPI links (works only for i440fx).
> - * The hash function is  (slot + pin) & 3 -> "LNK[D|A|B|C]".
> - *
> - */
> -static Aml *build_prt(bool is_pci0_prt)
> -{
> -    Aml *method, *while_ctx, *pin, *res;
> -
> -    method = aml_method("_PRT", 0, AML_NOTSERIALIZED);
> -    res = aml_local(0);
> -    pin = aml_local(1);
> -    aml_append(method, aml_store(aml_package(128), res));
> -    aml_append(method, aml_store(aml_int(0), pin));
> -
> -    /* while (pin < 128) */
> -    while_ctx = aml_while(aml_lless(pin, aml_int(128)));
> -    {
> -        Aml *slot = aml_local(2);
> -        Aml *lnk_idx = aml_local(3);
> -        Aml *route = aml_local(4);
> -
> -        /* slot = pin >> 2 */
> -        aml_append(while_ctx,
> -                   aml_store(aml_shiftright(pin, aml_int(2), NULL), slot));
> -        /* lnk_idx = (slot + pin) & 3 */
> -        aml_append(while_ctx,
> -            aml_store(aml_and(aml_add(pin, slot, NULL), aml_int(3), NULL),
> -                      lnk_idx));
> -
> -        /* route[2] = "LNK[D|A|B|C]", selection based on pin % 3  */
> -        aml_append(while_ctx, initialize_route(route, "LNKD", lnk_idx, 0));
> -        if (is_pci0_prt) {
> -            Aml *if_device_1, *if_pin_4, *else_pin_4;
> -
> -            /* device 1 is the power-management device, needs SCI */
> -            if_device_1 = aml_if(aml_equal(lnk_idx, aml_int(1)));
> -            {
> -                if_pin_4 = aml_if(aml_equal(pin, aml_int(4)));
> -                {
> -                    aml_append(if_pin_4,
> -                        aml_store(build_prt_entry("LNKS"), route));
> -                }
> -                aml_append(if_device_1, if_pin_4);
> -                else_pin_4 = aml_else();
> -                {
> -                    aml_append(else_pin_4,
> -                        aml_store(build_prt_entry("LNKA"), route));
> -                }
> -                aml_append(if_device_1, else_pin_4);
> -            }
> -            aml_append(while_ctx, if_device_1);
> -        } else {
> -            aml_append(while_ctx, initialize_route(route, "LNKA", lnk_idx, 1));
> -        }
> -        aml_append(while_ctx, initialize_route(route, "LNKB", lnk_idx, 2));
> -        aml_append(while_ctx, initialize_route(route, "LNKC", lnk_idx, 3));
> -
> -        /* route[0] = 0x[slot]FFFF */
> -        aml_append(while_ctx,
> -            aml_store(aml_or(aml_shiftleft(slot, aml_int(16)), aml_int(0xFFFF),
> -                             NULL),
> -                      aml_index(route, aml_int(0))));
> -        /* route[1] = pin & 3 */
> -        aml_append(while_ctx,
> -            aml_store(aml_and(pin, aml_int(3), NULL),
> -                      aml_index(route, aml_int(1))));
> -        /* res[pin] = route */
> -        aml_append(while_ctx, aml_store(route, aml_index(res, pin)));
> -        /* pin++ */
> -        aml_append(while_ctx, aml_increment(pin));
> -    }
> -    aml_append(method, while_ctx);
> -    /* return res*/
> -    aml_append(method, aml_return(res));
> -
> -    return method;
> -}
> -
> -typedef struct CrsRangeEntry {
> -    uint64_t base;
> -    uint64_t limit;
> -} CrsRangeEntry;
> -
> -static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
> -{
> -    CrsRangeEntry *entry;
> -
> -    entry = g_malloc(sizeof(*entry));
> -    entry->base = base;
> -    entry->limit = limit;
> -
> -    g_ptr_array_add(ranges, entry);
> -}
> -
> -static void crs_range_free(gpointer data)
> -{
> -    CrsRangeEntry *entry = (CrsRangeEntry *)data;
> -    g_free(entry);
> -}
> -
> -typedef struct CrsRangeSet {
> -    GPtrArray *io_ranges;
> -    GPtrArray *mem_ranges;
> -    GPtrArray *mem_64bit_ranges;
> - } CrsRangeSet;
> -
> -static void crs_range_set_init(CrsRangeSet *range_set)
> -{
> -    range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> -    range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
> -    range_set->mem_64bit_ranges =
> -            g_ptr_array_new_with_free_func(crs_range_free);
> -}
> -
> -static void crs_range_set_free(CrsRangeSet *range_set)
> -{
> -    g_ptr_array_free(range_set->io_ranges, true);
> -    g_ptr_array_free(range_set->mem_ranges, true);
> -    g_ptr_array_free(range_set->mem_64bit_ranges, true);
> -}
> -
> -static gint crs_range_compare(gconstpointer a, gconstpointer b)
> -{
> -     CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
> -     CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
> -
> -     return (int64_t)entry_a->base - (int64_t)entry_b->base;
> -}
> -
> -/*
> - * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
> - * interval, computes the 'free' ranges from the same interval.
> - * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
> - * will return { [base - a1], [a2 - b1], [b2 - limit] }.
> - */
> -static void crs_replace_with_free_ranges(GPtrArray *ranges,
> -                                         uint64_t start, uint64_t end)
> -{
> -    GPtrArray *free_ranges = g_ptr_array_new();
> -    uint64_t free_base = start;
> -    int i;
> -
> -    g_ptr_array_sort(ranges, crs_range_compare);
> -    for (i = 0; i < ranges->len; i++) {
> -        CrsRangeEntry *used = g_ptr_array_index(ranges, i);
> -
> -        if (free_base < used->base) {
> -            crs_range_insert(free_ranges, free_base, used->base - 1);
> -        }
> -
> -        free_base = used->limit + 1;
> -    }
> -
> -    if (free_base < end) {
> -        crs_range_insert(free_ranges, free_base, end);
> -    }
> -
> -    g_ptr_array_set_size(ranges, 0);
> -    for (i = 0; i < free_ranges->len; i++) {
> -        g_ptr_array_add(ranges, g_ptr_array_index(free_ranges, i));
> -    }
> -
> -    g_ptr_array_free(free_ranges, true);
> -}
> -
> -/*
> - * crs_range_merge - merges adjacent ranges in the given array.
> - * Array elements are deleted and replaced with the merged ranges.
> - */
> -static void crs_range_merge(GPtrArray *range)
> -{
> -    GPtrArray *tmp =  g_ptr_array_new_with_free_func(crs_range_free);
> -    CrsRangeEntry *entry;
> -    uint64_t range_base, range_limit;
> -    int i;
> -
> -    if (!range->len) {
> -        return;
> -    }
> -
> -    g_ptr_array_sort(range, crs_range_compare);
> -
> -    entry = g_ptr_array_index(range, 0);
> -    range_base = entry->base;
> -    range_limit = entry->limit;
> -    for (i = 1; i < range->len; i++) {
> -        entry = g_ptr_array_index(range, i);
> -        if (entry->base - 1 == range_limit) {
> -            range_limit = entry->limit;
> -        } else {
> -            crs_range_insert(tmp, range_base, range_limit);
> -            range_base = entry->base;
> -            range_limit = entry->limit;
> -        }
> -    }
> -    crs_range_insert(tmp, range_base, range_limit);
> -
> -    g_ptr_array_set_size(range, 0);
> -    for (i = 0; i < tmp->len; i++) {
> -        entry = g_ptr_array_index(tmp, i);
> -        crs_range_insert(range, entry->base, entry->limit);
> -    }
> -    g_ptr_array_free(tmp, true);
> -}
> -
> -static Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
> -{
> -    Aml *crs = aml_resource_template();
> -    CrsRangeSet temp_range_set;
> -    CrsRangeEntry *entry;
> -    uint8_t max_bus = pci_bus_num(host->bus);
> -    uint8_t type;
> -    int devfn;
> -    int i;
> -
> -    crs_range_set_init(&temp_range_set);
> -    for (devfn = 0; devfn < ARRAY_SIZE(host->bus->devices); devfn++) {
> -        uint64_t range_base, range_limit;
> -        PCIDevice *dev = host->bus->devices[devfn];
> -
> -        if (!dev) {
> -            continue;
> -        }
> -
> -        for (i = 0; i < PCI_NUM_REGIONS; i++) {
> -            PCIIORegion *r = &dev->io_regions[i];
> -
> -            range_base = r->addr;
> -            range_limit = r->addr + r->size - 1;
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (!range_base || range_base > range_limit) {
> -                continue;
> -            }
> -
> -            if (r->type & PCI_BASE_ADDRESS_SPACE_IO) {
> -                crs_range_insert(temp_range_set.io_ranges,
> -                                 range_base, range_limit);
> -            } else { /* "memory" */
> -                crs_range_insert(temp_range_set.mem_ranges,
> -                                 range_base, range_limit);
> -            }
> -        }
> -
> -        type = dev->config[PCI_HEADER_TYPE] & ~PCI_HEADER_TYPE_MULTI_FUNCTION;
> -        if (type == PCI_HEADER_TYPE_BRIDGE) {
> -            uint8_t subordinate = dev->config[PCI_SUBORDINATE_BUS];
> -            if (subordinate > max_bus) {
> -                max_bus = subordinate;
> -            }
> -
> -            range_base = pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_IO);
> -            range_limit = pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_IO);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                crs_range_insert(temp_range_set.io_ranges,
> -                                 range_base, range_limit);
> -            }
> -
> -            range_base =
> -                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> -            range_limit =
> -                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_SPACE_MEMORY);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                uint64_t length = range_limit - range_base + 1;
> -                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> -                    crs_range_insert(temp_range_set.mem_ranges,
> -                                     range_base, range_limit);
> -                } else {
> -                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> -                                     range_base, range_limit);
> -                }
> -            }
> -
> -            range_base =
> -                pci_bridge_get_base(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> -            range_limit =
> -                pci_bridge_get_limit(dev, PCI_BASE_ADDRESS_MEM_PREFETCH);
> -
> -            /*
> -             * Work-around for old bioses
> -             * that do not support multiple root buses
> -             */
> -            if (range_base && range_base <= range_limit) {
> -                uint64_t length = range_limit - range_base + 1;
> -                if (range_limit <= UINT32_MAX && length <= UINT32_MAX) {
> -                    crs_range_insert(temp_range_set.mem_ranges,
> -                                     range_base, range_limit);
> -                } else {
> -                    crs_range_insert(temp_range_set.mem_64bit_ranges,
> -                                     range_base, range_limit);
> -                }
> -            }
> -        }
> -    }
> -
> -    crs_range_merge(temp_range_set.io_ranges);
> -    for (i = 0; i < temp_range_set.io_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.io_ranges, i);
> -        aml_append(crs,
> -                   aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                               AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                               0, entry->base, entry->limit, 0,
> -                               entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->io_ranges, entry->base, entry->limit);
> -    }
> -
> -    crs_range_merge(temp_range_set.mem_ranges);
> -    for (i = 0; i < temp_range_set.mem_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.mem_ranges, i);
> -        aml_append(crs,
> -                   aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> -                                    AML_READ_WRITE,
> -                                    0, entry->base, entry->limit, 0,
> -                                    entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->mem_ranges, entry->base, entry->limit);
> -    }
> -
> -    crs_range_merge(temp_range_set.mem_64bit_ranges);
> -    for (i = 0; i < temp_range_set.mem_64bit_ranges->len; i++) {
> -        entry = g_ptr_array_index(temp_range_set.mem_64bit_ranges, i);
> -        aml_append(crs,
> -                   aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                    AML_MAX_FIXED, AML_NON_CACHEABLE,
> -                                    AML_READ_WRITE,
> -                                    0, entry->base, entry->limit, 0,
> -                                    entry->limit - entry->base + 1));
> -        crs_range_insert(range_set->mem_64bit_ranges,
> -                         entry->base, entry->limit);
> -    }
> -
> -    crs_range_set_free(&temp_range_set);
> -
> -    aml_append(crs,
> -        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> -                            0,
> -                            pci_bus_num(host->bus),
> -                            max_bus,
> -                            0,
> -                            max_bus - pci_bus_num(host->bus) + 1));
> -
> -    return crs;
> -}
> -
>  static void build_hpet_aml(Aml *table)
>  {
>      Aml *crs;
> @@ -1334,37 +930,6 @@ static Aml *build_link_dev(const char *name, uint8_t uid, Aml *reg)
>      return dev;
>   }
>  
> -static Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi)
> -{
> -    Aml *dev;
> -    Aml *crs;
> -    Aml *method;
> -    uint32_t irqs;
> -
> -    dev = aml_device("%s", name);
> -    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C0F")));
> -    aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
> -
> -    crs = aml_resource_template();
> -    irqs = gsi;
> -    aml_append(crs, aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
> -                                  AML_SHARED, &irqs, 1));
> -    aml_append(dev, aml_name_decl("_PRS", crs));
> -
> -    aml_append(dev, aml_name_decl("_CRS", crs));
> -
> -    /*
> -     * _DIS can be no-op because the interrupt cannot be disabled.
> -     */
> -    method = aml_method("_DIS", 0, AML_NOTSERIALIZED);
> -    aml_append(dev, method);
> -
> -    method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
> -    aml_append(dev, method);
> -
> -    return dev;
> -}
> -
>  /* _CRS method - get current settings */
>  static Aml *build_iqcr_method(bool is_piix4)
>  {
> @@ -1728,54 +1293,6 @@ static void build_piix4_pci_hotplug(Aml *table)
>      aml_append(table, scope);
>  }
>  
> -static Aml *build_q35_osc_method(void)
> -{
> -    Aml *if_ctx;
> -    Aml *if_ctx2;
> -    Aml *else_ctx;
> -    Aml *method;
> -    Aml *a_cwd1 = aml_name("CDW1");
> -    Aml *a_ctrl = aml_local(0);
> -
> -    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> -    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> -
> -    if_ctx = aml_if(aml_equal(
> -        aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
> -    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> -    aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> -
> -    aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
> -
> -    /*
> -     * Always allow native PME, AER (no dependencies)
> -     * Allow SHPC (PCI bridges can have SHPC controller)
> -     */
> -    aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
> -
> -    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
> -    /* Unknown revision */
> -    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
> -    aml_append(if_ctx, if_ctx2);
> -
> -    if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> -    /* Capabilities bits were masked */
> -    aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
> -    aml_append(if_ctx, if_ctx2);
> -
> -    /* Update DWORD3 in the buffer */
> -    aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
> -    aml_append(method, if_ctx);
> -
> -    else_ctx = aml_else();
> -    /* Unrecognized UUID */
> -    aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
> -    aml_append(method, else_ctx);
> -
> -    aml_append(method, aml_return(aml_arg(3)));
> -    return method;
> -}
> -
>  static void
>  build_dsdt(GArray *table_data, BIOSLinker *linker,
>             AcpiPmInfo *pm, AcpiMiscInfo *misc,
> @@ -1818,7 +1335,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>          aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
>          aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
>          aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> -        aml_append(dev, build_q35_osc_method());
> +        aml_append(dev, build_osc_method());
>          aml_append(sb_scope, dev);
>          aml_append(dsdt, sb_scope);
>  
> @@ -1883,7 +1400,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>              aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
>              aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>              if (pci_bus_is_express(bus)) {
> -                aml_append(dev, build_q35_osc_method());
> +                aml_append(dev, build_osc_method());
>              }
>  
>              if (numa_node != NUMA_NODE_UNASSIGNED) {
> @@ -2370,35 +1887,6 @@ build_srat(GArray *table_data, BIOSLinker *linker,
>                   table_data->len - srat_start, 1, NULL, NULL);
>  }
>  
> -static void
> -build_mcfg_q35(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info)
> -{
> -    AcpiTableMcfg *mcfg;
> -    const char *sig;
> -    int len = sizeof(*mcfg) + 1 * sizeof(mcfg->allocation[0]);
> -
> -    mcfg = acpi_data_push(table_data, len);
> -    mcfg->allocation[0].address = cpu_to_le64(info->mcfg_base);
> -    /* Only a single allocation so no need to play with segments */
> -    mcfg->allocation[0].pci_segment = cpu_to_le16(0);
> -    mcfg->allocation[0].start_bus_number = 0;
> -    mcfg->allocation[0].end_bus_number = PCIE_MMCFG_BUS(info->mcfg_size - 1);
> -
> -    /* MCFG is used for ECAM which can be enabled or disabled by guest.
> -     * To avoid table size changes (which create migration issues),
> -     * always create the table even if there are no allocations,
> -     * but set the signature to a reserved value in this case.
> -     * ACPI spec requires OSPMs to ignore such tables.
> -     */
> -    if (info->mcfg_base == PCIE_BASE_ADDR_UNMAPPED) {
> -        /* Reserved signature: ignored by OSPM */
> -        sig = "QEMU";
> -    } else {
> -        sig = "MCFG";
> -    }
> -    build_header(linker, table_data, (void *)mcfg, sig, len, 1, NULL, NULL);
> -}
> -
>  /*
>   * VT-d spec 8.1 DMA Remapping Reporting Structure
>   * (version Oct. 2014 or later)
> @@ -2626,7 +2114,7 @@ void acpi_build(AcpiBuildTables *tables,
>      }
>      if (acpi_get_mcfg(&mcfg)) {
>          acpi_add_table(table_offsets, tables_blob);
> -        build_mcfg_q35(tables_blob, tables->linker, &mcfg);
> +        build_mcfg(tables_blob, tables->linker, &mcfg);
>      }
>      if (x86_iommu_get_default()) {
>          IommuType IOMMUType = x86_iommu_get_type();



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-09 14:23     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09 14:23 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon,  5 Nov 2018 02:40:24 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> ACPI tables are platform and machine type and even architecture
> agnostic, and as such we want to provide an internal ACPI API that
> only depends on platform agnostic information.
> 
> For the x86 architecture, in order to build ACPI tables independently
> from the PC or Q35 machine types, we are moving a few MachineState
> structure fields into a machine type agnostic structure called
> AcpiConfiguration. The structure fields we move are:

It's not obvious why a new structure is needed, especially at
the beginning of the series. We should probably place this patch
much later in the series (if we need it at all) and try to
generalize as much as possible without using it.

We should also try to come up with an API that doesn't need a
centralized collection of data somehow related to ACPI (most of the
fields here are not generic and only apply to a specific board/target).

For a generic API, I'd prefer separate building blocks
like build_fadt()/... that take as input only the parameters
necessary to compose a table/AML part, with occasional board
interface hooks, instead of an all-encompassing AcpiConfiguration
and a board-specific 'acpi_build' that would use them when/if needed;
roughly along the lines sketched below.
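
A hypothetical illustration of that shape, with the existing generic
build_fadt() as the model (signatures quoted from memory; the MCFG one is
made up only to show the style, not a concrete proposal):

    /* existing generic builder: takes only the data it needs */
    void build_fadt(GArray *tbl, BIOSLinker *linker, const AcpiFadtData *f,
                    const char *oem_id, const char *oem_table_id);

    /* hypothetical builder following the same pattern */
    void build_mcfg(GArray *tbl, BIOSLinker *linker,
                    uint64_t mcfg_base, uint32_t mcfg_size);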

We should probably split the series into several smaller
(if possible independent) ones, so people won't be scared off by
its sheer size and run away from reviewing it.
That way it would be easier to review, amend certain parts, and merge.

acpi_setup() & co should probably be the last things that are
generalized, as they are called by concrete boards and might collect
board-specific data and apply compat workarounds for building ACPI tables
(assuming that we won't push non-generic data into the generic API).

See more comments below

>    HotplugHandler *acpi_dev
>    AcpiNVDIMMState acpi_nvdimm_state;
>    FWCfgState *fw_cfg
>    ram_addr_t below_4g_mem_size, above_4g_mem_size
>    bool apic_xrupt_override
>    unsigned apic_id_limit
>    uint64_t numa_nodes
>    uint64_t numa_mem
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  hw/i386/acpi-build.h     |   4 +-
>  include/hw/acpi/acpi.h   |  44 ++++++++++
>  include/hw/i386/pc.h     |  19 ++---
>  hw/acpi/cpu_hotplug.c    |   9 +-
>  hw/arm/virt-acpi-build.c |  10 ---
>  hw/i386/acpi-build.c     | 136 ++++++++++++++----------------
>  hw/i386/pc.c             | 176 ++++++++++++++++++++++++---------------
>  hw/i386/pc_piix.c        |  21 ++---
>  hw/i386/pc_q35.c         |  21 ++---
>  hw/i386/xen/xen-hvm.c    |  19 +++--
>  10 files changed, 257 insertions(+), 202 deletions(-)
> 
> diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
> index 007332e51c..065a1d8250 100644
> --- a/hw/i386/acpi-build.h
> +++ b/hw/i386/acpi-build.h
> @@ -2,6 +2,8 @@
>  #ifndef HW_I386_ACPI_BUILD_H
>  #define HW_I386_ACPI_BUILD_H
>  
> -void acpi_setup(void);
> +#include "hw/acpi/acpi.h"
> +
> +void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
>  
>  #endif
> diff --git a/include/hw/acpi/acpi.h b/include/hw/acpi/acpi.h
> index c20ace0d0b..254c8d0cfc 100644
> --- a/include/hw/acpi/acpi.h
> +++ b/include/hw/acpi/acpi.h
> @@ -24,6 +24,8 @@
>  #include "exec/memory.h"
>  #include "hw/irq.h"
>  #include "hw/acpi/acpi_dev_interface.h"
> +#include "hw/hotplug.h"
> +#include "hw/mem/nvdimm.h"
>  
>  /*
>   * current device naming scheme supports up to 256 memory devices
> @@ -186,6 +188,48 @@ extern int acpi_enabled;
>  extern char unsigned *acpi_tables;
>  extern size_t acpi_tables_len;
>  
> +typedef
> +struct AcpiBuildState {
> +    /* Copy of table in RAM (for patching). */
> +    MemoryRegion *table_mr;
> +    /* Is table patched? */
> +    bool patched;
> +    void *rsdp;
> +    MemoryRegion *rsdp_mr;
> +    MemoryRegion *linker_mr;
> +} AcpiBuildState;
> +
> +typedef
> +struct AcpiConfiguration {
We used to have a similar intermediate structure, PcGuestInfo,
but got rid of it in the end. Even with other questions aside,
I'm not quite convinced that it's a good idea to reintroduce a
similar one again.


> +    /* Machine class ACPI settings */
> +    int legacy_acpi_table_size;
> +    bool rsdp_in_ram;
> +    unsigned acpi_data_size;
 (*) well, all of the above are legacy stuff; I'd rather not push them into the
generic API and would keep them in the caller as board-specific/machine
version code.

> +
> +    /* Machine state ACPI settings */
> +    HotplugHandler *acpi_dev;
> +    AcpiNVDIMMState acpi_nvdimm_state;
> +
> +    /*
> +     * The fields below are machine settings that
> +     * are not ACPI specific. However they are needed
> +     * for building ACPI tables and as such should be
> +     * carried through the ACPI configuration structure.
> +     */
If they are not ACPI specific, then they shouldn't be in the ACPI
configuration. Some of the fields are compat hacks, which don't
belong in a generic API, so I'd leave them in board-specific code;
others are target specific, which also doesn't belong in a generic
place.

> +    bool legacy_cpu_hotplug;
> +    bool linuxboot_dma_enabled;
> +    FWCfgState *fw_cfg;

> +    ram_addr_t below_4g_mem_size, above_4g_mem_size;;
Just curious, how is this applicable to the i386/virt machine?
Does it also have memory split into 2 regions?
Is it possible to have only one region?

> +    uint64_t numa_nodes;
> +    uint64_t *node_mem;
That's kept in PCMachine for the sake of legacy SeaBIOS,
which builds ACPI tables on its own.
I'd suggest using the existing globals instead (like ARM does),
so that we won't have to hunt down extra copies later
when those globals are refactored into properties.
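A minimal sketch of what I mean, assuming the existing globals from
"sysemu/numa.h" (nb_numa_nodes and numa_info[]) are read directly,
the way arm/virt already does, rather than copied into AcpiConfiguration:

    #include "sysemu/numa.h"

    /* Sketch only: no per-board copies, just read the globals
     * when building SRAT. */
    static uint64_t srat_node_mem(int node)
    {
        return node < nb_numa_nodes ? numa_info[node].node_mem : 0;
    }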

> +    bool apic_xrupt_override;
> +    unsigned apic_id_limit;
> +    PCIHostState *pci_host;
> +
> +    /* Build state */
> +    AcpiBuildState *build_state;
> +} AcpiConfiguration;
> +
[...]

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-09 14:27     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09 14:27 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon,  5 Nov 2018 02:40:25 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This is going to be needed by the Hardware-reduced ACPI routines.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The patch is probably misplaced within the series;
if there is an external user within this series then this patch should
be squashed there, otherwise it doesn't belong in this series.

> ---
>  include/hw/acpi/aml-build.h | 2 ++
>  hw/acpi/aml-build.c         | 8 ++++++++
>  hw/i386/acpi-build.c        | 8 --------
>  3 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index 6c36903c0a..73fc6659f2 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -384,6 +384,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
>               const char *oem_id, const char *oem_table_id);
>  void *acpi_data_push(GArray *table_data, unsigned size);
>  unsigned acpi_data_len(GArray *table);
> +/* Align AML blob size to a multiple of 'align' */
> +void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
>  void acpi_build_tables_init(AcpiBuildTables *tables);
>  void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 1e43cd736d..51b608432f 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -1565,6 +1565,14 @@ unsigned acpi_data_len(GArray *table)
>      return table->len;
>  }
>  
> +void acpi_align_size(GArray *blob, unsigned align)
> +{
> +    /* Align size to multiple of given size. This reduces the chance
> +     * we need to change size in the future (breaking cross version migration).
> +     */
> +    g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
> +}
> +
>  void acpi_add_table(GArray *table_offsets, GArray *table_data)
>  {
>      uint32_t offset = table_data->len;
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index d0362e1382..81d98fa34f 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -282,14 +282,6 @@ static void acpi_get_pci_holes(Range *hole, Range *hole64)
>                                                 NULL));
>  }
>  
> -static void acpi_align_size(GArray *blob, unsigned align)
> -{
> -    /* Align size to multiple of given size. This reduces the chance
> -     * we need to change size in the future (breaking cross version migration).
> -     */
> -    g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
> -}
> -
>  /* FACS */
>  static void
>  build_facs(GArray *table_data, BIOSLinker *linker)

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 09/24] hw: i386: Move PCI host definitions to pci_host.h
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-09 14:30     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-09 14:30 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon,  5 Nov 2018 02:40:32 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> The PCI hole properties are not pc or i386 specific. They belong to the
> PCI host header instead.
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/i386/pc.h      | 5 -----
>  include/hw/pci/pci_host.h | 6 ++++++
>  2 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index fed136fcdd..bbbdb33ea3 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -182,11 +182,6 @@ void pc_acpi_init(const char *default_dsdt);
>  
>  void pc_guest_info_init(PCMachineState *pcms);
>  
> -#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
> -#define PCI_HOST_PROP_PCI_HOLE_END     "pci-hole-end"
> -#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
> -#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
> -#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
>  #define PCI_HOST_BELOW_4G_MEM_SIZE     "below-4g-mem-size"
>  #define PCI_HOST_ABOVE_4G_MEM_SIZE     "above-4g-mem-size"
>  
> diff --git a/include/hw/pci/pci_host.h b/include/hw/pci/pci_host.h
> index ba31595fc7..e343f4d9ca 100644
> --- a/include/hw/pci/pci_host.h
> +++ b/include/hw/pci/pci_host.h
> @@ -38,6 +38,12 @@
>  #define PCI_HOST_BRIDGE_GET_CLASS(obj) \
>       OBJECT_GET_CLASS(PCIHostBridgeClass, (obj), TYPE_PCI_HOST_BRIDGE)
>  
> +#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
> +#define PCI_HOST_PROP_PCI_HOLE_END     "pci-hole-end"
> +#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
> +#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
> +#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
these are pc/q35 machine-specific properties and do not belong here


> +
>  struct PCIHostState {
>      SysBusDevice busdev;
>  

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-13 15:59     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-13 15:59 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:33 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This is going to be needed by the hardware reduced implementation, so
> let's export it.
> Once the ACPI builder methods and getters will be implemented, the
> acpi_get_pci_host() implementation will become hardware agnostic.
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/acpi/aml-build.h |  2 ++
>  hw/acpi/aml-build.c         | 43 +++++++++++++++++++++++++++++++++
>  hw/i386/acpi-build.c        | 47 ++-----------------------------------
>  3 files changed, 47 insertions(+), 45 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index c27c0935ae..fde2785b9a 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
>               const char *oem_id, const char *oem_table_id);
>  void *acpi_data_push(GArray *table_data, unsigned size);
>  unsigned acpi_data_len(GArray *table);
> +Object *acpi_get_pci_host(void);
> +void acpi_get_pci_holes(Range *hole, Range *hole64);
>  /* Align AML blob size to a multiple of 'align' */
>  void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 2b9a636e75..b8e32f15f7 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
>      g_array_free(tables->vmgenid, mfre);
>  }

> +/*
> + * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> + */
> +Object *acpi_get_pci_host(void)
> +{
> +    PCIHostState *host;
> +
> +    host = OBJECT_CHECK(PCIHostState,
> +                        object_resolve_path("/machine/i440fx", NULL),
> +                        TYPE_PCI_HOST_BRIDGE);
> +    if (!host) {
> +        host = OBJECT_CHECK(PCIHostState,
> +                            object_resolve_path("/machine/q35", NULL),
> +                            TYPE_PCI_HOST_BRIDGE);
> +    }
> +
> +    return OBJECT(host);
> +}
In general aml-build.c is the place for ACPI spec primitives,
so I'd suggest moving this somewhere else.

Considering it's x86 code (so far), maybe move it to something like
hw/i386/acpi-pci.c.

Also it might be good to get rid of acpi_get_pci_host() and to pass
a pointer to pci_host to acpi_setup() as an argument; since it's static
for the life of the board, we can keep it in AcpiBuildState and reuse it
for the mcfg/pci_hole/PCI bus accesses.
That way we can simplify the code a bit and avoid the lookup cost of
object_resolve_path(), which is called several times unnecessarily.
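Roughly like this (a sketch only; the extra acpi_setup() parameter and
the pci_host field in AcpiBuildState are assumptions, not existing code):

    /* Sketch: hand the host bridge over once at setup time... */
    void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf,
                    PCIHostState *pci_host);

    /* ...cache it in the build state and reuse it for the mcfg,
     * pci hole and PCI bus accesses instead of repeated
     * object_resolve_path() lookups. */
    static Object *acpi_cached_pci_host(AcpiBuildState *build_state)
    {
        return OBJECT(build_state->pci_host); /* assumed new field */
    }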

> +void acpi_get_pci_holes(Range *hole, Range *hole64)
> +{
> +    Object *pci_host;
> +
> +    pci_host = acpi_get_pci_host();
> +    g_assert(pci_host);
> +
> +    range_set_bounds1(hole,
> +                      object_property_get_uint(pci_host,
> +                                               PCI_HOST_PROP_PCI_HOLE_START,
> +                                               NULL),
> +                      object_property_get_uint(pci_host,
> +                                               PCI_HOST_PROP_PCI_HOLE_END,
> +                                               NULL));
> +    range_set_bounds1(hole64,
> +                      object_property_get_uint(pci_host,
> +                                               PCI_HOST_PROP_PCI_HOLE64_START,
> +                                               NULL),
> +                      object_property_get_uint(pci_host,
> +                                               PCI_HOST_PROP_PCI_HOLE64_END,
> +                                               NULL));
> +}
> +
>  static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
>  {
>      CrsRangeEntry *entry;
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index bd147a6bd2..a5f5f83500 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -232,49 +232,6 @@ static void acpi_get_misc_info(AcpiMiscInfo *info)
>      info->applesmc_io_base = applesmc_port();
>  }
>  
> -/*
> - * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> - * On i386 arch we only have two pci hosts, so we can look only for them.
> - */
> -static Object *acpi_get_i386_pci_host(void)
> -{
> -    PCIHostState *host;
> -
> -    host = OBJECT_CHECK(PCIHostState,
> -                        object_resolve_path("/machine/i440fx", NULL),
> -                        TYPE_PCI_HOST_BRIDGE);
> -    if (!host) {
> -        host = OBJECT_CHECK(PCIHostState,
> -                            object_resolve_path("/machine/q35", NULL),
> -                            TYPE_PCI_HOST_BRIDGE);
> -    }
> -
> -    return OBJECT(host);
> -}
> -
> -static void acpi_get_pci_holes(Range *hole, Range *hole64)
> -{
> -    Object *pci_host;
> -
> -    pci_host = acpi_get_i386_pci_host();
> -    g_assert(pci_host);
> -
> -    range_set_bounds1(hole,
> -                      object_property_get_uint(pci_host,
> -                                               PCI_HOST_PROP_PCI_HOLE_START,
> -                                               NULL),
> -                      object_property_get_uint(pci_host,
> -                                               PCI_HOST_PROP_PCI_HOLE_END,
> -                                               NULL));
> -    range_set_bounds1(hole64,
> -                      object_property_get_uint(pci_host,
> -                                               PCI_HOST_PROP_PCI_HOLE64_START,
> -                                               NULL),
> -                      object_property_get_uint(pci_host,
> -                                               PCI_HOST_PROP_PCI_HOLE64_END,
> -                                               NULL));
> -}
> -
>  /* FACS */
>  static void
>  build_facs(GArray *table_data, BIOSLinker *linker)
> @@ -1634,7 +1591,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>          Object *pci_host;
>          PCIBus *bus = NULL;
>  
> -        pci_host = acpi_get_i386_pci_host();
> +        pci_host = acpi_get_pci_host();
>          if (pci_host) {
>              bus = PCI_HOST_BRIDGE(pci_host)->bus;
>          }
> @@ -2008,7 +1965,7 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>      Object *pci_host;
>      QObject *o;
>  
> -    pci_host = acpi_get_i386_pci_host();
> +    pci_host = acpi_get_pci_host();
>      g_assert(pci_host);
>  
>      o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-14 10:55     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-14 10:55 UTC (permalink / raw)
  To: Samuel Ortiz, Marcel Apfelbaum
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, xen-devel, Paolo Bonzini, Michael S. Tsirkin,
	qemu-arm, Peter Maydell, Eduardo Habkost, Yang Zhong,
	Rob Bradford

On Mon,  5 Nov 2018 02:40:34 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> The AML build routines for the PCI host bridge and the corresponding
> DSDT addition are neither x86 nor PC machine type specific.
> We can move them to the architecture agnostic hw/acpi folder, and by
> carrying all the needed information through a new AcpiPciBus structure,
> we can make them PC machine type independent.

I don't know anything about PCI, but the functional changes don't look
correct to me. See more detailed comments below.

Marcel,
could you take a look at this patch (in particular the main _CRS changes), please?

> 
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> Signed-off-by: Rob Bradford <robert.bradford@intel.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/acpi/aml-build.h |   8 ++
>  hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
>  hw/i386/acpi-build.c        | 115 ++------------------------
>  3 files changed, 173 insertions(+), 107 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index fde2785b9a..1861e37ebf 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
>      uint32_t mcfg_size;
>  } AcpiMcfgInfo;
>  
> +typedef struct AcpiPciBus {
> +    PCIBus *pci_bus;
> +    Range *pci_hole;
> +    Range *pci_hole64;
> +} AcpiPciBus;
Again, this and everything below is not aml-build material.
Consider adding/using a PCI-specific ACPI file for it.

Also, even though the PCI AML in arm/virt is to a large degree a subset
of the x86 target and it would be much better to unify the ARM part with x86,
it will probably be too big/complex a change if we take it on in
one go.

So as not to derail you from the goal too much, we probably should
generalize this a little bit less, limiting the refactoring to the x86
target for now.

For example, move the generic x86 PCI parts to hw/i386/acpi-pci.[hc],
and structure it so that the building blocks in acpi-pci.c could be
reused for the x86 reduced profile later.
Once that's been done, it might be easier and less complex to
unify a bit more of the generic code in i386/acpi-pci.c with the
corresponding ARM code.

The patch is too big and should be split into smaller logical chunks,
and you should separate code movement from the functional changes you're
making here.

Once you split the patch properly, it should be easier to assess the
changes.

>  typedef struct CrsRangeEntry {
>      uint64_t base;
>      uint64_t limit;
> @@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
>  void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
>  Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
>  Aml *build_prt(bool is_pci0_prt);
> +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
> +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
>  void crs_range_set_init(CrsRangeSet *range_set);
>  Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
>  void crs_replace_with_free_ranges(GPtrArray *ranges,
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index b8e32f15f7..869ed70db3 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -29,6 +29,19 @@
>  #include "hw/pci/pci_bus.h"
>  #include "qemu/range.h"
>  #include "hw/pci/pci_bridge.h"
> +#include "hw/i386/pc.h"
> +#include "sysemu/tpm.h"
> +#include "hw/acpi/tpm.h"
> +
> +#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
> +#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
> +#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
> +#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
> +#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
> +#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
> +#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
> +#define IO_0_LEN                           0xcf8
> +#define VGA_MEM_LEN                        0x20000
>  
>  static GArray *build_alloc_array(void)
>  {
> @@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
>      return method;
>  }
>  
> +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
The name doesn't reflect exactly what the function does:
it builds device descriptions for the expander buses (including their _CRS)
and then it builds the _CRS for the main PCI host, but not the PCI device description.

I'd suggest splitting the expander buses part out into a separate function
that returns an expander bus device description and updates crs_range_set,
and letting the caller enumerate the buses and add the descriptions to the DSDT.

After that we could do a generic _CRS generation function for the main PCI host,
if it's possible at all (the main PCI host _CRS seems heavily board dependent).

Instead of taking the table and adding stuff directly into it,
it would be cleaner to take an empty _CRS as an argument (crs = aml_resource_template();),
add stuff to it, and let the caller add/extend the _CRS as/where necessary.
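Something like the following, as a sketch only (the names are made up):

    /* One helper per expander bus: returns the bus device description
     * and records the bus resources in crs_range_set; the caller
     * enumerates the buses and appends the result to the DSDT. */
    Aml *build_expander_bus_dev(PCIBus *bus, CrsRangeSet *crs_range_set);

    /* Separate helper that only fills in a _CRS the caller created
     * with aml_resource_template(), so a board can extend it
     * as/where necessary. */
    void build_pci0_crs(Aml *crs, AcpiPciBus *pci_host,
                        CrsRangeSet *crs_range_set);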

> +{
> +    CrsRangeEntry *entry;
> +    Aml *scope, *dev, *crs;
> +    CrsRangeSet crs_range_set;
> +    Range *pci_hole = NULL;
> +    Range *pci_hole64 = NULL;
> +    PCIBus *bus = NULL;
> +    int root_bus_limit = 0xFF;
> +    int i;
> +
> +    bus = pci_host->pci_bus;
> +    assert(bus);
> +    pci_hole = pci_host->pci_hole;
> +    pci_hole64 = pci_host->pci_hole64;
> +
> +    crs_range_set_init(&crs_range_set);
> +    QLIST_FOREACH(bus, &bus->child, sibling) {
> +        uint8_t bus_num = pci_bus_num(bus);
> +        uint8_t numa_node = pci_bus_numa_node(bus);
> +
> +        /* look only for expander root buses */
> +        if (!pci_bus_is_root(bus)) {
> +            continue;
> +        }
> +
> +        if (bus_num < root_bus_limit) {
> +            root_bus_limit = bus_num - 1;
> +        }
> +
> +        scope = aml_scope("\\_SB");
> +        dev = aml_device("PC%.02X", bus_num);
> +        aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +        aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> +        aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> +        if (pci_bus_is_express(bus)) {
> +            aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +            aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +            aml_append(dev, build_osc_method(0x1F));
> +        }
> +        if (numa_node != NUMA_NODE_UNASSIGNED) {
> +            aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> +        }
> +
> +        aml_append(dev, build_prt(false));
> +        crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
> +        aml_append(dev, aml_name_decl("_CRS", crs));
> +        aml_append(scope, dev);
> +        aml_append(table, scope);
> +    }
> +    scope = aml_scope("\\_SB.PCI0");
> +    /* build PCI0._CRS */
> +    crs = aml_resource_template();
> +    /* set the pcie bus num */
> +    aml_append(crs,
> +        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> +                            0x0000, 0x0, root_bus_limit,
> +                            0x0000, root_bus_limit + 1));

vvvv
> +    aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
> +                           PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
> +    /* set the io region 0 in pci host bridge */
> +    aml_append(crs,
> +        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                    AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                    0x0000, PCI_HOST_BRIDGE_IO_0_MIN_ADDR,
> +                    PCI_HOST_BRIDGE_IO_0_MAX_ADDR, 0x0000, IO_0_LEN));
> +
> +    /* set the io region 1 in pci host bridge */
> +    crs_replace_with_free_ranges(crs_range_set.io_ranges,
> +                                 PCI_HOST_BRIDGE_IO_1_MIN_ADDR,
> +                                 PCI_HOST_BRIDGE_IO_1_MAX_ADDR);
The above code doesn't look like just a code movement; it's something totally new,
so it should be in its own patch with a justification of why it's OK
to replace concrete addresses with some kind of window.


> +    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
> +        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
> +        aml_append(crs,
> +            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                        AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                        0x0000, entry->base, entry->limit,
> +                        0x0000, entry->limit - entry->base + 1));
> +    }
> +

> +    /* set the vga mem region(0) in pci host bridge */
> +    aml_append(crs,
> +        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> +                         AML_CACHEABLE, AML_READ_WRITE,
> +                         0, PCI_VGA_MEM_BASE_ADDR, PCI_VGA_MEM_MAX_ADDR,
> +                         0, VGA_MEM_LEN));
This part doesn't look generic enough.
I'd assume the new shiny i386/virt won't have VGA, and
assuming one day we reuse this code for ARM, it probably doesn't
make sense to put it in generic code.

> +
> +    /* set the mem region 1 in pci host bridge */
> +    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
> +                                 range_lob(pci_hole),
> +                                 range_upb(pci_hole));
> +    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
> +        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
> +        aml_append(crs,
> +            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> +                             AML_NON_CACHEABLE, AML_READ_WRITE,
> +                             0, entry->base, entry->limit,
> +                             0, entry->limit - entry->base + 1));
> +    }
> +
> +    /* set the mem region 2 in pci host bridge */
> +    if (!range_is_empty(pci_hole64)) {
> +        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
> +                                     range_lob(pci_hole64),
> +                                     range_upb(pci_hole64));
> +        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
> +            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
> +            aml_append(crs,
> +                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                        AML_MAX_FIXED,
> +                                        AML_CACHEABLE, AML_READ_WRITE,
> +                                        0, entry->base, entry->limit,
> +                                        0, entry->limit - entry->base + 1));
> +        }
> +    }
> +
> +    if (TPM_IS_TIS(tpm_find())) {
> +        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
> +                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
ditto, board dependent and not generic enough to go here

> +    }
> +
> +    aml_append(scope, aml_name_decl("_CRS", crs));
> +    crs_range_set_free(&crs_range_set);
> +    return scope;
> +}
> +
> +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host)
not used in this patch, drop it for now.

> +{
> +    Aml *dev, *pci_scope;
> +
> +    dev = aml_device("\\_SB.PCI0");
> +    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> +    aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> +    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> +    aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> +    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +    aml_append(dev, build_osc_method(0x1F));
Is the above description applicable to all boards (pc/q35/future i386/virt)?
The current code builds PCI differently depending on the board.

> +    aml_append(dsdt, dev);
It would be better to return the PCI device and let the caller add it to the DSDT.

> +
> +    pci_scope = build_pci_host_bridge(dsdt, pci_host);
> +    aml_append(dsdt, pci_scope);
> +}
> +
>  /* Build rsdt table */
>  void
>  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index a5f5f83500..14e2624d14 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1253,16 +1253,11 @@ static void build_piix4_pci_hotplug(Aml *table)
>  static void
>  build_dsdt(GArray *table_data, BIOSLinker *linker,
>             AcpiPmInfo *pm, AcpiMiscInfo *misc,
> -           Range *pci_hole, Range *pci_hole64,
> +           AcpiPciBus *pci_host,
>             MachineState *machine, AcpiConfiguration *acpi_conf)
>  {
> -    CrsRangeEntry *entry;
>      Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
> -    CrsRangeSet crs_range_set;
>      uint32_t nr_mem = machine->ram_slots;
> -    int root_bus_limit = 0xFF;
> -    PCIBus *bus = NULL;
> -    int i;
>  
>      dsdt = init_aml_allocator();
>  
> @@ -1337,104 +1332,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>      }
>      aml_append(dsdt, scope);
>  
> -    crs_range_set_init(&crs_range_set);
> -    bus = PC_MACHINE(machine)->bus;
> -    if (bus) {
> -        QLIST_FOREACH(bus, &bus->child, sibling) {
> -            uint8_t bus_num = pci_bus_num(bus);
> -            uint8_t numa_node = pci_bus_numa_node(bus);
> -
> -            /* look only for expander root buses */
> -            if (!pci_bus_is_root(bus)) {
> -                continue;
> -            }
> -
> -            if (bus_num < root_bus_limit) {
> -                root_bus_limit = bus_num - 1;
> -            }
> -
> -            scope = aml_scope("\\_SB");
> -            dev = aml_device("PC%.02X", bus_num);
> -            aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> -            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> -            aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> -            if (pci_bus_is_express(bus)) {
> -                aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
> -            }
> -
> -            if (numa_node != NUMA_NODE_UNASSIGNED) {
> -                aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> -            }
> -
> -            aml_append(dev, build_prt(false));
> -            crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
> -            aml_append(dev, aml_name_decl("_CRS", crs));
> -            aml_append(scope, dev);
> -            aml_append(dsdt, scope);
> -        }
> -    }
> -
> -    scope = aml_scope("\\_SB.PCI0");
> -    /* build PCI0._CRS */
> -    crs = aml_resource_template();
> -    aml_append(crs,
> -        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> -                            0x0000, 0x0, root_bus_limit,
> -                            0x0000, root_bus_limit + 1));
> -    aml_append(crs, aml_io(AML_DECODE16, 0x0CF8, 0x0CF8, 0x01, 0x08));
> -
> -    aml_append(crs,
> -        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                    AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                    0x0000, 0x0000, 0x0CF7, 0x0000, 0x0CF8));
> -
> -    crs_replace_with_free_ranges(crs_range_set.io_ranges, 0x0D00, 0xFFFF);
> -    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
> -        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
> -        aml_append(crs,
> -            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                        AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                        0x0000, entry->base, entry->limit,
> -                        0x0000, entry->limit - entry->base + 1));
> -    }
> -
> -    aml_append(crs,
> -        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> -                         AML_CACHEABLE, AML_READ_WRITE,
> -                         0, 0x000A0000, 0x000BFFFF, 0, 0x00020000));
> -
> -    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
> -                                 range_lob(pci_hole),
> -                                 range_upb(pci_hole));
> -    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
> -        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
> -        aml_append(crs,
> -            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> -                             AML_NON_CACHEABLE, AML_READ_WRITE,
> -                             0, entry->base, entry->limit,
> -                             0, entry->limit - entry->base + 1));
> -    }
> -
> -    if (!range_is_empty(pci_hole64)) {
> -        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
> -                                     range_lob(pci_hole64),
> -                                     range_upb(pci_hole64));
> -        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
> -            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
> -            aml_append(crs,
> -                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                        AML_MAX_FIXED,
> -                                        AML_CACHEABLE, AML_READ_WRITE,
> -                                        0, entry->base, entry->limit,
> -                                        0, entry->limit - entry->base + 1));
> -        }
> -    }
> -
> -    if (TPM_IS_TIS(tpm_find())) {
> -        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
> -                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
> -    }
> -    aml_append(scope, aml_name_decl("_CRS", crs));
> +    scope = build_pci_host_bridge(dsdt, pci_host);
>  
>      /* reserve GPE0 block resources */
>      dev = aml_device("GPE0");
> @@ -1454,8 +1352,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>      aml_append(dev, aml_name_decl("_CRS", crs));
>      aml_append(scope, dev);
>  
> -    crs_range_set_free(&crs_range_set);
> -
>      /* reserve PCIHP resources */
>      if (pm->pcihp_io_len) {
>          dev = aml_device("PHPR");
> @@ -2012,6 +1908,11 @@ void acpi_build(AcpiBuildTables *tables,
>                               64 /* Ensure FACS is aligned */,
>                               false /* high memory */);
>  
> +    AcpiPciBus pci_host = {
> +        .pci_bus    = PC_MACHINE(machine)->bus,
> +        .pci_hole   = &pci_hole,
> +        .pci_hole64 = &pci_hole64,
> +    };
>      /*
>       * FACS is pointed to by FADT.
>       * We place it first since it's the only table that has alignment
> @@ -2023,7 +1924,7 @@ void acpi_build(AcpiBuildTables *tables,
>      /* DSDT is pointed to by FADT */
>      dsdt = tables_blob->len;
>      build_dsdt(tables_blob, tables->linker, &pm, &misc,
> -               &pci_hole, &pci_hole64, machine, acpi_conf);
> +               &pci_host, machine, acpi_conf);
>  
>      /* Count the size of the DSDT and SSDT, we will need it for legacy
>       * sizing of ACPI tables.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
@ 2018-11-14 10:55     ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-14 10:55 UTC (permalink / raw)
  To: Samuel Ortiz, Marcel Apfelbaum
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Rob Bradford, Michael S. Tsirkin, qemu-devel, Shannon Zhao,
	qemu-arm, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Mon,  5 Nov 2018 02:40:34 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> The AML build routines for the PCI host bridge and the corresponding
> DSDT addition are neither x86 nor PC machine type specific.
> We can move them to the architecture agnostic hw/acpi folder, and by
> carrying all the needed information through a new AcpiPciBus structure,
> we can make them PC machine type independent.

I don't know much about PCI, but the functional changes don't look
correct to me. See more detailed comments below.

Marcel,
could you take a look at this patch (in particular the main CRS changes), please?

> 
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> Signed-off-by: Rob Bradford <robert.bradford@intel.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/acpi/aml-build.h |   8 ++
>  hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
>  hw/i386/acpi-build.c        | 115 ++------------------------
>  3 files changed, 173 insertions(+), 107 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index fde2785b9a..1861e37ebf 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
>      uint32_t mcfg_size;
>  } AcpiMcfgInfo;
>  
> +typedef struct AcpiPciBus {
> +    PCIBus *pci_bus;
> +    Range *pci_hole;
> +    Range *pci_hole64;
> +} AcpiPciBus;
Again, this and everything below is not aml-build material.
Consider adding a PCI-specific ACPI file and using that for it.

Also, even though the PCI AML in arm/virt is to a large degree a subset
of the x86 target, and it would be much better to unify the ARM part with
x86, that would probably be too big/complex a change to take on in one go.

So, not to derail you from the goal too much, we should probably
generalize this a little bit less and limit the refactoring to the x86
target for now.

For example, move the generic x86 PCI parts to hw/i386/acpi-pci.[hc],
and structure them so that the building blocks in acpi-pci.c can be
reused for the x86 reduced profile later (see the header sketch below).
Once that's done, it should be easier and less complex to unify the more
generic code in i386/acpi-pci.c with the corresponding ARM code.

The patch is also too big and should be split into smaller logical
chunks, and you should separate code movement from the functional
changes you're making here.

Once the patch is split properly, it will be easier to assess the
changes.
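
Something along these lines, purely as a sketch (the file name and exact
contents are my assumption, not code taken from this series):

/* hw/i386/acpi-pci.h - hypothetical x86-only home for the PCI ACPI bits */
#ifndef HW_I386_ACPI_PCI_H
#define HW_I386_ACPI_PCI_H

#include "hw/acpi/aml-build.h"
#include "hw/pci/pci.h"
#include "qemu/range.h"

/* same structure as in this patch, just kept out of the generic code */
typedef struct AcpiPciBus {
    PCIBus *pci_bus;
    Range *pci_hole;
    Range *pci_hole64;
} AcpiPciBus;

/* building blocks a reduced x86 profile could reuse later */
Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);

#endif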

>  typedef struct CrsRangeEntry {
>      uint64_t base;
>      uint64_t limit;
> @@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
>  void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
>  Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
>  Aml *build_prt(bool is_pci0_prt);
> +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
> +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
>  void crs_range_set_init(CrsRangeSet *range_set);
>  Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
>  void crs_replace_with_free_ranges(GPtrArray *ranges,
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index b8e32f15f7..869ed70db3 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -29,6 +29,19 @@
>  #include "hw/pci/pci_bus.h"
>  #include "qemu/range.h"
>  #include "hw/pci/pci_bridge.h"
> +#include "hw/i386/pc.h"
> +#include "sysemu/tpm.h"
> +#include "hw/acpi/tpm.h"
> +
> +#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
> +#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
> +#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
> +#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
> +#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
> +#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
> +#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
> +#define IO_0_LEN                           0xcf8
> +#define VGA_MEM_LEN                        0x20000
>  
>  static GArray *build_alloc_array(void)
>  {
> @@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
>      return method;
>  }
>  
> +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
The name doesn't reflect exactly what the function does:
it builds device descriptions for the expander buses (including their CRS)
and then builds the CRS for the main PCI host, but not the PCI device description.

I'd suggest splitting the expander bus part out into a separate function
that returns a single expander bus device description and updates crs_range_set,
and letting the caller enumerate the buses and add the descriptions to the DSDT
(see the sketch below).

After that we could do a generic CRS generation function for the main PCI host,
if that's possible at all (the main PCI host CRS seems heavily board dependent).

Instead of taking the table and adding stuff directly into it,
it would be cleaner to take an empty CRS as an argument (crs = aml_resource_template();),
add stuff to it, and let the caller add/extend the CRS as/where necessary.
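
Roughly something like this (the function name and exact shape are only a
sketch of the idea, not a requirement):

/* returns one expander bus device, updates the caller's crs_range_set */
static Aml *build_expander_bus_dev(PCIBus *bus, CrsRangeSet *crs_range_set)
{
    uint8_t bus_num = pci_bus_num(bus);
    uint8_t numa_node = pci_bus_numa_node(bus);
    Aml *dev = aml_device("PC%.02X", bus_num);

    aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
    aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
    if (pci_bus_is_express(bus)) {
        aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
    }
    if (numa_node != NUMA_NODE_UNASSIGNED) {
        aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
    }
    aml_append(dev, build_prt(false));
    /* build_crs() records the used ranges in crs_range_set as it goes */
    aml_append(dev, aml_name_decl("_CRS",
               build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), crs_range_set)));
    return dev;
}

The caller would then keep the QLIST_FOREACH() over the child buses, skip
non-root buses, track root_bus_limit and append each returned device to the
DSDT under \_SB.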

> +{
> +    CrsRangeEntry *entry;
> +    Aml *scope, *dev, *crs;
> +    CrsRangeSet crs_range_set;
> +    Range *pci_hole = NULL;
> +    Range *pci_hole64 = NULL;
> +    PCIBus *bus = NULL;
> +    int root_bus_limit = 0xFF;
> +    int i;
> +
> +    bus = pci_host->pci_bus;
> +    assert(bus);
> +    pci_hole = pci_host->pci_hole;
> +    pci_hole64 = pci_host->pci_hole64;
> +
> +    crs_range_set_init(&crs_range_set);
> +    QLIST_FOREACH(bus, &bus->child, sibling) {
> +        uint8_t bus_num = pci_bus_num(bus);
> +        uint8_t numa_node = pci_bus_numa_node(bus);
> +
> +        /* look only for expander root buses */
> +        if (!pci_bus_is_root(bus)) {
> +            continue;
> +        }
> +
> +        if (bus_num < root_bus_limit) {
> +            root_bus_limit = bus_num - 1;
> +        }
> +
> +        scope = aml_scope("\\_SB");
> +        dev = aml_device("PC%.02X", bus_num);
> +        aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +        aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> +        aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> +        if (pci_bus_is_express(bus)) {
> +            aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +            aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +            aml_append(dev, build_osc_method(0x1F));
> +        }
> +        if (numa_node != NUMA_NODE_UNASSIGNED) {
> +            aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> +        }
> +
> +        aml_append(dev, build_prt(false));
> +        crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
> +        aml_append(dev, aml_name_decl("_CRS", crs));
> +        aml_append(scope, dev);
> +        aml_append(table, scope);
> +    }
> +    scope = aml_scope("\\_SB.PCI0");
> +    /* build PCI0._CRS */
> +    crs = aml_resource_template();
> +    /* set the pcie bus num */
> +    aml_append(crs,
> +        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> +                            0x0000, 0x0, root_bus_limit,
> +                            0x0000, root_bus_limit + 1));

vvvv
> +    aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
> +                           PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
> +    /* set the io region 0 in pci host bridge */
> +    aml_append(crs,
> +        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                    AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                    0x0000, PCI_HOST_BRIDGE_IO_0_MIN_ADDR,
> +                    PCI_HOST_BRIDGE_IO_0_MAX_ADDR, 0x0000, IO_0_LEN));
> +
> +    /* set the io region 1 in pci host bridge */
> +    crs_replace_with_free_ranges(crs_range_set.io_ranges,
> +                                 PCI_HOST_BRIDGE_IO_1_MIN_ADDR,
> +                                 PCI_HOST_BRIDGE_IO_1_MAX_ADDR);
The above code doesn't look like just code movement, it's something totally new,
so it should be in its own patch with a justification for why it's OK
to replace concrete addresses with some kind of window.


> +    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
> +        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
> +        aml_append(crs,
> +            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> +                        AML_POS_DECODE, AML_ENTIRE_RANGE,
> +                        0x0000, entry->base, entry->limit,
> +                        0x0000, entry->limit - entry->base + 1));
> +    }
> +

> +    /* set the vga mem region(0) in pci host bridge */
> +    aml_append(crs,
> +        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> +                         AML_CACHEABLE, AML_READ_WRITE,
> +                         0, PCI_VGA_MEM_BASE_ADDR, PCI_VGA_MEM_MAX_ADDR,
> +                         0, VGA_MEM_LEN));
This part doesn't look generic enough.
I'd assume the new shiny i386/virt won't have VGA, and
assuming one day we reuse this code for ARM, it probably doesn't
make sense to put it in generic code.

> +
> +    /* set the mem region 1 in pci host bridge */
> +    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
> +                                 range_lob(pci_hole),
> +                                 range_upb(pci_hole));
> +    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
> +        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
> +        aml_append(crs,
> +            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> +                             AML_NON_CACHEABLE, AML_READ_WRITE,
> +                             0, entry->base, entry->limit,
> +                             0, entry->limit - entry->base + 1));
> +    }
> +
> +    /* set the mem region 2 in pci host bridge */
> +    if (!range_is_empty(pci_hole64)) {
> +        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
> +                                     range_lob(pci_hole64),
> +                                     range_upb(pci_hole64));
> +        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
> +            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
> +            aml_append(crs,
> +                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> +                                        AML_MAX_FIXED,
> +                                        AML_CACHEABLE, AML_READ_WRITE,
> +                                        0, entry->base, entry->limit,
> +                                        0, entry->limit - entry->base + 1));
> +        }
> +    }
> +
> +    if (TPM_IS_TIS(tpm_find())) {
> +        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
> +                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
ditto, board dependent and not generic enough to go here

> +    }
> +
> +    aml_append(scope, aml_name_decl("_CRS", crs));
> +    crs_range_set_free(&crs_range_set);
> +    return scope;
> +}
> +
> +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host)
not used in this patch, drop it for now.

> +{
> +    Aml *dev, *pci_scope;
> +
> +    dev = aml_device("\\_SB.PCI0");
> +    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> +    aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> +    aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> +    aml_append(dev, aml_name_decl("_UID", aml_int(1)));
> +    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +    aml_append(dev, build_osc_method(0x1F));
Is the above description applicable to all boards (pc/q35/future i386/virt)?
The current code builds PCI differently depending on the board.

> +    aml_append(dsdt, dev);
it would be better to return the PCI dev and let the caller add it to the DSDT.

> +
> +    pci_scope = build_pci_host_bridge(dsdt, pci_host);
> +    aml_append(dsdt, pci_scope);
> +}
> +
>  /* Build rsdt table */
>  void
>  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index a5f5f83500..14e2624d14 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1253,16 +1253,11 @@ static void build_piix4_pci_hotplug(Aml *table)
>  static void
>  build_dsdt(GArray *table_data, BIOSLinker *linker,
>             AcpiPmInfo *pm, AcpiMiscInfo *misc,
> -           Range *pci_hole, Range *pci_hole64,
> +           AcpiPciBus *pci_host,
>             MachineState *machine, AcpiConfiguration *acpi_conf)
>  {
> -    CrsRangeEntry *entry;
>      Aml *dsdt, *sb_scope, *scope, *dev, *method, *field, *pkg, *crs;
> -    CrsRangeSet crs_range_set;
>      uint32_t nr_mem = machine->ram_slots;
> -    int root_bus_limit = 0xFF;
> -    PCIBus *bus = NULL;
> -    int i;
>  
>      dsdt = init_aml_allocator();
>  
> @@ -1337,104 +1332,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>      }
>      aml_append(dsdt, scope);
>  
> -    crs_range_set_init(&crs_range_set);
> -    bus = PC_MACHINE(machine)->bus;
> -    if (bus) {
> -        QLIST_FOREACH(bus, &bus->child, sibling) {
> -            uint8_t bus_num = pci_bus_num(bus);
> -            uint8_t numa_node = pci_bus_numa_node(bus);
> -
> -            /* look only for expander root buses */
> -            if (!pci_bus_is_root(bus)) {
> -                continue;
> -            }
> -
> -            if (bus_num < root_bus_limit) {
> -                root_bus_limit = bus_num - 1;
> -            }
> -
> -            scope = aml_scope("\\_SB");
> -            dev = aml_device("PC%.02X", bus_num);
> -            aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> -            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> -            aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> -            if (pci_bus_is_express(bus)) {
> -                aml_append(dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
> -            }
> -
> -            if (numa_node != NUMA_NODE_UNASSIGNED) {
> -                aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> -            }
> -
> -            aml_append(dev, build_prt(false));
> -            crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
> -            aml_append(dev, aml_name_decl("_CRS", crs));
> -            aml_append(scope, dev);
> -            aml_append(dsdt, scope);
> -        }
> -    }
> -
> -    scope = aml_scope("\\_SB.PCI0");
> -    /* build PCI0._CRS */
> -    crs = aml_resource_template();
> -    aml_append(crs,
> -        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> -                            0x0000, 0x0, root_bus_limit,
> -                            0x0000, root_bus_limit + 1));
> -    aml_append(crs, aml_io(AML_DECODE16, 0x0CF8, 0x0CF8, 0x01, 0x08));
> -
> -    aml_append(crs,
> -        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                    AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                    0x0000, 0x0000, 0x0CF7, 0x0000, 0x0CF8));
> -
> -    crs_replace_with_free_ranges(crs_range_set.io_ranges, 0x0D00, 0xFFFF);
> -    for (i = 0; i < crs_range_set.io_ranges->len; i++) {
> -        entry = g_ptr_array_index(crs_range_set.io_ranges, i);
> -        aml_append(crs,
> -            aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> -                        AML_POS_DECODE, AML_ENTIRE_RANGE,
> -                        0x0000, entry->base, entry->limit,
> -                        0x0000, entry->limit - entry->base + 1));
> -    }
> -
> -    aml_append(crs,
> -        aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> -                         AML_CACHEABLE, AML_READ_WRITE,
> -                         0, 0x000A0000, 0x000BFFFF, 0, 0x00020000));
> -
> -    crs_replace_with_free_ranges(crs_range_set.mem_ranges,
> -                                 range_lob(pci_hole),
> -                                 range_upb(pci_hole));
> -    for (i = 0; i < crs_range_set.mem_ranges->len; i++) {
> -        entry = g_ptr_array_index(crs_range_set.mem_ranges, i);
> -        aml_append(crs,
> -            aml_dword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
> -                             AML_NON_CACHEABLE, AML_READ_WRITE,
> -                             0, entry->base, entry->limit,
> -                             0, entry->limit - entry->base + 1));
> -    }
> -
> -    if (!range_is_empty(pci_hole64)) {
> -        crs_replace_with_free_ranges(crs_range_set.mem_64bit_ranges,
> -                                     range_lob(pci_hole64),
> -                                     range_upb(pci_hole64));
> -        for (i = 0; i < crs_range_set.mem_64bit_ranges->len; i++) {
> -            entry = g_ptr_array_index(crs_range_set.mem_64bit_ranges, i);
> -            aml_append(crs,
> -                       aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED,
> -                                        AML_MAX_FIXED,
> -                                        AML_CACHEABLE, AML_READ_WRITE,
> -                                        0, entry->base, entry->limit,
> -                                        0, entry->limit - entry->base + 1));
> -        }
> -    }
> -
> -    if (TPM_IS_TIS(tpm_find())) {
> -        aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
> -                   TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
> -    }
> -    aml_append(scope, aml_name_decl("_CRS", crs));
> +    scope = build_pci_host_bridge(dsdt, pci_host);
>  
>      /* reserve GPE0 block resources */
>      dev = aml_device("GPE0");
> @@ -1454,8 +1352,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>      aml_append(dev, aml_name_decl("_CRS", crs));
>      aml_append(scope, dev);
>  
> -    crs_range_set_free(&crs_range_set);
> -
>      /* reserve PCIHP resources */
>      if (pm->pcihp_io_len) {
>          dev = aml_device("PHPR");
> @@ -2012,6 +1908,11 @@ void acpi_build(AcpiBuildTables *tables,
>                               64 /* Ensure FACS is aligned */,
>                               false /* high memory */);
>  
> +    AcpiPciBus pci_host = {
> +        .pci_bus    = PC_MACHINE(machine)->bus,
> +        .pci_hole   = &pci_hole,
> +        .pci_hole64 = &pci_hole64,
> +    };
>      /*
>       * FACS is pointed to by FADT.
>       * We place it first since it's the only table that has alignment
> @@ -2023,7 +1924,7 @@ void acpi_build(AcpiBuildTables *tables,
>      /* DSDT is pointed to by FADT */
>      dsdt = tables_blob->len;
>      build_dsdt(tables_blob, tables->linker, &pm, &misc,
> -               &pci_hole, &pci_hole64, machine, acpi_conf);
> +               &pci_host, machine, acpi_conf);
>  
>      /* Count the size of the DSDT and SSDT, we will need it for legacy
>       * sizing of ACPI tables.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-15 12:36     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 12:36 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost,
	Yang Zhong

On Mon,  5 Nov 2018 02:40:35 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> The ACPI MCFG getter is not x86 specific and could be called from
> anywhere within generic ACPI API, so let's export it.
So far it's an x86, or more exactly q35, specific thing;
for example, it won't work with arm/virt without refactoring
the latter.

We probably over-engineered it and could get by without
properties here at all, but that's not related to this series.

So I'd suggest moving it to x86/acpi-pci.c for now,
and we can generalize the way we get the address/size later;
but if you feel like replacing all these property complications
and making it simpler, go ahead. A minimal sketch of the simpler
direction is below.
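
For example, something as simple as this (just a sketch; it assumes the
caller already holds the PCI-E host pointer, e.g. via AcpiPciBus):

#include "hw/pci/pcie_host.h"

static bool acpi_get_mcfg_from_host(PCIExpressHost *pex, AcpiMcfgInfo *mcfg)
{
    if (!pex) {
        return false;                      /* not a PCI-E host, no MCFG */
    }
    mcfg->mcfg_base = pex->base_addr;      /* may be PCIE_BASE_ADDR_UNMAPPED */
    mcfg->mcfg_size = pex->size;
    return true;
}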


 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> ---
>  include/hw/acpi/aml-build.h |  1 +
>  hw/acpi/aml-build.c         | 24 ++++++++++++++++++++++++
>  hw/i386/acpi-build.c        | 22 ----------------------
>  3 files changed, 25 insertions(+), 22 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index 1861e37ebf..64ea371656 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -408,6 +408,7 @@ void *acpi_data_push(GArray *table_data, unsigned size);
>  unsigned acpi_data_len(GArray *table);
>  Object *acpi_get_pci_host(void);
>  void acpi_get_pci_holes(Range *hole, Range *hole64);
> +bool acpi_get_mcfg(AcpiMcfgInfo *mcfg);
>  /* Align AML blob size to a multiple of 'align' */
>  void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 869ed70db3..2c5446ab23 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -32,6 +32,8 @@
>  #include "hw/i386/pc.h"
>  #include "sysemu/tpm.h"
>  #include "hw/acpi/tpm.h"
> +#include "qom/qom-qobject.h"
> +#include "qapi/qmp/qnum.h"
>  
>  #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
>  #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
> @@ -1657,6 +1659,28 @@ void acpi_get_pci_holes(Range *hole, Range *hole64)
>                                                 NULL));
>  }
>  
> +bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
> +{
> +    Object *pci_host;
> +    QObject *o;
> +
> +    pci_host = acpi_get_pci_host();

It would be better to get the bus from struct AcpiPciBus
instead of doing the lookup again. (That's a separate patch from moving the function around.)

> +    g_assert(pci_host);
> +
> +    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
> +    if (!o) {
> +        return false;
> +    }
> +    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
> +    qobject_unref(o);
> +
> +    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
> +    assert(o);
> +    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
> +    qobject_unref(o);
> +    return true;
> +}
> +
>  static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
>  {
>      CrsRangeEntry *entry;
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 14e2624d14..d8bba16776 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1856,28 +1856,6 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
>                   "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
>  }
>  
> -static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
> -{
> -    Object *pci_host;
> -    QObject *o;
> -
> -    pci_host = acpi_get_pci_host();
> -    g_assert(pci_host);
> -
> -    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
> -    if (!o) {
> -        return false;
> -    }
> -    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
> -    qobject_unref(o);
> -
> -    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
> -    assert(o);
> -    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
> -    qobject_unref(o);
> -    return true;
> -}
> -
>  static
>  void acpi_build(AcpiBuildTables *tables,
>                  MachineState *machine, AcpiConfiguration *acpi_conf)

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 12/24] hw: acpi: Export the MCFG getter
@ 2018-11-15 12:36     ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 12:36 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Mon,  5 Nov 2018 02:40:35 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Yang Zhong <yang.zhong@intel.com>
> 
> The ACPI MCFG getter is not x86 specific and could be called from
> anywhere within generic ACPI API, so let's export it.
So far it's an x86, or more exactly q35, specific thing;
for example, it won't work with arm/virt without refactoring
the latter.

We probably over-engineered it and could get by without
properties here at all, but that's not related to this series.

So I'd suggest moving it to x86/acpi-pci.c for now,
and we can generalize the way we get the address/size later;
but if you feel like replacing all these property complications
and making it simpler, go ahead.


 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> ---
>  include/hw/acpi/aml-build.h |  1 +
>  hw/acpi/aml-build.c         | 24 ++++++++++++++++++++++++
>  hw/i386/acpi-build.c        | 22 ----------------------
>  3 files changed, 25 insertions(+), 22 deletions(-)
> 
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index 1861e37ebf..64ea371656 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -408,6 +408,7 @@ void *acpi_data_push(GArray *table_data, unsigned size);
>  unsigned acpi_data_len(GArray *table);
>  Object *acpi_get_pci_host(void);
>  void acpi_get_pci_holes(Range *hole, Range *hole64);
> +bool acpi_get_mcfg(AcpiMcfgInfo *mcfg);
>  /* Align AML blob size to a multiple of 'align' */
>  void acpi_align_size(GArray *blob, unsigned align);
>  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 869ed70db3..2c5446ab23 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -32,6 +32,8 @@
>  #include "hw/i386/pc.h"
>  #include "sysemu/tpm.h"
>  #include "hw/acpi/tpm.h"
> +#include "qom/qom-qobject.h"
> +#include "qapi/qmp/qnum.h"
>  
>  #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
>  #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
> @@ -1657,6 +1659,28 @@ void acpi_get_pci_holes(Range *hole, Range *hole64)
>                                                 NULL));
>  }
>  
> +bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
> +{
> +    Object *pci_host;
> +    QObject *o;
> +
> +    pci_host = acpi_get_pci_host();

It would be better to get the bus from struct AcpiPciBus
instead of doing the lookup again. (That's a separate patch from moving the function around.)

> +    g_assert(pci_host);
> +
> +    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
> +    if (!o) {
> +        return false;
> +    }
> +    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
> +    qobject_unref(o);
> +
> +    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
> +    assert(o);
> +    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
> +    qobject_unref(o);
> +    return true;
> +}
> +
>  static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
>  {
>      CrsRangeEntry *entry;
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 14e2624d14..d8bba16776 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1856,28 +1856,6 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
>                   "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
>  }
>  
> -static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
> -{
> -    Object *pci_host;
> -    QObject *o;
> -
> -    pci_host = acpi_get_pci_host();
> -    g_assert(pci_host);
> -
> -    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
> -    if (!o) {
> -        return false;
> -    }
> -    mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
> -    qobject_unref(o);
> -
> -    o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
> -    assert(o);
> -    mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
> -    qobject_unref(o);
> -    return true;
> -}
> -
>  static
>  void acpi_build(AcpiBuildTables *tables,
>                  MachineState *machine, AcpiConfiguration *acpi_conf)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 14/24] hw: i386: Make the hotpluggable memory size property more generic
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-15 12:49     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 12:49 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon,  5 Nov 2018 02:40:37 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This property is currently defined under i386/pc while it only describes
> a region size that's eventually fetched from the AML ACPI code.
> 
> We can make it more generic and shareable across machine types by moving
> it to memory-device.h instead.
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Not sure it belongs to this series, but regardless of where it ends up:

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  include/hw/i386/pc.h           | 1 -
>  include/hw/mem/memory-device.h | 2 ++
>  hw/i386/acpi-build.c           | 2 +-
>  hw/i386/pc.c                   | 3 ++-
>  4 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index bbbdb33ea3..44cb6bf3f3 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -62,7 +62,6 @@ struct PCMachineState {
>  };
>  
>  #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
> -#define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
>  #define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
>  #define PC_MACHINE_VMPORT           "vmport"
>  #define PC_MACHINE_SMM              "smm"
> diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
> index e904e194d5..d9a4fc7c3e 100644
> --- a/include/hw/mem/memory-device.h
> +++ b/include/hw/mem/memory-device.h
> @@ -97,6 +97,8 @@ typedef struct MemoryDeviceClass {
>                               MemoryDeviceInfo *info);
>  } MemoryDeviceClass;
>  
> +#define MEMORY_DEVICE_REGION_SIZE "memory-device-region-size"
> +
>  MemoryDeviceInfoList *qmp_memory_device_list(void);
>  uint64_t get_plugged_memory_size(void);
>  void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index d8bba16776..1ef1a38441 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1628,7 +1628,7 @@ build_srat(GArray *table_data, BIOSLinker *linker,
>      MachineClass *mc = MACHINE_GET_CLASS(machine);
>      const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
>      ram_addr_t hotplugabble_address_space_size =
> -        object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
> +        object_property_get_int(OBJECT(machine), MEMORY_DEVICE_REGION_SIZE,
>                                  NULL);
>  
>      srat_start = table_data->len;
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 090f969933..c9ffc8cff6 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -67,6 +67,7 @@
>  #include "hw/boards.h"
>  #include "acpi-build.h"
>  #include "hw/mem/pc-dimm.h"
> +#include "hw/mem/memory-device.h"
>  #include "qapi/error.h"
>  #include "qapi/qapi-visit-common.h"
>  #include "qapi/visitor.h"
> @@ -2443,7 +2444,7 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
>      nc->nmi_monitor_handler = x86_nmi;
>      mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
>  
> -    object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
> +    object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
>          pc_machine_get_device_memory_region_size, NULL,
>          NULL, NULL, &error_abort);
>  

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 14/24] hw: i386: Make the hotpluggable memory size property more generic
@ 2018-11-15 12:49     ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 12:49 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Mon,  5 Nov 2018 02:40:37 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This property is currently defined under i386/pc while it only describes
> a region size that's eventually fetched from the AML ACPI code.
> 
> We can make it more generic and shareable across machine types by moving
> it to memory-device.h instead.
> 
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Not sure it belongs to this series, but regardless of where it ends up:

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  include/hw/i386/pc.h           | 1 -
>  include/hw/mem/memory-device.h | 2 ++
>  hw/i386/acpi-build.c           | 2 +-
>  hw/i386/pc.c                   | 3 ++-
>  4 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index bbbdb33ea3..44cb6bf3f3 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -62,7 +62,6 @@ struct PCMachineState {
>  };
>  
>  #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
> -#define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
>  #define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
>  #define PC_MACHINE_VMPORT           "vmport"
>  #define PC_MACHINE_SMM              "smm"
> diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
> index e904e194d5..d9a4fc7c3e 100644
> --- a/include/hw/mem/memory-device.h
> +++ b/include/hw/mem/memory-device.h
> @@ -97,6 +97,8 @@ typedef struct MemoryDeviceClass {
>                               MemoryDeviceInfo *info);
>  } MemoryDeviceClass;
>  
> +#define MEMORY_DEVICE_REGION_SIZE "memory-device-region-size"
> +
>  MemoryDeviceInfoList *qmp_memory_device_list(void);
>  uint64_t get_plugged_memory_size(void);
>  void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index d8bba16776..1ef1a38441 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1628,7 +1628,7 @@ build_srat(GArray *table_data, BIOSLinker *linker,
>      MachineClass *mc = MACHINE_GET_CLASS(machine);
>      const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
>      ram_addr_t hotplugabble_address_space_size =
> -        object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
> +        object_property_get_int(OBJECT(machine), MEMORY_DEVICE_REGION_SIZE,
>                                  NULL);
>  
>      srat_start = table_data->len;
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 090f969933..c9ffc8cff6 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -67,6 +67,7 @@
>  #include "hw/boards.h"
>  #include "acpi-build.h"
>  #include "hw/mem/pc-dimm.h"
> +#include "hw/mem/memory-device.h"
>  #include "qapi/error.h"
>  #include "qapi/qapi-visit-common.h"
>  #include "qapi/visitor.h"
> @@ -2443,7 +2444,7 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
>      nc->nmi_monitor_handler = x86_nmi;
>      mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
>  
> -    object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
> +    object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
>          pc_machine_get_device_memory_region_size, NULL,
>          NULL, NULL, &error_abort);
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-15 13:28     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 13:28 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:38 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This is the standard way of building SRAT on x86 platfoms. But future
> machine types could decide to define their own custom SRAT build method
> through the ACPI builder methods.
> Moreover, we will also need to reach build_srat() from outside of
> acpi-build in order to use it as the ACPI builder SRAT build method.
SRAT is usually highly machine specific (memory holes, layout, guest OS
specific quirks), so it's hard to generalize.

I'd drop the SRAT-related patches from this series and introduce an
i386/virt-specific SRAT when you post the patches for it.

What we could generalize here are the building blocks used to
create the entries, moving them into acpi/aml-build.c:
   build_srat_memory -> build_srat_memory_entry()
   build_apic_entry()
   build_x2apic_entry()
and please switch these blocks to the build_append_int_noprefix() API
before moving them to acpi/aml-build.c (see the sketch below).
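
E.g. build_srat_memory_entry() could end up looking roughly like this
(layout per the ACPI Memory Affinity Structure; the name and the final
location are only my suggestion):

static void build_srat_memory_entry(GArray *table_data, uint64_t base,
                                    uint64_t len, int node,
                                    MemoryAffinityFlags flags)
{
    build_append_int_noprefix(table_data, 1, 1);     /* Type: Memory Affinity */
    build_append_int_noprefix(table_data, 40, 1);    /* Length */
    build_append_int_noprefix(table_data, node, 4);  /* Proximity Domain */
    build_append_int_noprefix(table_data, 0, 2);     /* Reserved */
    build_append_int_noprefix(table_data, base, 8);  /* Base Address */
    build_append_int_noprefix(table_data, len, 8);   /* Length */
    build_append_int_noprefix(table_data, 0, 4);     /* Reserved */
    build_append_int_noprefix(table_data, flags, 4); /* Flags */
    build_append_int_noprefix(table_data, 0, 8);     /* Reserved */
}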

> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  hw/i386/acpi-build.h | 5 +++++
>  hw/i386/acpi-build.c | 2 +-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
> index 065a1d8250..d73c41fe8f 100644
> --- a/hw/i386/acpi-build.h
> +++ b/hw/i386/acpi-build.h
> @@ -4,6 +4,11 @@
>  
>  #include "hw/acpi/acpi.h"
>  
> +/* ACPI SRAT (Static Resource Affinity Table) build method for x86 */
> +void
> +build_srat(GArray *table_data, BIOSLinker *linker,
> +           MachineState *machine, AcpiConfiguration *acpi_conf);
> +
>  void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
>  
>  #endif
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 1ef1a38441..673c5dfafc 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1615,7 +1615,7 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray *tcpalog)
>  #define HOLE_640K_START  (640 * KiB)
>  #define HOLE_640K_END   (1 * MiB)
>  
> -static void
> +void
>  build_srat(GArray *table_data, BIOSLinker *linker,
>             MachineState *machine, AcpiConfiguration *acpi_conf)
>  {

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method
@ 2018-11-15 13:28     ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-15 13:28 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Mon,  5 Nov 2018 02:40:38 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This is the standard way of building SRAT on x86 platfoms. But future
> machine types could decide to define their own custom SRAT build method
> through the ACPI builder methods.
> Moreover, we will also need to reach build_srat() from outside of
> acpi-build in order to use it as the ACPI builder SRAT build method.
SRAT is usually highly machine specific (memory holes, layout, guest OS
specific quirks), so it's hard to generalize.

I'd drop the SRAT-related patches from this series and introduce an
i386/virt-specific SRAT when you post the patches for it.

What we could generalize here are the building blocks used to
create the entries, moving them into acpi/aml-build.c:
   build_srat_memory -> build_srat_memory_entry()
   build_apic_entry()
   build_x2apic_entry()
and please switch these blocks to the build_append_int_noprefix() API
before moving them to acpi/aml-build.c

> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  hw/i386/acpi-build.h | 5 +++++
>  hw/i386/acpi-build.c | 2 +-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
> index 065a1d8250..d73c41fe8f 100644
> --- a/hw/i386/acpi-build.h
> +++ b/hw/i386/acpi-build.h
> @@ -4,6 +4,11 @@
>  
>  #include "hw/acpi/acpi.h"
>  
> +/* ACPI SRAT (Static Resource Affinity Table) build method for x86 */
> +void
> +build_srat(GArray *table_data, BIOSLinker *linker,
> +           MachineState *machine, AcpiConfiguration *acpi_conf);
> +
>  void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
>  
>  #endif
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 1ef1a38441..673c5dfafc 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1615,7 +1615,7 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray *tcpalog)
>  #define HOLE_640K_START  (640 * KiB)
>  #define HOLE_640K_END   (1 * MiB)
>  
> -static void
> +void
>  build_srat(GArray *table_data, BIOSLinker *linker,
>             MachineState *machine, AcpiConfiguration *acpi_conf)
>  {


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 18/24] hw: i386: Export the MADT build method
  2018-11-05  1:40   ` Samuel Ortiz
  (?)
  (?)
@ 2018-11-16  9:27   ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16  9:27 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:41 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> It is going to be used by the PC machine type as the MADT table builder
> method and thus needs to be exported outside of acpi-build.c
> 
> Also, now that the generic build_madt() API is exported, we have to
> rename the ARM static one in order to avoid build time conflicts.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/i386/acpi.h   | 28 ++++++++++++++++++++++++++++
>  hw/arm/virt-acpi-build.c |  4 ++--
>  hw/i386/acpi-build.c     |  4 ++--
>  3 files changed, 32 insertions(+), 4 deletions(-)
>  create mode 100644 include/hw/i386/acpi.h
> 
> diff --git a/include/hw/i386/acpi.h b/include/hw/i386/acpi.h
> new file mode 100644
> index 0000000000..b7a887111d
> --- /dev/null
> +++ b/include/hw/i386/acpi.h
[...]

> +/* ACPI MADT (Multiple APIC Description Table) build method */
> +void build_madt(GArray *table_data, BIOSLinker *linker,
> +                MachineState *ms, AcpiConfiguration *conf);
> +
> +#endif
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index b5e165543a..b0354c5f03 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -564,7 +564,7 @@ build_gtdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  
>  /* MADT */
>  static void
> -build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> +virt_build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  {
You are moving build_madt() into the x86-specific header i386/acpi.h,
so the question is: why touch the ARM variant at all?

[...]

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 18/24] hw: i386: Export the MADT build method
  2018-11-05  1:40   ` Samuel Ortiz
  (?)
@ 2018-11-16  9:27   ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16  9:27 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Mon,  5 Nov 2018 02:40:41 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> It is going to be used by the PC machine type as the MADT table builder
> method and thus needs to be exported outside of acpi-build.c
> 
> Also, now that the generic build_madt() API is exported, we have to
> rename the ARM static one in order to avoid build time conflicts.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/i386/acpi.h   | 28 ++++++++++++++++++++++++++++
>  hw/arm/virt-acpi-build.c |  4 ++--
>  hw/i386/acpi-build.c     |  4 ++--
>  3 files changed, 32 insertions(+), 4 deletions(-)
>  create mode 100644 include/hw/i386/acpi.h
> 
> diff --git a/include/hw/i386/acpi.h b/include/hw/i386/acpi.h
> new file mode 100644
> index 0000000000..b7a887111d
> --- /dev/null
> +++ b/include/hw/i386/acpi.h
[...]

> +/* ACPI MADT (Multiple APIC Description Table) build method */
> +void build_madt(GArray *table_data, BIOSLinker *linker,
> +                MachineState *ms, AcpiConfiguration *conf);
> +
> +#endif
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index b5e165543a..b0354c5f03 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -564,7 +564,7 @@ build_gtdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  
>  /* MADT */
>  static void
> -build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> +virt_build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>  {
You are moving build_madt() into the x86-specific header i386/acpi.h,
so the question is: why touch the ARM variant at all?

[...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-16  9:39     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16  9:39 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost,
	Sebastien Boeuf, Jing Liu

On Mon,  5 Nov 2018 02:40:42 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> 
> Instead of using the machine type specific method find_i440fx() to
> retrieve the PCI bus, this commit aims to rely on the fact that the
> PCI bus is known by the structure AcpiPciHpState.
> 
> When the structure is initialized through acpi_pcihp_init() call,
> it saves the PCI bus, which means there is no need to invoke a
> special function later on.
> 
> Based on the fact that find_i440fx() was only used there, this
> patch also removes the function find_i440fx() itself from the
> entire codebase.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
Thanks for cleaning it up

minor nit:
Taking into account that you're removing the '/* TODO: Q35 support */'
comment along with find_i440fx(), it might be worth mentioning that
in the commit message. Something along the lines that ACPI PCIHP
exists to support guests without SHPC support on the PCI-based
PC machine. Considering that Q35 provides native
PCI-E hotplug, there is no need to add ACPI hotplug there.


with commit message fixed

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  include/hw/i386/pc.h  |  1 -
>  hw/acpi/pcihp.c       | 10 ++++------
>  hw/pci-host/piix.c    |  8 --------
>  stubs/pci-host-piix.c |  6 ------
>  stubs/Makefile.objs   |  1 -
>  5 files changed, 4 insertions(+), 22 deletions(-)
>  delete mode 100644 stubs/pci-host-piix.c
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 44cb6bf3f3..8e5f1464eb 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
>                      MemoryRegion *pci_memory,
>                      MemoryRegion *ram_memory);
>  
> -PCIBus *find_i440fx(void);
>  /* piix4.c */
>  extern PCIDevice *piix4_dev;
>  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 80d42e12ff..254b2e50ab 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void *opaque)
>      return bsel_alloc;
>  }
>  
> -static void acpi_set_pci_info(void)
> +static void acpi_set_pci_info(AcpiPciHpState *s)
>  {
>      static bool bsel_is_set;
> -    PCIBus *bus;
>      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
>  
>      if (bsel_is_set) {
> @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
>      }
>      bsel_is_set = true;
>  
> -    bus = find_i440fx(); /* TODO: Q35 support */
> -    if (bus) {
> +    if (s->root) {
>          /* Scan all PCI buses. Set property to enable acpi based hotplug. */
> -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
> +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL, &bsel_alloc);
>      }
>  }
>  
> @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState *s)
>  
>  void acpi_pcihp_reset(AcpiPciHpState *s)
>  {
> -    acpi_set_pci_info();
> +    acpi_set_pci_info(s);
>      acpi_pcihp_update(s);
>  }
>  
> diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> index 47293a3915..658460264b 100644
> --- a/hw/pci-host/piix.c
> +++ b/hw/pci-host/piix.c
> @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
>      return b;
>  }
>  
> -PCIBus *find_i440fx(void)
> -{
> -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> -                                   object_resolve_path("/machine/i440fx", NULL),
> -                                   TYPE_PCI_HOST_BRIDGE);
> -    return s ? s->bus : NULL;
> -}
> -
>  /* PIIX3 PCI to ISA bridge */
>  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
>  {
> diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> deleted file mode 100644
> index 6ed81b1f21..0000000000
> --- a/stubs/pci-host-piix.c
> +++ /dev/null
> @@ -1,6 +0,0 @@
> -#include "qemu/osdep.h"
> -#include "hw/i386/pc.h"
> -PCIBus *find_i440fx(void)
> -{
> -    return NULL;
> -}
> diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> index 5dd0aeeec6..725f78bedc 100644
> --- a/stubs/Makefile.objs
> +++ b/stubs/Makefile.objs
> @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
>  stub-obj-y += vmgenid.o
>  stub-obj-y += xen-common.o
>  stub-obj-y += xen-hvm.o
> -stub-obj-y += pci-host-piix.o
>  stub-obj-y += ram-block.o
>  stub-obj-y += ramfb.o

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
@ 2018-11-16  9:39     ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16  9:39 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Jing Liu, qemu-devel, Shannon Zhao, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Sebastien Boeuf, Richard Henderson

On Mon,  5 Nov 2018 02:40:42 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> 
> Instead of using the machine type specific method find_i440fx() to
> retrieve the PCI bus, this commit aims to rely on the fact that the
> PCI bus is known by the structure AcpiPciHpState.
> 
> When the structure is initialized through acpi_pcihp_init() call,
> it saves the PCI bus, which means there is no need to invoke a
> special function later on.
> 
> Based on the fact that find_i440fx() was only used there, this
> patch also removes the function find_i440fx() itself from the
> entire codebase.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
Thanks for cleaning it up

minor nit:
Taking into account that you're removing the '/* TODO: Q35 support */'
comment along with find_i440fx(), it might be worth mentioning that
in the commit message. Something along the lines that ACPI PCIHP
exists to support guests without SHPC support on the PCI-based
PC machine. Considering that Q35 provides native
PCI-E hotplug, there is no need to add ACPI hotplug there.


with commit message fixed

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  include/hw/i386/pc.h  |  1 -
>  hw/acpi/pcihp.c       | 10 ++++------
>  hw/pci-host/piix.c    |  8 --------
>  stubs/pci-host-piix.c |  6 ------
>  stubs/Makefile.objs   |  1 -
>  5 files changed, 4 insertions(+), 22 deletions(-)
>  delete mode 100644 stubs/pci-host-piix.c
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 44cb6bf3f3..8e5f1464eb 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
>                      MemoryRegion *pci_memory,
>                      MemoryRegion *ram_memory);
>  
> -PCIBus *find_i440fx(void);
>  /* piix4.c */
>  extern PCIDevice *piix4_dev;
>  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 80d42e12ff..254b2e50ab 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void *opaque)
>      return bsel_alloc;
>  }
>  
> -static void acpi_set_pci_info(void)
> +static void acpi_set_pci_info(AcpiPciHpState *s)
>  {
>      static bool bsel_is_set;
> -    PCIBus *bus;
>      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
>  
>      if (bsel_is_set) {
> @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
>      }
>      bsel_is_set = true;
>  
> -    bus = find_i440fx(); /* TODO: Q35 support */
> -    if (bus) {
> +    if (s->root) {
>          /* Scan all PCI buses. Set property to enable acpi based hotplug. */
> -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
> +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL, &bsel_alloc);
>      }
>  }
>  
> @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState *s)
>  
>  void acpi_pcihp_reset(AcpiPciHpState *s)
>  {
> -    acpi_set_pci_info();
> +    acpi_set_pci_info(s);
>      acpi_pcihp_update(s);
>  }
>  
> diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> index 47293a3915..658460264b 100644
> --- a/hw/pci-host/piix.c
> +++ b/hw/pci-host/piix.c
> @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
>      return b;
>  }
>  
> -PCIBus *find_i440fx(void)
> -{
> -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> -                                   object_resolve_path("/machine/i440fx", NULL),
> -                                   TYPE_PCI_HOST_BRIDGE);
> -    return s ? s->bus : NULL;
> -}
> -
>  /* PIIX3 PCI to ISA bridge */
>  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
>  {
> diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> deleted file mode 100644
> index 6ed81b1f21..0000000000
> --- a/stubs/pci-host-piix.c
> +++ /dev/null
> @@ -1,6 +0,0 @@
> -#include "qemu/osdep.h"
> -#include "hw/i386/pc.h"
> -PCIBus *find_i440fx(void)
> -{
> -    return NULL;
> -}
> diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> index 5dd0aeeec6..725f78bedc 100644
> --- a/stubs/Makefile.objs
> +++ b/stubs/Makefile.objs
> @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
>  stub-obj-y += vmgenid.o
>  stub-obj-y += xen-common.o
>  stub-obj-y += xen-hvm.o
> -stub-obj-y += pci-host-piix.o
>  stub-obj-y += ram-block.o
>  stub-obj-y += ramfb.o


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 22/24] hw: pci-host: piix: Return PCI host pointer instead of PCI bus
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-16 11:09     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16 11:09 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost,
	Auger Eric, Andrew Jones

On Mon,  5 Nov 2018 02:40:45 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

All the remaining patches are a bit out of order;
they should be around patch 12/24, where you started touching the MCFG code.

> For building the MCFG table, we need to track a given machine
> type PCI host pointer, and we can't get it from the bus pointer alone.
> As piix returns a PCI bus pointer, we simply modify its builder to
> return a PCI host pointer instead.
The PC machine doesn't build MCFG, so we don't really need it to provide
this pointer. Having this patch doesn't hurt, but I'm not sure we
need it.

CCing ARM folks since we are talking about generalizing MCFG table generation.

We have the following invariants wrt using MCFG:

   pc [pci_host != NULL] -> bail out early + do not build MCFG

   pc [pci_host == NULL] -> would explode if not for the [has_acpi_build = false] guard;
                            should be: do not even call acpi_get_mcfg().

   q35 [pci_host == NULL] -> not valid combo and must assert

   q35 [pci_host != NULL && mcfg_base != PCIE_BASE_ADDR_UNMAPPED]
        generate MCFG using mcfg_base/size

   q35 [pci_host != NULL && mcfg_base == PCIE_BASE_ADDR_UNMAPPED]
        generate place-holder 'QEMU' table for legacy machine versions without
        resizable MemoryRegion support.
        Mapped/not mapped could be dynamic across reboots, so we need
        access to PCIE(pci_host) to fetch current values.

   arm/virt gpex [memmap[ecam_id].base/size] does build MCFG:
        a hacked-up variant that doesn't use the pci_host mcfg_base/size fields.
        Not sure if it's possible to disable ECAM as on q35 (does it need any fixing?).
        We should fix arm/virt to use the pci-host mcfg_base/size so we
        could reuse the PCIE_HOST_MCFG_BASE/PCIE_HOST_MCFG_SIZE properties
        on ARM and a generic build_mcfg().

So we have quite a mess here; the current acpi_get_mcfg() does 2 things:
  1. indirectly checks that pci_host is PCI-E (presence of the PCIE_HOST_MCFG_BASE property)
  2. fetches mcfg_base/size if it's a PCI-E host

and i386/build_mcfg() is called only when #1 is true.

As far as I can see, we use pci_host only to fetch mcfg_base/size and not for
anything else.
Maybe as a refactoring plan we should:
 * pass the PCI-E host pointer to acpi_setup(PCIHostState *pcie_host) as an argument,
   which for PC will be NULL and for the rest is set to the q35/gpex/... PCI-E host.
 * call build_mcfg() only if pcie_host != NULL
   (no more indirect guessing based on the presence of the PCIE_HOST_MCFG_BASE property).
 * move the MCFG/QEMU table signature decision out of build_mcfg() and pass
   it as an argument, 'build_mcfg(..., char *mcfg_signature)'. That moves the masking
   table quirk out into the caller, where q35 can decide to change the signature
   if ECAM is not mapped. The rest (arm|i386/virt) will always pass 'MCFG'.
   Or, even better, if ECAM is mapped, create MCFG (with the masking trick if the q35
   machine is old and does not support resizable MemoryRegions). A rough sketch
   of that caller logic follows below.
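
To make that concrete, here is a rough, untested sketch of the caller side.
The build_mcfg() variant taking a signature argument is the *proposed*
(hypothetical) one, and AcpiMcfgInfo is assumed to be exported as this series
does; the property helpers are the existing QEMU ones
(qom/object.h, hw/pci/pcie_host.h):

    /* Sketch only: illustrates the plan above, not an actual patch. */
    static void acpi_add_mcfg(GArray *table_data, BIOSLinker *linker,
                              PCIHostState *pcie_host)
    {
        AcpiMcfgInfo mcfg;
        const char *sig;

        if (!pcie_host) {
            return; /* e.g. plain PC: no MCFG at all */
        }

        mcfg.mcfg_base = object_property_get_uint(OBJECT(pcie_host),
                                                  PCIE_HOST_MCFG_BASE, NULL);
        mcfg.mcfg_size = object_property_get_uint(OBJECT(pcie_host),
                                                  PCIE_HOST_MCFG_SIZE, NULL);

        /* The caller, not build_mcfg(), decides on the masking quirk. */
        sig = (mcfg.mcfg_base == PCIE_BASE_ADDR_UNMAPPED) ? "QEMU" : "MCFG";

        build_mcfg(table_data, linker, &mcfg, sig);
    }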

> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/i386/pc.h | 21 +++++++++++----------
>  hw/i386/pc_piix.c    | 18 +++++++++++-------
>  hw/pci-host/piix.c   | 24 ++++++++++++------------
>  3 files changed, 34 insertions(+), 29 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 8e5f1464eb..b6b79e146d 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -244,16 +244,17 @@ typedef struct PCII440FXState PCII440FXState;
>   */
>  #define RCR_IOPORT 0xcf9
>  
> -PCIBus *i440fx_init(const char *host_type, const char *pci_type,
> -                    PCII440FXState **pi440fx_state, int *piix_devfn,
> -                    ISABus **isa_bus, qemu_irq *pic,
> -                    MemoryRegion *address_space_mem,
> -                    MemoryRegion *address_space_io,
> -                    ram_addr_t ram_size,
> -                    ram_addr_t below_4g_mem_size,
> -                    ram_addr_t above_4g_mem_size,
> -                    MemoryRegion *pci_memory,
> -                    MemoryRegion *ram_memory);
> +struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
> +                                 PCII440FXState **pi440fx_state,
> +                                 int *piix_devfn,
> +                                 ISABus **isa_bus, qemu_irq *pic,
> +                                 MemoryRegion *address_space_mem,
> +                                 MemoryRegion *address_space_io,
> +                                 ram_addr_t ram_size,
> +                                 ram_addr_t below_4g_mem_size,
> +                                 ram_addr_t above_4g_mem_size,
> +                                 MemoryRegion *pci_memory,
> +                                 MemoryRegion *ram_memory);
>  
>  /* piix4.c */
>  extern PCIDevice *piix4_dev;
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 0620d10715..f5b139a3eb 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -32,6 +32,7 @@
>  #include "hw/display/ramfb.h"
>  #include "hw/smbios/smbios.h"
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_host.h"
>  #include "hw/pci/pci_ids.h"
>  #include "hw/usb.h"
>  #include "net/net.h"
> @@ -75,6 +76,7 @@ static void pc_init1(MachineState *machine,
>      MemoryRegion *system_memory = get_system_memory();
>      MemoryRegion *system_io = get_system_io();
>      int i;
> +    struct PCIHostState *pci_host;
>      PCIBus *pci_bus;
>      ISABus *isa_bus;
>      PCII440FXState *i440fx_state;
> @@ -196,15 +198,17 @@ static void pc_init1(MachineState *machine,
>      }
>  
>      if (pcmc->pci_enabled) {
> -        pci_bus = i440fx_init(host_type,
> -                              pci_type,
> -                              &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
> -                              system_memory, system_io, machine->ram_size,
> -                              acpi_conf->below_4g_mem_size,
> -                              acpi_conf->above_4g_mem_size,
> -                              pci_memory, ram_memory);
> +        pci_host = i440fx_init(host_type,
> +                               pci_type,
> +                               &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
> +                               system_memory, system_io, machine->ram_size,
> +                               acpi_conf->below_4g_mem_size,
> +                               acpi_conf->above_4g_mem_size,
> +                               pci_memory, ram_memory);
> +        pci_bus = pci_host->bus;
>          pcms->bus = pci_bus;
>      } else {
> +        pci_host = NULL;
>          pci_bus = NULL;
>          i440fx_state = NULL;
>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
> diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> index 658460264b..4a412db44c 100644
> --- a/hw/pci-host/piix.c
> +++ b/hw/pci-host/piix.c
> @@ -342,17 +342,17 @@ static void i440fx_realize(PCIDevice *dev, Error **errp)
>      }
>  }
>  
> -PCIBus *i440fx_init(const char *host_type, const char *pci_type,
> -                    PCII440FXState **pi440fx_state,
> -                    int *piix3_devfn,
> -                    ISABus **isa_bus, qemu_irq *pic,
> -                    MemoryRegion *address_space_mem,
> -                    MemoryRegion *address_space_io,
> -                    ram_addr_t ram_size,
> -                    ram_addr_t below_4g_mem_size,
> -                    ram_addr_t above_4g_mem_size,
> -                    MemoryRegion *pci_address_space,
> -                    MemoryRegion *ram_memory)
> +struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
> +                                 PCII440FXState **pi440fx_state,
> +                                 int *piix3_devfn,
> +                                 ISABus **isa_bus, qemu_irq *pic,
> +                                 MemoryRegion *address_space_mem,
> +                                 MemoryRegion *address_space_io,
> +                                 ram_addr_t ram_size,
> +                                 ram_addr_t below_4g_mem_size,
> +                                 ram_addr_t above_4g_mem_size,
> +                                 MemoryRegion *pci_address_space,
> +                                 MemoryRegion *ram_memory)
>  {
>      DeviceState *dev;
>      PCIBus *b;
> @@ -442,7 +442,7 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
>  
>      i440fx_update_memory_mappings(f);
>  
> -    return b;
> +    return s;
>  }
>  
>  /* PIIX3 PCI to ISA bridge */


* Re: [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface
  2018-11-05  1:40   ` Samuel Ortiz
@ 2018-11-16 16:02     ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16 16:02 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:43 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> In order to decouple ACPI APIs from specific machine types, we are
> creating an ACPI builder interface that each ACPI platform can choose to
> implement.
> This way, a new machine type can re-use the high level ACPI APIs and
> define some custom table build methods, without having to duplicate most
> of the existing implementation only to add small variations to it.
I'm not sure about the motivation behind such high-level APIs;
what's obvious here is an extra level of indirection with no clear gain.

Yep, using table callbacks one can attempt to generalize
acpi_setup() and help boards decide which tables not to build
(MCFG comes to mind). But I'm not convinced that acpi_setup()
could be cleanly generalized as a whole (probably some parts but
not everything), so it's a minor benefit for the extra headache of
figuring out which callback will actually be called when reading the code.

However, if a board needs a slightly different table, it will have to
duplicate an existing one and then modify it to suit its needs.

To me it pretty much looks the same as the build_foo() calls
we use now, but with an extra indirection level and then
duplicating the latter for usage in another board in a slightly
different manner.

I agree with Paolo's suggestion to use interfaces for generalization;
however, I'd suggest a fine-grained approach for providing board/target
specific items/actions for generic tables.

For example, take a look at the AcpiDeviceIfClass interface that is
implemented by GPE devices, and its madt_cpu() method. That should
simplify generalizing CPU hotplug for arm/virt and also help
to generalize build_madt(). It's not the cleanest implementation by far,
but it is headed in the right generic direction. I have it on my TODO list
to generalize the ACPI parts of build_fadt()/CPU hotplug
some day, as part of the project that adds CPU hotplug to the arm/virt board.

The GPE/GED device might not be the ideal place to implement that interface
(it worked for pc/q35), and maybe we should move it to the machine level,
as the board has access to much more data for building ACPI tables.
For i386/virt, I'd extend/modify AcpiDeviceIfClass when/where it's
necessary instead of adding high-level table hooks.
That way the generic build_foo() APIs would share much more code.
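
To make the contrast concrete, here is a rough sketch (not a working patch)
of a generic MADT builder driven by the existing fine-grained
AcpiDeviceIfClass::madt_cpu hook instead of a whole-table callback.
build_madt_generic() itself and the entries it elides are illustrative;
the helpers it calls are the existing ones from hw/acpi/acpi_dev_interface.h,
hw/acpi/aml-build.h and hw/boards.h:

    /* Illustrative only: a shared MADT builder that delegates just the
     * board/target specific per-CPU entries to AcpiDeviceIfClass. */
    static void build_madt_generic(GArray *table_data, BIOSLinker *linker,
                                   MachineState *ms, AcpiDeviceIf *adev)
    {
        AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(adev);
        MachineClass *mc = MACHINE_GET_CLASS(ms);
        const CPUArchIdList *cpus = mc->possible_cpu_arch_ids(ms);
        int madt_start = table_data->len;
        int i;

        acpi_data_push(table_data, sizeof(AcpiMultipleApicTable));

        for (i = 0; i < cpus->len; i++) {
            /* x86 emits a LAPIC entry here, arm/virt would emit a GICC one */
            adevc->madt_cpu(adev, i, cpus, table_data);
        }

        /* ... interrupt controller / distributor entries would follow ... */

        build_header(linker, table_data,
                     (AcpiTableHeader *)(table_data->data + madt_start),
                     "APIC", table_data->len - madt_start, 1, NULL, NULL);
    }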

> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> ---
>  include/hw/acpi/builder.h | 100 ++++++++++++++++++++++++++++++++++++++
>  hw/acpi/builder.c         |  97 ++++++++++++++++++++++++++++++++++++
>  hw/acpi/Makefile.objs     |   1 +
>  3 files changed, 198 insertions(+)
>  create mode 100644 include/hw/acpi/builder.h
>  create mode 100644 hw/acpi/builder.c
> 
> diff --git a/include/hw/acpi/builder.h b/include/hw/acpi/builder.h
> new file mode 100644
> index 0000000000..a63b88ffe9
> --- /dev/null
> +++ b/include/hw/acpi/builder.h
> @@ -0,0 +1,100 @@
> +/*
> + *
> + * Copyright (c) 2018 Intel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef ACPI_BUILDER_H
> +#define ACPI_BUILDER_H
> +
> +#include "qemu/osdep.h"
> +#include "hw/acpi/bios-linker-loader.h"
> +#include "qom/object.h"
> +
> +#define TYPE_ACPI_BUILDER "acpi-builder"
> +
> +#define ACPI_BUILDER_METHODS(klass) \
> +     OBJECT_CLASS_CHECK(AcpiBuilderMethods, (klass), TYPE_ACPI_BUILDER)
> +#define ACPI_BUILDER_GET_METHODS(obj) \
> +     OBJECT_GET_CLASS(AcpiBuilderMethods, (obj), TYPE_ACPI_BUILDER)
> +#define ACPI_BUILDER(obj)                                       \
> +     INTERFACE_CHECK(AcpiBuilder, (obj), TYPE_ACPI_BUILDER)
> +
> +typedef struct AcpiConfiguration AcpiConfiguration;
> +typedef struct AcpiBuildState AcpiBuildState;
> +typedef struct AcpiMcfgInfo AcpiMcfgInfo;
> +
> +typedef struct AcpiBuilder {
> +    /* <private> */
> +    Object Parent;
> +} AcpiBuilder;
> +
> +/**
> + * AcpiBuildMethods:
> + *
> + * Interface to be implemented by a machine type that needs to provide
> + * custom ACPI tables build method.
> + *
> + * @parent: Opaque parent interface.
> + * @rsdp: ACPI RSDP (Root System Description Pointer) table build callback.
> + * @madt: ACPI MADT (Multiple APIC Description Table) table build callback.
> + * @mcfg: ACPI MCFG table build callback.
> + * @srat: ACPI SRAT (System/Static Resource Affinity Table)
> + *        table build callback.
> + * @slit: ACPI SLIT (System Locality System Information Table)
> + *        table build callback.
> + * @configuration: ACPI configuration getter.
> + *                 This is used to query the machine instance for its
> + *                 AcpiConfiguration pointer.
> + */
> +typedef struct AcpiBuilderMethods {
> +    /* <private> */
> +    InterfaceClass parent;
> +
> +    /* <public> */
> +    void (*rsdp)(GArray *table_data, BIOSLinker *linker,
> +                 unsigned rsdt_tbl_offset);
> +    void (*madt)(GArray *table_data, BIOSLinker *linker,
> +                 MachineState *ms, AcpiConfiguration *conf);
> +    void (*mcfg)(GArray *table_data, BIOSLinker *linker,
> +                 AcpiMcfgInfo *info);
> +    void (*srat)(GArray *table_data, BIOSLinker *linker,
> +                 MachineState *machine, AcpiConfiguration *conf);
> +    void (*slit)(GArray *table_data, BIOSLinker *linker);
> +
> +    AcpiConfiguration *(*configuration)(AcpiBuilder *builder);
> +} AcpiBuilderMethods;
> +
> +void acpi_builder_rsdp(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       unsigned rsdt_tbl_offset);
> +
> +void acpi_builder_madt(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       MachineState *ms, AcpiConfiguration *conf);
> +
> +void acpi_builder_mcfg(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       AcpiMcfgInfo *info);
> +
> +void acpi_builder_srat(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       MachineState *machine, AcpiConfiguration *conf);
> +
> +void acpi_builder_slit(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker);
> +
> +AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder);
> +
> +#endif
> diff --git a/hw/acpi/builder.c b/hw/acpi/builder.c
> new file mode 100644
> index 0000000000..c29a614793
> --- /dev/null
> +++ b/hw/acpi/builder.c
> @@ -0,0 +1,97 @@
> +/*
> + *
> + * Copyright (c) 2018 Intel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/module.h"
> +#include "qom/object.h"
> +#include "hw/acpi/builder.h"
> +
> +void acpi_builder_rsdp(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       unsigned rsdt_tbl_offset)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +
> +    if (abm && abm->rsdp) {
> +        abm->rsdp(table_data, linker, rsdt_tbl_offset);
> +    }
> +}
> +
> +void acpi_builder_madt(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       MachineState *ms, AcpiConfiguration *conf)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +
> +    if (abm && abm->madt) {
> +        abm->madt(table_data, linker, ms, conf);
> +    }
> +}
> +
> +void acpi_builder_mcfg(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       AcpiMcfgInfo *info)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +
> +    if (abm && abm->mcfg) {
> +        abm->mcfg(table_data, linker, info);
> +    }
> +}
> +
> +void acpi_builder_srat(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker,
> +                       MachineState *machine, AcpiConfiguration *conf)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +
> +    if (abm && abm->srat) {
> +        abm->srat(table_data, linker, machine, conf);
> +    }
> +}
> +
> +void acpi_builder_slit(AcpiBuilder *builder,
> +                       GArray *table_data, BIOSLinker *linker)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +
> +    if (abm && abm->slit) {
> +        abm->slit(table_data, linker);
> +    }
> +}
> +
> +AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder)
> +{
> +    AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);
> +    if (abm && abm->configuration) {
> +        return abm->configuration(builder);
> +    }
> +    return NULL;
> +}
> +
> +static const TypeInfo acpi_builder_info = {
> +    .name          = TYPE_ACPI_BUILDER,
> +    .parent        = TYPE_INTERFACE,
> +    .class_size    = sizeof(AcpiBuilderMethods),
> +};
> +
> +static void acpi_builder_register_type(void)
> +{
> +    type_register_static(&acpi_builder_info);
> +}
> +
> +type_init(acpi_builder_register_type)
> diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
> index 11c35bcb44..2f383adc6f 100644
> --- a/hw/acpi/Makefile.objs
> +++ b/hw/acpi/Makefile.objs
> @@ -11,6 +11,7 @@ common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
>  common-obj-y += acpi_interface.o
>  common-obj-y += bios-linker-loader.o
>  common-obj-y += aml-build.o
> +common-obj-y += builder.o
>  
>  common-obj-$(CONFIG_IPMI) += ipmi.o
>  common-obj-$(call lnot,$(CONFIG_IPMI)) += ipmi-stub.o


* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-05  1:40 ` Samuel Ortiz
@ 2018-11-16 16:29   ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-16 16:29 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Mon,  5 Nov 2018 02:40:23 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> This patch set provides an ACPI code reorganization in preparation for
> adding a shared hardware-reduced ACPI API to QEMU.
> 
> The changes are coming from the NEMU [1] project where we're defining
> a new x86 machine type: i386/virt. This is an EFI only, ACPI
> hardware-reduced platform that is built on top of a generic
> hardware-reduced ACPI API [2]. This API was initially based off the
> generic parts of the arm/virt-acpi-build.c implementation, and the goal
> is for both i386/virt and arm/virt to duplicate as little code as
> possible by using this new, shared API.
> 
> As a preliminary for adding this hardware-reduced ACPI API to QEMU, we did
> some ACPI code reorganization with the following goals:
> 
> * Share as much as possible of the current ACPI build APIs between
>   legacy and hardware-reduced ACPI.
> * Share the ACPI build code across machine types and architectures and
>   remove the typical PC machine type dependency.
> 
> The patches are also available in their own git branch [3].
I think I'm done with reviewing this patchset. To sum up,
thanks for trying to generalize the ACPI parts. It is not implemented in
an exactly generic way and the patches aren't split perfectly, but
we can work on it.

General suggestions for this series:
  1. Preferably don't do multiple changes within a patch,
     nor post huge patches (unless it's pure code movement).
     (It's easy to squash patches later if necessary.)
  2. Start small: pick a table, generalize it and send it as
     one small patchset. Tables are often independent,
     and it's much easier on both author and reviewer to agree upon
     changes and rewrite them if necessary.
  3. When you think about refactoring ACPI into a generic API,
     think of it as routines that go into a separate library
     (pure ACPI spec code) plus QEMU/ACPI glue routines, and
     divide them correspondingly (a minimal sketch of that split follows below).
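
As a small illustration of that split, using MCFG as the example: the names
below are made up just to show the layering, while PCIExpressHost and its
base_addr/size fields are the existing ones from hw/pci/pcie_host.h.

    /* "library" side: pure ACPI spec encoding, knows nothing about QEMU machines */
    void acpi_build_mcfg(GArray *table_data, BIOSLinker *linker,
                         uint64_t ecam_base, uint64_t ecam_size);

    /* "glue" side: digs the values out of the QEMU PCI-E host object */
    static void acpi_glue_add_mcfg(GArray *table_data, BIOSLinker *linker,
                                   PCIExpressHost *host)
    {
        acpi_build_mcfg(table_data, linker, host->base_addr, host->size);
    }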

> [1] https://github.com/intel/nemu
> [2] https://github.com/intel/nemu/blob/topic/virt-x86/hw/acpi/reduced.c
> [3] https://github.com/intel/nemu/tree/topic/upstream/acpi
> 
> v1 -> v2:
>    * Drop the hardware-reduced implementation for now. Our next patch
>    * set
>      will add hardware-reduced and convert arm/virt to it.
>    * Implement the ACPI build methods as a QOM Interface Class and
>    * convert
>      the PC machine type to it.
>    * acpi_conf_pc_init() uses a PCMachineState pointer and not a
>      MachineState one as its argument.
> 
> v2 -> v3:
>    * Cc all relevant maintainers, no functional changes.
> 
> v3 -> v4:
>    * Renamed all AcpiConfiguration pointers from conf to acpi_conf.
>    * Removed the ACPI_BUILD_ALIGN_SIZE export.
>    * Temporarily updated the arm virt build_rsdp() prototype for
>      bisectability purposes.
>    * Removed unneeded pci headers from acpi-build.c.
>    * Refactor the acpi PCI host getter so that it truly is architecture
>      agnostic, by carrying the PCI host pointer through the
>      AcpiConfiguration structure.
>    * Splitted the PCI host AML builder API export patch from the PCI
>      host and holes getter one.
>    * Reduced the build_srat() export scope to hw/i386 instead of the
>      broader hw/acpi. SRAT builders are truly architecture specific
>      and can hardly be generalized.
>    * Completed the ACPI builder documentation.
> 
> v4 -> v5:
>    * Reorganize the ACPI RSDP export and XSDT implementation into 3
>      patches.
>    * Fix the hw/i386/acpi header inclusions.
> 
> Samuel Ortiz (16):
>   hw: i386: Decouple the ACPI build from the PC machine type
>   hw: acpi: Export ACPI build alignment API
>   hw: acpi: The RSDP build API can return void
>   hw: acpi: Export the RSDP build API
>   hw: acpi: Implement XSDT support for RSDP
>   hw: acpi: Factorize the RSDP build API implementation
>   hw: i386: Move PCI host definitions to pci_host.h
>   hw: acpi: Export the PCI host and holes getters
>   hw: acpi: Do not create hotplug method when handler is not defined
>   hw: i386: Make the hotpluggable memory size property more generic
>   hw: i386: Export the i386 ACPI SRAT build method
>   hw: i386: Export the MADT build method
>   hw: acpi: Define ACPI tables builder interface
>   hw: i386: Implement the ACPI builder interface for PC
>   hw: pci-host: piix: Return PCI host pointer instead of PCI bus
>   hw: i386: Set ACPI configuration PCI host pointer
> 
> Sebastien Boeuf (2):
>   hw: acpi: Export the PCI hotplug API
>   hw: acpi: Retrieve the PCI bus from AcpiPciHpState
> 
> Yang Zhong (6):
>   hw: acpi: Generalize AML build routines
>   hw: acpi: Factorize _OSC AML across architectures
>   hw: acpi: Export and generalize the PCI host AML API
>   hw: acpi: Export the MCFG getter
>   hw: acpi: Fix memory hotplug AML generation error
>   hw: i386: Refactor PCI host getter
> 
>  hw/i386/acpi-build.h           |    9 +-
>  include/hw/acpi/acpi-defs.h    |   14 +
>  include/hw/acpi/acpi.h         |   44 ++
>  include/hw/acpi/aml-build.h    |   47 ++
>  include/hw/acpi/builder.h      |  100 +++
>  include/hw/i386/acpi.h         |   28 +
>  include/hw/i386/pc.h           |   49 +-
>  include/hw/mem/memory-device.h |    2 +
>  include/hw/pci/pci_host.h      |    6 +
>  hw/acpi/aml-build.c            |  981 +++++++++++++++++++++++++++++
>  hw/acpi/builder.c              |   97 +++
>  hw/acpi/cpu.c                  |    8 +-
>  hw/acpi/cpu_hotplug.c          |    9 +-
>  hw/acpi/memory_hotplug.c       |   21 +-
>  hw/acpi/pcihp.c                |   10 +-
>  hw/arm/virt-acpi-build.c       |   93 +--
>  hw/i386/acpi-build.c           | 1072 +++-----------------------------
>  hw/i386/pc.c                   |  198 +++---
>  hw/i386/pc_piix.c              |   36 +-
>  hw/i386/pc_q35.c               |   22 +-
>  hw/i386/xen/xen-hvm.c          |   19 +-
>  hw/pci-host/piix.c             |   32 +-
>  stubs/pci-host-piix.c          |    6 -
>  hw/acpi/Makefile.objs          |    1 +
>  stubs/Makefile.objs            |    1 -
>  25 files changed, 1644 insertions(+), 1261 deletions(-)
>  create mode 100644 include/hw/acpi/builder.h
>  create mode 100644 include/hw/i386/acpi.h
>  create mode 100644 hw/acpi/builder.c
>  delete mode 100644 stubs/pci-host-piix.c
> 


* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-16 16:29   ` Igor Mammedov
@ 2018-11-16 16:37     ` Paolo Bonzini
  -1 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-16 16:37 UTC (permalink / raw)
  To: Igor Mammedov, Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Anthony Perard,
	xen-devel, Richard Henderson

On 16/11/18 17:29, Igor Mammedov wrote:
> General suggestions for this series:
>   1. Preferably don't do multiple changes within a patch
>      neither post huge patches (unless it's pure code movement).
>      (it's easy to squash patches later it necessary)
>   2. Start small, pick a table generalize it and send as
>      one small patchset. Tables are often independent
>      and it's much easier on both author/reviewer to agree upon
>      changes and rewrite it if necessary.

How would that be done?  This series is on the bigger side, agreed, but
most of it is really just code movement.  It's a starting point; having
a generic ACPI library is way beyond what this is trying to do.

Paolo

>   3. when you think about refactoring acpi into a generic API
>      think about it as routines that go into a separate library
>      (pure acpi spec code) and qemu/acpi glue routines and
>       divide them correspondingly.


* Re: [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-16  9:39     ` Igor Mammedov
  (?)
@ 2018-11-16 19:42     ` Boeuf, Sebastien
  2018-11-19 15:37         ` Igor Mammedov
  -1 siblings, 1 reply; 170+ messages in thread
From: Boeuf, Sebastien @ 2018-11-16 19:42 UTC (permalink / raw)
  To: sameo, imammedo
  Cc: peter.maydell, anthony.perard, sstabellini, jing2.liu, mst,
	qemu-devel, ehabkost, shannon.zhaosl, pbonzini, qemu-arm, rth,
	marcel.apfelbaum, xen-devel

Hi Igor,

On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:42 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > 
> > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > 
> > Instead of using the machine type specific method find_i440fx() to
> > retrieve the PCI bus, this commit aims to rely on the fact that the
> > PCI bus is known by the structure AcpiPciHpState.
> > 
> > When the structure is initialized through acpi_pcihp_init() call,
> > it saves the PCI bus, which means there is no need to invoke a
> > special function later on.
> > 
> > Based on the fact that find_i440fx() was only used there, this
> > patch also removes the function find_i440fx() itself from the
> > entire codebase.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
> Thanks for cleaning it up
> 
> minor nit:
> Taking into account that you're removing the '/* TODO: Q35 support */'
> comment along with find_i440fx(), it might be worth mentioning that
> in this commit message. Something along the lines of: ACPI PCIHP
> exists to support guests without SHPC support on the PCI
> based PC machine. Considering that Q35 provides native
> PCI-E hotplug, there is no need to add ACPI hotplug there.

Oh yes, sure, we can update the commit message :). But I just wanted to
mention that the 'pc' machine type uses ACPI PCIHP and does support
SHPC, so they're not mutually exclusive.

> 
> 
> with commit message fixed
> 
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> 
> > 
> > ---
> >  include/hw/i386/pc.h  |  1 -
> >  hw/acpi/pcihp.c       | 10 ++++------
> >  hw/pci-host/piix.c    |  8 --------
> >  stubs/pci-host-piix.c |  6 ------
> >  stubs/Makefile.objs   |  1 -
> >  5 files changed, 4 insertions(+), 22 deletions(-)
> >  delete mode 100644 stubs/pci-host-piix.c
> > 
> > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > index 44cb6bf3f3..8e5f1464eb 100644
> > --- a/include/hw/i386/pc.h
> > +++ b/include/hw/i386/pc.h
> > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > const char *pci_type,
> >                      MemoryRegion *pci_memory,
> >                      MemoryRegion *ram_memory);
> >  
> > -PCIBus *find_i440fx(void);
> >  /* piix4.c */
> >  extern PCIDevice *piix4_dev;
> >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > index 80d42e12ff..254b2e50ab 100644
> > --- a/hw/acpi/pcihp.c
> > +++ b/hw/acpi/pcihp.c
> > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > *opaque)
> >      return bsel_alloc;
> >  }
> >  
> > -static void acpi_set_pci_info(void)
> > +static void acpi_set_pci_info(AcpiPciHpState *s)
> >  {
> >      static bool bsel_is_set;
> > -    PCIBus *bus;
> >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> >  
> >      if (bsel_is_set) {
> > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> >      }
> >      bsel_is_set = true;
> >  
> > -    bus = find_i440fx(); /* TODO: Q35 support */
> > -    if (bus) {
> > +    if (s->root) {
> >          /* Scan all PCI buses. Set property to enable acpi based
> > hotplug. */
> > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > &bsel_alloc);
> > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL,
> > &bsel_alloc);
> >      }
> >  }
> >  
> > @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState
> > *s)
> >  
> >  void acpi_pcihp_reset(AcpiPciHpState *s)
> >  {
> > -    acpi_set_pci_info();
> > +    acpi_set_pci_info(s);
> >      acpi_pcihp_update(s);
> >  }
> >  
> > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > index 47293a3915..658460264b 100644
> > --- a/hw/pci-host/piix.c
> > +++ b/hw/pci-host/piix.c
> > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > const char *pci_type,
> >      return b;
> >  }
> >  
> > -PCIBus *find_i440fx(void)
> > -{
> > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > -                                   object_resolve_path("/machine/i
> > 440fx", NULL),
> > -                                   TYPE_PCI_HOST_BRIDGE);
> > -    return s ? s->bus : NULL;
> > -}
> > -
> >  /* PIIX3 PCI to ISA bridge */
> >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> >  {
> > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > deleted file mode 100644
> > index 6ed81b1f21..0000000000
> > --- a/stubs/pci-host-piix.c
> > +++ /dev/null
> > @@ -1,6 +0,0 @@
> > -#include "qemu/osdep.h"
> > -#include "hw/i386/pc.h"
> > -PCIBus *find_i440fx(void)
> > -{
> > -    return NULL;
> > -}
> > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > index 5dd0aeeec6..725f78bedc 100644
> > --- a/stubs/Makefile.objs
> > +++ b/stubs/Makefile.objs
> > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> >  stub-obj-y += vmgenid.o
> >  stub-obj-y += xen-common.o
> >  stub-obj-y += xen-hvm.o
> > -stub-obj-y += pci-host-piix.o
> >  stub-obj-y += ram-block.o
> >  stub-obj-y += ramfb.o

Thanks,
Sebastien

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-16  9:39     ` Igor Mammedov
  (?)
  (?)
@ 2018-11-16 19:42     ` Boeuf, Sebastien
  -1 siblings, 0 replies; 170+ messages in thread
From: Boeuf, Sebastien @ 2018-11-16 19:42 UTC (permalink / raw)
  To: sameo, imammedo
  Cc: peter.maydell, sstabellini, ehabkost, mst, jing2.liu, qemu-devel,
	shannon.zhaosl, qemu-arm, marcel.apfelbaum, xen-devel,
	anthony.perard, pbonzini, rth

Hi Igor,

On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:42 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > 
> > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > 
> > Instead of using the machine type specific method find_i440fx() to
> > retrieve the PCI bus, this commit aims to rely on the fact that the
> > PCI bus is known by the structure AcpiPciHpState.
> > 
> > When the structure is initialized through acpi_pcihp_init() call,
> > it saves the PCI bus, which means there is no need to invoke a
> > special function later on.
> > 
> > Based on the fact that find_i440fx() was only used there, this
> > patch also removes the function find_i440fx() itself from the
> > entire codebase.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>
> Thanks for cleaning it up
> 
> minor nit:
> Taking into account that you're removing the '/* TODO: Q35 support */'
> comment along with find_i440fx(), it might be worth mentioning that
> in this commit message. Something along the lines of: ACPI PCIHP
> exists to support guests without SHPC support on the PCI
> based PC machine. Considering that Q35 provides native
> PCI-E hotplug, there is no need to add ACPI hotplug there.

Oh yes, sure, we can update the commit message :). But I just wanted to
mention that the 'pc' machine type uses ACPI PCIHP and does support
SHPC, so they're not mutually exclusive.

> 
> 
> with commit message fixed
> 
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> 
> > 
> > ---
> >  include/hw/i386/pc.h  |  1 -
> >  hw/acpi/pcihp.c       | 10 ++++------
> >  hw/pci-host/piix.c    |  8 --------
> >  stubs/pci-host-piix.c |  6 ------
> >  stubs/Makefile.objs   |  1 -
> >  5 files changed, 4 insertions(+), 22 deletions(-)
> >  delete mode 100644 stubs/pci-host-piix.c
> > 
> > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > index 44cb6bf3f3..8e5f1464eb 100644
> > --- a/include/hw/i386/pc.h
> > +++ b/include/hw/i386/pc.h
> > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > const char *pci_type,
> >                      MemoryRegion *pci_memory,
> >                      MemoryRegion *ram_memory);
> >  
> > -PCIBus *find_i440fx(void);
> >  /* piix4.c */
> >  extern PCIDevice *piix4_dev;
> >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > index 80d42e12ff..254b2e50ab 100644
> > --- a/hw/acpi/pcihp.c
> > +++ b/hw/acpi/pcihp.c
> > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > *opaque)
> >      return bsel_alloc;
> >  }
> >  
> > -static void acpi_set_pci_info(void)
> > +static void acpi_set_pci_info(AcpiPciHpState *s)
> >  {
> >      static bool bsel_is_set;
> > -    PCIBus *bus;
> >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> >  
> >      if (bsel_is_set) {
> > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> >      }
> >      bsel_is_set = true;
> >  
> > -    bus = find_i440fx(); /* TODO: Q35 support */
> > -    if (bus) {
> > +    if (s->root) {
> >          /* Scan all PCI buses. Set property to enable acpi based
> > hotplug. */
> > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > &bsel_alloc);
> > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL,
> > &bsel_alloc);
> >      }
> >  }
> >  
> > @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState
> > *s)
> >  
> >  void acpi_pcihp_reset(AcpiPciHpState *s)
> >  {
> > -    acpi_set_pci_info();
> > +    acpi_set_pci_info(s);
> >      acpi_pcihp_update(s);
> >  }
> >  
> > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > index 47293a3915..658460264b 100644
> > --- a/hw/pci-host/piix.c
> > +++ b/hw/pci-host/piix.c
> > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > const char *pci_type,
> >      return b;
> >  }
> >  
> > -PCIBus *find_i440fx(void)
> > -{
> > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > -                                   object_resolve_path("/machine/i
> > 440fx", NULL),
> > -                                   TYPE_PCI_HOST_BRIDGE);
> > -    return s ? s->bus : NULL;
> > -}
> > -
> >  /* PIIX3 PCI to ISA bridge */
> >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> >  {
> > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > deleted file mode 100644
> > index 6ed81b1f21..0000000000
> > --- a/stubs/pci-host-piix.c
> > +++ /dev/null
> > @@ -1,6 +0,0 @@
> > -#include "qemu/osdep.h"
> > -#include "hw/i386/pc.h"
> > -PCIBus *find_i440fx(void)
> > -{
> > -    return NULL;
> > -}
> > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > index 5dd0aeeec6..725f78bedc 100644
> > --- a/stubs/Makefile.objs
> > +++ b/stubs/Makefile.objs
> > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> >  stub-obj-y += vmgenid.o
> >  stub-obj-y += xen-common.o
> >  stub-obj-y += xen-hvm.o
> > -stub-obj-y += pci-host-piix.o
> >  stub-obj-y += ram-block.o
> >  stub-obj-y += ramfb.o

Thanks,
Sebastien
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-16 16:37     ` Paolo Bonzini
@ 2018-11-19 15:31       ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-19 15:31 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Samuel Ortiz, qemu-devel, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Michael S. Tsirkin, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On Fri, 16 Nov 2018 17:37:54 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 16/11/18 17:29, Igor Mammedov wrote:
> > General suggestions for this series:
> >   1. Preferably don't do multiple changes within a patch
> >      nor post huge patches (unless it's pure code movement).
> >      (it's easy to squash patches later if necessary)
> >   2. Start small: pick a table, generalize it and send it as
> >      one small patchset. Tables are often independent
> >      and it's much easier on both author/reviewer to agree upon
> >      changes and rewrite it if necessary.  
> 
> How would that be done?  This series is on the bigger side, agreed, but
> most of it is really just code movement.  It's a starting point; having
> a generic ACPI library is way beyond what this is trying to do.
I've tried to give suggestions on how to restructure the series
on a per-patch basis. In my opinion it's quite possible to split
the series into several smaller ones, and that should really help
with making the series cleaner and easier/faster to review/amend/merge
vs what we have in v5.
(it's more frustrating to rework a large series vs a smaller one)

If something isn't clear, it's easy to reach out to me here
or directly (email/irc/github) for clarification/feedback.

> 
> Paolo
> 
> >   3. when you think about refactoring acpi into a generic API
> >      think about it as routines that go into a separate library
> >      (pure acpi spec code) and qemu/acpi glue routines and
> >       divide them correspondingly.  
> 

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-19 15:31       ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-19 15:31 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson, Eduardo Habkost

On Fri, 16 Nov 2018 17:37:54 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 16/11/18 17:29, Igor Mammedov wrote:
> > General suggestions for this series:
> >   1. Preferably don't do multiple changes within a patch
> >      nor post huge patches (unless it's pure code movement).
> >      (it's easy to squash patches later if necessary)
> >   2. Start small: pick a table, generalize it and send it as
> >      one small patchset. Tables are often independent
> >      and it's much easier on both author/reviewer to agree upon
> >      changes and rewrite it if necessary.  
> 
> How would that be done?  This series is on the bigger side, agreed, but
> most of it is really just code movement.  It's a starting point; having
> a generic ACPI library is way beyond what this is trying to do.
I've tried to give suggestions on how to restructure the series
on a per-patch basis. In my opinion it's quite possible to split
the series into several smaller ones, and that should really help
with making the series cleaner and easier/faster to review/amend/merge
vs what we have in v5.
(it's more frustrating to rework a large series vs a smaller one)

If something isn't clear, it's easy to reach out to me here
or directly (email/irc/github) for clarification/feedback.

> 
> Paolo
> 
> >   3. when you think about refactoring acpi into a generic API
> >      think about it as routines that go into a separate library
> >      (pure acpi spec code) and qemu/acpi glue routines and
> >       divide them correspondingly.  
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-16 19:42     ` [Qemu-devel] " Boeuf, Sebastien
@ 2018-11-19 15:37         ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-19 15:37 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: sameo, peter.maydell, anthony.perard, sstabellini, jing2.liu,
	mst, qemu-devel, ehabkost, shannon.zhaosl, pbonzini, qemu-arm,
	rth, marcel.apfelbaum, xen-devel

On Fri, 16 Nov 2018 19:42:08 +0000
"Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:

> Hi Igor,
> 
> On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:42 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > 
> > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > 
> > > Instead of using the machine type specific method find_i440fx() to
> > > retrieve the PCI bus, this commit aims to rely on the fact that the
> > > PCI bus is known by the structure AcpiPciHpState.
> > > 
> > > When the structure is initialized through acpi_pcihp_init() call,
> > > it saves the PCI bus, which means there is no need to invoke a
> > > special function later on.
> > > 
> > > Based on the fact that find_i440fx() was only used there, this
> > > patch also removes the function find_i440fx() itself from the
> > > entire codebase.
> > > 
> > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>  
> > Thanks for cleaning it up
> > 
> > minor nit:
> > Taking into account that you're removing the '/* TODO: Q35 support */'
> > comment along with find_i440fx(), it might be worth mentioning that
> > in this commit message. Something along the lines of: ACPI PCIHP
> > exists to support guests without SHPC support on the PCI
> > based PC machine. Considering that Q35 provides native
> > PCI-E hotplug, there is no need to add ACPI hotplug there.
> 
> Oh yes, sure, we can update the commit message :). But I just wanted to
> mention that the 'pc' machine type uses ACPI PCIHP and does support
> SHPC, so they're not mutually exclusive.
It supports both, but is it relevant to this patch?

The point was that one shouldn't remove something silently without
any justification/explanation, so that readers who come later
won't wonder about the reasons why the code was removed.
 
> > 
> > with commit message fixed
> > 
> > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> >   
> > > 
> > > ---
> > >  include/hw/i386/pc.h  |  1 -
> > >  hw/acpi/pcihp.c       | 10 ++++------
> > >  hw/pci-host/piix.c    |  8 --------
> > >  stubs/pci-host-piix.c |  6 ------
> > >  stubs/Makefile.objs   |  1 -
> > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > >  delete mode 100644 stubs/pci-host-piix.c
> > > 
> > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > index 44cb6bf3f3..8e5f1464eb 100644
> > > --- a/include/hw/i386/pc.h
> > > +++ b/include/hw/i386/pc.h
> > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > const char *pci_type,
> > >                      MemoryRegion *pci_memory,
> > >                      MemoryRegion *ram_memory);
> > >  
> > > -PCIBus *find_i440fx(void);
> > >  /* piix4.c */
> > >  extern PCIDevice *piix4_dev;
> > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > index 80d42e12ff..254b2e50ab 100644
> > > --- a/hw/acpi/pcihp.c
> > > +++ b/hw/acpi/pcihp.c
> > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > *opaque)
> > >      return bsel_alloc;
> > >  }
> > >  
> > > -static void acpi_set_pci_info(void)
> > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > >  {
> > >      static bool bsel_is_set;
> > > -    PCIBus *bus;
> > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > >  
> > >      if (bsel_is_set) {
> > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > >      }
> > >      bsel_is_set = true;
> > >  
> > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > -    if (bus) {
> > > +    if (s->root) {
> > >          /* Scan all PCI buses. Set property to enable acpi based
> > > hotplug. */
> > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > &bsel_alloc);
> > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL,
> > > &bsel_alloc);
> > >      }
> > >  }
> > >  
> > > @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState
> > > *s)
> > >  
> > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > >  {
> > > -    acpi_set_pci_info();
> > > +    acpi_set_pci_info(s);
> > >      acpi_pcihp_update(s);
> > >  }
> > >  
> > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > index 47293a3915..658460264b 100644
> > > --- a/hw/pci-host/piix.c
> > > +++ b/hw/pci-host/piix.c
> > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > const char *pci_type,
> > >      return b;
> > >  }
> > >  
> > > -PCIBus *find_i440fx(void)
> > > -{
> > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > -                                   object_resolve_path("/machine/i
> > > 440fx", NULL),
> > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > -    return s ? s->bus : NULL;
> > > -}
> > > -
> > >  /* PIIX3 PCI to ISA bridge */
> > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > >  {
> > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > deleted file mode 100644
> > > index 6ed81b1f21..0000000000
> > > --- a/stubs/pci-host-piix.c
> > > +++ /dev/null
> > > @@ -1,6 +0,0 @@
> > > -#include "qemu/osdep.h"
> > > -#include "hw/i386/pc.h"
> > > -PCIBus *find_i440fx(void)
> > > -{
> > > -    return NULL;
> > > -}
> > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > index 5dd0aeeec6..725f78bedc 100644
> > > --- a/stubs/Makefile.objs
> > > +++ b/stubs/Makefile.objs
> > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > >  stub-obj-y += vmgenid.o
> > >  stub-obj-y += xen-common.o
> > >  stub-obj-y += xen-hvm.o
> > > -stub-obj-y += pci-host-piix.o
> > >  stub-obj-y += ram-block.o
> > >  stub-obj-y += ramfb.o  
> 
> Thanks,
> Sebastien

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
@ 2018-11-19 15:37         ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-19 15:37 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: peter.maydell, sstabellini, sameo, mst, jing2.liu, qemu-devel,
	shannon.zhaosl, qemu-arm, marcel.apfelbaum, xen-devel,
	anthony.perard, pbonzini, rth, ehabkost

On Fri, 16 Nov 2018 19:42:08 +0000
"Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:

> Hi Igor,
> 
> On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:42 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > 
> > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > 
> > > Instead of using the machine type specific method find_i440fx() to
> > > retrieve the PCI bus, this commit aims to rely on the fact that the
> > > PCI bus is known by the structure AcpiPciHpState.
> > > 
> > > When the structure is initialized through acpi_pcihp_init() call,
> > > it saves the PCI bus, which means there is no need to invoke a
> > > special function later on.
> > > 
> > > Based on the fact that find_i440fx() was only used there, this
> > > patch also removes the function find_i440fx() itself from the
> > > entire codebase.
> > > 
> > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>  
> > Thanks for cleaning it up
> > 
> > minor nit:
> > Taking into account that you're removing the '/* TODO: Q35 support */'
> > comment along with find_i440fx(), it might be worth mentioning that
> > in this commit message. Something along the lines of: ACPI PCIHP
> > exists to support guests without SHPC support on the PCI
> > based PC machine. Considering that Q35 provides native
> > PCI-E hotplug, there is no need to add ACPI hotplug there.
> 
> Oh yes, sure, we can update the commit message :). But I just wanted to
> mention that the 'pc' machine type uses ACPI PCIHP and does support
> SHPC, so they're not mutually exclusive.
It supports both, but is it relevant to this patch?

The point was that one shouldn't remove something silently without
any justification/explanation, so that readers who come later
won't wonder about the reasons why the code was removed.
 
> > 
> > with commit message fixed
> > 
> > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> >   
> > > 
> > > ---
> > >  include/hw/i386/pc.h  |  1 -
> > >  hw/acpi/pcihp.c       | 10 ++++------
> > >  hw/pci-host/piix.c    |  8 --------
> > >  stubs/pci-host-piix.c |  6 ------
> > >  stubs/Makefile.objs   |  1 -
> > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > >  delete mode 100644 stubs/pci-host-piix.c
> > > 
> > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > index 44cb6bf3f3..8e5f1464eb 100644
> > > --- a/include/hw/i386/pc.h
> > > +++ b/include/hw/i386/pc.h
> > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > const char *pci_type,
> > >                      MemoryRegion *pci_memory,
> > >                      MemoryRegion *ram_memory);
> > >  
> > > -PCIBus *find_i440fx(void);
> > >  /* piix4.c */
> > >  extern PCIDevice *piix4_dev;
> > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > index 80d42e12ff..254b2e50ab 100644
> > > --- a/hw/acpi/pcihp.c
> > > +++ b/hw/acpi/pcihp.c
> > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > *opaque)
> > >      return bsel_alloc;
> > >  }
> > >  
> > > -static void acpi_set_pci_info(void)
> > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > >  {
> > >      static bool bsel_is_set;
> > > -    PCIBus *bus;
> > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > >  
> > >      if (bsel_is_set) {
> > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > >      }
> > >      bsel_is_set = true;
> > >  
> > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > -    if (bus) {
> > > +    if (s->root) {
> > >          /* Scan all PCI buses. Set property to enable acpi based
> > > hotplug. */
> > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > &bsel_alloc);
> > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL,
> > > &bsel_alloc);
> > >      }
> > >  }
> > >  
> > > @@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState
> > > *s)
> > >  
> > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > >  {
> > > -    acpi_set_pci_info();
> > > +    acpi_set_pci_info(s);
> > >      acpi_pcihp_update(s);
> > >  }
> > >  
> > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > index 47293a3915..658460264b 100644
> > > --- a/hw/pci-host/piix.c
> > > +++ b/hw/pci-host/piix.c
> > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > const char *pci_type,
> > >      return b;
> > >  }
> > >  
> > > -PCIBus *find_i440fx(void)
> > > -{
> > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > -                                   object_resolve_path("/machine/i
> > > 440fx", NULL),
> > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > -    return s ? s->bus : NULL;
> > > -}
> > > -
> > >  /* PIIX3 PCI to ISA bridge */
> > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > >  {
> > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > deleted file mode 100644
> > > index 6ed81b1f21..0000000000
> > > --- a/stubs/pci-host-piix.c
> > > +++ /dev/null
> > > @@ -1,6 +0,0 @@
> > > -#include "qemu/osdep.h"
> > > -#include "hw/i386/pc.h"
> > > -PCIBus *find_i440fx(void)
> > > -{
> > > -    return NULL;
> > > -}
> > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > index 5dd0aeeec6..725f78bedc 100644
> > > --- a/stubs/Makefile.objs
> > > +++ b/stubs/Makefile.objs
> > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > >  stub-obj-y += vmgenid.o
> > >  stub-obj-y += xen-common.o
> > >  stub-obj-y += xen-hvm.o
> > > -stub-obj-y += pci-host-piix.o
> > >  stub-obj-y += ram-block.o
> > >  stub-obj-y += ramfb.o  
> 
> Thanks,
> Sebastien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 15:31       ` Igor Mammedov
@ 2018-11-19 17:14         ` Paolo Bonzini
  -1 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-19 17:14 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Samuel Ortiz, qemu-devel, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Michael S. Tsirkin, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On 19/11/18 16:31, Igor Mammedov wrote:
> I've tried to give suggestions on how to restructure the series
> on a per-patch basis. In my opinion it's quite possible to split
> the series into several smaller ones, and that should really help
> with making the series cleaner and easier/faster to review/amend/merge
> vs what we have in v5.

This is true; on the other hand, the series makes sense together and,
even if the patches are more or less independent, they also all follow
the same "plan".  For reviewing v6, are you aware of Patchew's series
diff functionality?  It can tell you which patches had comments in v5,
reorder patches if applicable, and display deleted and new patches at
the right point in the series.

v4->v5 is a bit messed up because Samuel probably added a diff order
setup
(https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
but it's very useful in general.

Paolo

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-19 17:14         ` Paolo Bonzini
  0 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-19 17:14 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson, Eduardo Habkost

On 19/11/18 16:31, Igor Mammedov wrote:
> I've tried to give suggestions on how to restructure the series
> on a per-patch basis. In my opinion it's quite possible to split
> the series into several smaller ones, and that should really help
> with making the series cleaner and easier/faster to review/amend/merge
> vs what we have in v5.

This is true; on the other hand, the series makes sense together and,
even if the patches are more or less independent, they also all follow
the same "plan".  For reviewing v6, are you aware of Patchew's series
diff functionality?  It can tell you which patches had comments in v5,
reorder patches if applicable, and display deleted and new patches at
the right point in the series.

v4->v5 is a bit messed up because Samuel probably added a diff order
setup
(https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
but it's very useful in general.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-19 15:37         ` Igor Mammedov
@ 2018-11-19 18:02           ` Boeuf, Sebastien
  -1 siblings, 0 replies; 170+ messages in thread
From: Boeuf, Sebastien @ 2018-11-19 18:02 UTC (permalink / raw)
  To: imammedo
  Cc: sameo, peter.maydell, anthony.perard, sstabellini, jing2.liu,
	mst, qemu-devel, ehabkost, shannon.zhaosl, pbonzini, qemu-arm,
	rth, marcel.apfelbaum, xen-devel

On Mon, 2018-11-19 at 16:37 +0100, Igor Mammedov wrote:
> On Fri, 16 Nov 2018 19:42:08 +0000
> "Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:
> 
> > 
> > Hi Igor,
> > 
> > On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> > > 
> > > On Mon,  5 Nov 2018 02:40:42 +0100
> > > Samuel Ortiz <sameo@linux.intel.com> wrote:
> > >   
> > > > 
> > > > 
> > > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > 
> > > > Instead of using the machine type specific method find_i440fx()
> > > > to
> > > > retrieve the PCI bus, this commit aims to rely on the fact that
> > > > the
> > > > PCI bus is known by the structure AcpiPciHpState.
> > > > 
> > > > When the structure is initialized through acpi_pcihp_init()
> > > > call,
> > > > it saves the PCI bus, which means there is no need to invoke a
> > > > special function later on.
> > > > 
> > > > Based on the fact that find_i440fx() was only used there, this
> > > > patch also removes the function find_i440fx() itself from the
> > > > entire codebase.
> > > > 
> > > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>  
> > > Thanks for cleaning it up
> > > 
> > > minor nit:
> > > Taking into account that you're removing the '/* TODO: Q35 support */'
> > > comment along with find_i440fx(), it might be worth mentioning that
> > > in this commit message. Something along the lines of: ACPI PCIHP
> > > exists to support guests without SHPC support on the PCI
> > > based PC machine. Considering that Q35 provides native
> > > PCI-E hotplug, there is no need to add ACPI hotplug there.
> > Oh yes, sure, we can update the commit message :). But I just wanted to
> > mention that the 'pc' machine type uses ACPI PCIHP and does support
> > SHPC, so they're not mutually exclusive.
> It supports both, but is it relevant to this patch?
> 
> The point was that one shouldn't remove something silently without
> any justification/explanation, so that readers who come later
> won't wonder about the reasons why the code was removed.
> 

I understand the point, but I think the comment was wrong in the first
place, since q35 never tried to support ACPI PCIHP; it supports PCIe
native hotplug, as you mentioned.

>  
> > 
> > > 
> > > 
> > > with commit message fixed
> > > 
> > > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> > >   
> > > > 
> > > > 
> > > > ---
> > > >  include/hw/i386/pc.h  |  1 -
> > > >  hw/acpi/pcihp.c       | 10 ++++------
> > > >  hw/pci-host/piix.c    |  8 --------
> > > >  stubs/pci-host-piix.c |  6 ------
> > > >  stubs/Makefile.objs   |  1 -
> > > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > > >  delete mode 100644 stubs/pci-host-piix.c
> > > > 
> > > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > > index 44cb6bf3f3..8e5f1464eb 100644
> > > > --- a/include/hw/i386/pc.h
> > > > +++ b/include/hw/i386/pc.h
> > > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > const char *pci_type,
> > > >                      MemoryRegion *pci_memory,
> > > >                      MemoryRegion *ram_memory);
> > > >  
> > > > -PCIBus *find_i440fx(void);
> > > >  /* piix4.c */
> > > >  extern PCIDevice *piix4_dev;
> > > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > > index 80d42e12ff..254b2e50ab 100644
> > > > --- a/hw/acpi/pcihp.c
> > > > +++ b/hw/acpi/pcihp.c
> > > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > > *opaque)
> > > >      return bsel_alloc;
> > > >  }
> > > >  
> > > > -static void acpi_set_pci_info(void)
> > > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > > >  {
> > > >      static bool bsel_is_set;
> > > > -    PCIBus *bus;
> > > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > > >  
> > > >      if (bsel_is_set) {
> > > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > > >      }
> > > >      bsel_is_set = true;
> > > >  
> > > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > > -    if (bus) {
> > > > +    if (s->root) {
> > > >          /* Scan all PCI buses. Set property to enable acpi
> > > > based
> > > > hotplug. */
> > > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > > &bsel_alloc);
> > > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel,
> > > > NULL,
> > > > &bsel_alloc);
> > > >      }
> > > >  }
> > > >  
> > > > @@ -213,7 +211,7 @@ static void
> > > > acpi_pcihp_update(AcpiPciHpState
> > > > *s)
> > > >  
> > > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > > >  {
> > > > -    acpi_set_pci_info();
> > > > +    acpi_set_pci_info(s);
> > > >      acpi_pcihp_update(s);
> > > >  }
> > > >  
> > > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > > index 47293a3915..658460264b 100644
> > > > --- a/hw/pci-host/piix.c
> > > > +++ b/hw/pci-host/piix.c
> > > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > const char *pci_type,
> > > >      return b;
> > > >  }
> > > >  
> > > > -PCIBus *find_i440fx(void)
> > > > -{
> > > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > > -                                   object_resolve_path("/machi
> > > > ne/i
> > > > 440fx", NULL),
> > > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > > -    return s ? s->bus : NULL;
> > > > -}
> > > > -
> > > >  /* PIIX3 PCI to ISA bridge */
> > > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > > >  {
> > > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > > deleted file mode 100644
> > > > index 6ed81b1f21..0000000000
> > > > --- a/stubs/pci-host-piix.c
> > > > +++ /dev/null
> > > > @@ -1,6 +0,0 @@
> > > > -#include "qemu/osdep.h"
> > > > -#include "hw/i386/pc.h"
> > > > -PCIBus *find_i440fx(void)
> > > > -{
> > > > -    return NULL;
> > > > -}
> > > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > > index 5dd0aeeec6..725f78bedc 100644
> > > > --- a/stubs/Makefile.objs
> > > > +++ b/stubs/Makefile.objs
> > > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > > >  stub-obj-y += vmgenid.o
> > > >  stub-obj-y += xen-common.o
> > > >  stub-obj-y += xen-hvm.o
> > > > -stub-obj-y += pci-host-piix.o
> > > >  stub-obj-y += ram-block.o
> > > >  stub-obj-y += ramfb.o  
> > Thanks,
> > Sebastien

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
@ 2018-11-19 18:02           ` Boeuf, Sebastien
  0 siblings, 0 replies; 170+ messages in thread
From: Boeuf, Sebastien @ 2018-11-19 18:02 UTC (permalink / raw)
  To: imammedo
  Cc: peter.maydell, sstabellini, sameo, mst, jing2.liu, qemu-devel,
	shannon.zhaosl, qemu-arm, marcel.apfelbaum, xen-devel,
	anthony.perard, pbonzini, rth, ehabkost

On Mon, 2018-11-19 at 16:37 +0100, Igor Mammedov wrote:
> On Fri, 16 Nov 2018 19:42:08 +0000
> "Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:
> 
> > 
> > Hi Igor,
> > 
> > On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:
> > > 
> > > On Mon,  5 Nov 2018 02:40:42 +0100
> > > Samuel Ortiz <sameo@linux.intel.com> wrote:
> > >   
> > > > 
> > > > 
> > > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > 
> > > > Instead of using the machine type specific method find_i440fx()
> > > > to
> > > > retrieve the PCI bus, this commit aims to rely on the fact that
> > > > the
> > > > PCI bus is known by the structure AcpiPciHpState.
> > > > 
> > > > When the structure is initialized through acpi_pcihp_init()
> > > > call,
> > > > it saves the PCI bus, which means there is no need to invoke a
> > > > special function later on.
> > > > 
> > > > Based on the fact that find_i440fx() was only used there, this
> > > > patch also removes the function find_i440fx() itself from the
> > > > entire codebase.
> > > > 
> > > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>  
> > > Thanks for cleaning it up
> > > 
> > > minor nit:
> > > Taking into account that you're removing the '/* TODO: Q35 support */'
> > > comment along with find_i440fx(), it might be worth mentioning that
> > > in this commit message. Something along the lines of: ACPI PCIHP
> > > exists to support guests without SHPC support on the PCI
> > > based PC machine. Considering that Q35 provides native
> > > PCI-E hotplug, there is no need to add ACPI hotplug there.
> > Oh yes, sure, we can update the commit message :). But I just wanted to
> > mention that the 'pc' machine type uses ACPI PCIHP and does support
> > SHPC, so they're not mutually exclusive.
> It supports both, but is it relevant to this patch?
> 
> The point was that one shouldn't remove something silently without
> any justification/explanation, so that readers who come later
> won't wonder about the reasons why the code was removed.
> 

I understand the point, but I think the comment was wrong in the first
place, since q35 never tried to support ACPI PCIHP; it supports PCIe
native hotplug, as you mentioned.

>  
> > 
> > > 
> > > 
> > > with commit message fixed
> > > 
> > > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> > >   
> > > > 
> > > > 
> > > > ---
> > > >  include/hw/i386/pc.h  |  1 -
> > > >  hw/acpi/pcihp.c       | 10 ++++------
> > > >  hw/pci-host/piix.c    |  8 --------
> > > >  stubs/pci-host-piix.c |  6 ------
> > > >  stubs/Makefile.objs   |  1 -
> > > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > > >  delete mode 100644 stubs/pci-host-piix.c
> > > > 
> > > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > > index 44cb6bf3f3..8e5f1464eb 100644
> > > > --- a/include/hw/i386/pc.h
> > > > +++ b/include/hw/i386/pc.h
> > > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > const char *pci_type,
> > > >                      MemoryRegion *pci_memory,
> > > >                      MemoryRegion *ram_memory);
> > > >  
> > > > -PCIBus *find_i440fx(void);
> > > >  /* piix4.c */
> > > >  extern PCIDevice *piix4_dev;
> > > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > > index 80d42e12ff..254b2e50ab 100644
> > > > --- a/hw/acpi/pcihp.c
> > > > +++ b/hw/acpi/pcihp.c
> > > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > > *opaque)
> > > >      return bsel_alloc;
> > > >  }
> > > >  
> > > > -static void acpi_set_pci_info(void)
> > > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > > >  {
> > > >      static bool bsel_is_set;
> > > > -    PCIBus *bus;
> > > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > > >  
> > > >      if (bsel_is_set) {
> > > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > > >      }
> > > >      bsel_is_set = true;
> > > >  
> > > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > > -    if (bus) {
> > > > +    if (s->root) {
> > > >          /* Scan all PCI buses. Set property to enable acpi
> > > > based
> > > > hotplug. */
> > > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > > &bsel_alloc);
> > > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel,
> > > > NULL,
> > > > &bsel_alloc);
> > > >      }
> > > >  }
> > > >  
> > > > @@ -213,7 +211,7 @@ static void
> > > > acpi_pcihp_update(AcpiPciHpState
> > > > *s)
> > > >  
> > > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > > >  {
> > > > -    acpi_set_pci_info();
> > > > +    acpi_set_pci_info(s);
> > > >      acpi_pcihp_update(s);
> > > >  }
> > > >  
> > > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > > index 47293a3915..658460264b 100644
> > > > --- a/hw/pci-host/piix.c
> > > > +++ b/hw/pci-host/piix.c
> > > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > const char *pci_type,
> > > >      return b;
> > > >  }
> > > >  
> > > > -PCIBus *find_i440fx(void)
> > > > -{
> > > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > > -                                   object_resolve_path("/machi
> > > > ne/i
> > > > 440fx", NULL),
> > > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > > -    return s ? s->bus : NULL;
> > > > -}
> > > > -
> > > >  /* PIIX3 PCI to ISA bridge */
> > > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > > >  {
> > > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > > deleted file mode 100644
> > > > index 6ed81b1f21..0000000000
> > > > --- a/stubs/pci-host-piix.c
> > > > +++ /dev/null
> > > > @@ -1,6 +0,0 @@
> > > > -#include "qemu/osdep.h"
> > > > -#include "hw/i386/pc.h"
> > > > -PCIBus *find_i440fx(void)
> > > > -{
> > > > -    return NULL;
> > > > -}
> > > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > > index 5dd0aeeec6..725f78bedc 100644
> > > > --- a/stubs/Makefile.objs
> > > > +++ b/stubs/Makefile.objs
> > > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > > >  stub-obj-y += vmgenid.o
> > > >  stub-obj-y += xen-common.o
> > > >  stub-obj-y += xen-hvm.o
> > > > -stub-obj-y += pci-host-piix.o
> > > >  stub-obj-y += ram-block.o
> > > >  stub-obj-y += ramfb.o  
> > Thanks,
> > Sebastien
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 17:14         ` Paolo Bonzini
@ 2018-11-19 18:14           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-19 18:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Igor Mammedov, Samuel Ortiz, qemu-devel, Peter Maydell,
	Stefano Stabellini, Eduardo Habkost, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On Mon, Nov 19, 2018 at 06:14:26PM +0100, Paolo Bonzini wrote:
> On 19/11/18 16:31, Igor Mammedov wrote:
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it's quite possible to split
> > the series into several smaller ones, and that should really help
> > with making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> 
> This is true; on the other hand, the series makes sense together and,
> even if the patches are more or less independent, they also all follow
> the same "plan".  For reviewing v6, are you aware of Patchew's series
> diff functionality?  It can tell you which patches had comments in v5,
> reorder patches if applicable, and display deleted and new patches at
> the right point in the series.
> 
> v4->v5 is a bit messed up because Samuel probably added a diff order
> setup
> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
> but it's very useful in general.
> 
> Paolo

Oh I didn't realize difforder breaks patchew. Or is the problem
only if one switches from no order to difforder?

-- 
MST

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-19 18:14           ` Michael S. Tsirkin
  0 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-19 18:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Igor Mammedov,
	Richard Henderson, Eduardo Habkost

On Mon, Nov 19, 2018 at 06:14:26PM +0100, Paolo Bonzini wrote:
> On 19/11/18 16:31, Igor Mammedov wrote:
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it's quite possible to split
> > the series into several smaller ones, and that should really help
> > with making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> 
> This is true; on the other hand, the series makes sense together and,
> even if the patches are more or less independent, they also all follow
> the same "plan".  For reviewing v6, are you aware of Patchew's series
> diff functionality?  It can tell you which patches had comments in v5,
> reorder patches if applicable, and display deleted and new patches at
> the right point in the series.
> 
> v4->v5 is a bit messed up because Samuel probably added a diff order
> setup
> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
> but it's very useful in general.
> 
> Paolo

Oh I didn't realize difforder breaks patchew. Or is the problem
only if one switches from no order to difforder?

-- 
MST

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-08 14:16     ` Igor Mammedov
@ 2018-11-19 18:27       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-19 18:27 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Samuel Ortiz, qemu-devel, Shannon Zhao, Stefano Stabellini,
	Anthony Perard, Richard Henderson, Marcel Apfelbaum, xen-devel,
	Paolo Bonzini, qemu-arm, Peter Maydell, Eduardo Habkost

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). RSDT only allows for 32-bit addresses and has thus
> > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > no longer RSDTs, although RSDTs are still supported for backward
> > compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what the arm/virt board already does
> and fixes a checksum bug in the latter, while at the same time
> having no user (within the patch).
> 
> I'd suggest redoing it in a way similar to the FADT refactoring:
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
>   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
>              (both arm and x86) which stores all data in host byte order
>   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
>            ... move out to aml-build.c
>   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
>       amending it to generate rev1 variant defined by revision in AcpiRsdpData
>       (commit dd1b2037a)
> 
>   'make check V=1' shouldn't observe any ACPI tables changes after patch 2.

And your next suggestion is to add patch 6.  I guess it's doable, but
this will make a single patch a 6-patch series. At this rate this series
will be at 200 patches easily.

Automated checks are cool, but hey, it's easy to see what changed in a
disassembled table, and we do not update them blindly. So just note in
the comment that there's a table change for ARM and that the expected
files need to be updated, and we should be fine IMHO.

> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > ---
> >  include/hw/acpi/aml-build.h |  3 +++
> >  hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
> >  2 files changed, 40 insertions(+)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index c9bcb32d81..3580d0ce90 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -393,6 +393,9 @@ void
> >  build_rsdp(GArray *table_data,
> >             BIOSLinker *linker, unsigned rsdt_tbl_offset);
> >  void
> > +build_rsdp_xsdt(GArray *table_data,
> > +                BIOSLinker *linker, unsigned xsdt_tbl_offset);
> > +void
> >  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> >             const char *oem_id, const char *oem_table_id);
> >  void
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index 51b608432f..a030d40674 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> >                   (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
> >  }
> >  
> > +/* RSDP pointing at an XSDT */
> > +void
> > +build_rsdp_xsdt(GArray *rsdp_table,
> > +                BIOSLinker *linker, unsigned xsdt_tbl_offset)
> > +{
> > +    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > +    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
> > +    unsigned xsdt_pa_offset =
> > +        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
> > +    unsigned xsdt_offset =
> > +        (char *)&rsdp->length - rsdp_table->data;

There's a cleaner way to get at the offsets than pointer math:
1. save the rsdp_table length before you push
2. use offsetof() for the fields

If switching to build_append_int_noprefix() then it's even
easier - just save the length before you append the int
you intend to patch.
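
For illustration, a rough, untested sketch of the first variant, reusing
the struct and field names from the patch above (offsetof() is the
standard <stddef.h> macro):

    unsigned rsdp_offset = rsdp_table->len;    /* 1. saved before the push */
    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);

    /* 2. offsetof() instead of pointer subtraction */
    unsigned xsdt_pa_offset =
        rsdp_offset + offsetof(AcpiRsdpDescriptor, xsdt_physical_address);
    unsigned checksum_offset =
        rsdp_offset + offsetof(AcpiRsdpDescriptor, checksum);
    unsigned ext_checksum_offset =
        rsdp_offset + offsetof(AcpiRsdpDescriptor, extended_checksum);

    /* with build_append_int_noprefix() there is no struct at all: just
     * record rsdp_table->len right before appending the integer that the
     * linker will later patch */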


> > +
> > +    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
> > +                             true /* fseg memory */);
> > +
> > +    memcpy(&rsdp->signature, "RSD PTR ", 8);
> > +    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
> > +    rsdp->length = cpu_to_le32(sizeof(*rsdp));
> > +    /* version 2, we will use the XSDT pointer */
> > +    rsdp->revision = 0x02;
> > +
> > +    /* Address to be filled by Guest linker */
> > +    bios_linker_loader_add_pointer(linker,
> > +        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
> > +        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
> > +
> > +    /* Legacy checksum to be filled by Guest linker */
> > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > +        (char *)rsdp - rsdp_table->data, xsdt_offset,
> > +        (char *)&rsdp->checksum - rsdp_table->data);
> > +
> > +    /* Extended checksum to be filled by Guest linker */
> > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > +        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> > +        (char *)&rsdp->extended_checksum - rsdp_table->data);
> > +}
> > +
> >  void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
> >                         uint64_t len, int node, MemoryAffinityFlags flags)
> >  {

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
@ 2018-11-19 18:27       ` Michael S. Tsirkin
  0 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-19 18:27 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Eduardo Habkost, Shannon Zhao, qemu-arm, Marcel Apfelbaum,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). RSDT only allows for 32-bit addresses and has thus
> > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > no longer RSDTs, although RSDTs are still supported for backward
> > compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what the arm/virt board already does
> and fixes a checksum bug in the latter, while at the same time
> having no user (within the patch).
> 
> I'd suggest redoing it in a way similar to the FADT refactoring:
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
>   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
>              (both arm and x86) which stores all data in host byte order
>   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
>            ... move out to aml-build.c
>   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
>       amending it to generate rev1 variant defined by revision in AcpiRsdpData
>       (commit dd1b2037a)
> 
>   'make check V=1' shouldn't observe any ACPI tables changes after patch 2.

And your next suggestion is to add patch 6.  I guess it's doable, but
this will make a single patch a 6-patch series. At this rate this series
will be at 200 patches easily.

Automated checks are cool, but hey, it's easy to see what changed in a
disassembled table, and we do not update them blindly. So just note in
the comment that there's a table change for ARM and that the expected
files need to be updated, and we should be fine IMHO.

> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > ---
> >  include/hw/acpi/aml-build.h |  3 +++
> >  hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
> >  2 files changed, 40 insertions(+)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index c9bcb32d81..3580d0ce90 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -393,6 +393,9 @@ void
> >  build_rsdp(GArray *table_data,
> >             BIOSLinker *linker, unsigned rsdt_tbl_offset);
> >  void
> > +build_rsdp_xsdt(GArray *table_data,
> > +                BIOSLinker *linker, unsigned xsdt_tbl_offset);
> > +void
> >  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> >             const char *oem_id, const char *oem_table_id);
> >  void
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index 51b608432f..a030d40674 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> >                   (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
> >  }
> >  
> > +/* RSDP pointing at an XSDT */
> > +void
> > +build_rsdp_xsdt(GArray *rsdp_table,
> > +                BIOSLinker *linker, unsigned xsdt_tbl_offset)
> > +{
> > +    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > +    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
> > +    unsigned xsdt_pa_offset =
> > +        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
> > +    unsigned xsdt_offset =
> > +        (char *)&rsdp->length - rsdp_table->data;

There's a cleaner way to get at the offsets than pointer math:
1. save rsdp_table length before you push
2. add offset_of for fields

If switching to build_append_int_noprefix then it's even
easier - just save length before you append the int
you intend to patch.
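
A minimal sketch of that approach, assuming the AcpiRsdpDescriptor fields and
linker helpers quoted above (this is not the code from the series, just an
illustration of the saved-length-plus-offsetof() idea):

    /* relies on the same QEMU headers already used by hw/acpi/aml-build.c */
    void build_rsdp_xsdt(GArray *rsdp_table, BIOSLinker *linker,
                         unsigned xsdt_tbl_offset)
    {
        /* 1. remember where the RSDP starts before pushing it */
        unsigned rsdp_offset = rsdp_table->len;
        AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);

        bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
                                 true /* fseg memory */);

        memcpy(&rsdp->signature, "RSD PTR ", 8);
        memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
        rsdp->length = cpu_to_le32(sizeof(*rsdp));
        rsdp->revision = 0x02;   /* revision 2, so the XSDT pointer is used */

        /* 2. every linker offset is the saved start plus offsetof() */
        bios_linker_loader_add_pointer(linker,
            ACPI_BUILD_RSDP_FILE,
            rsdp_offset + offsetof(AcpiRsdpDescriptor, xsdt_physical_address),
            sizeof(rsdp->xsdt_physical_address),
            ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);

        /* legacy checksum covers the first 20 bytes, i.e. up to 'length' */
        bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
            rsdp_offset, offsetof(AcpiRsdpDescriptor, length),
            rsdp_offset + offsetof(AcpiRsdpDescriptor, checksum));

        /* extended checksum covers the whole revision 2 structure */
        bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
            rsdp_offset, sizeof *rsdp,
            rsdp_offset + offsetof(AcpiRsdpDescriptor, extended_checksum));
    }

With build_append_int_noprefix() the same idea gets even shorter: record
rsdp_table->len right before appending each integer the linker later has to
patch, and pass that saved length to the linker calls.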


> > +
> > +    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
> > +                             true /* fseg memory */);
> > +
> > +    memcpy(&rsdp->signature, "RSD PTR ", 8);
> > +    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
> > +    rsdp->length = cpu_to_le32(sizeof(*rsdp));
> > +    /* version 2, we will use the XSDT pointer */
> > +    rsdp->revision = 0x02;
> > +
> > +    /* Address to be filled by Guest linker */
> > +    bios_linker_loader_add_pointer(linker,
> > +        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
> > +        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
> > +
> > +    /* Legacy checksum to be filled by Guest linker */
> > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > +        (char *)rsdp - rsdp_table->data, xsdt_offset,
> > +        (char *)&rsdp->checksum - rsdp_table->data);
> > +
> > +    /* Extended checksum to be filled by Guest linker */
> > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > +        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> > +        (char *)&rsdp->extended_checksum - rsdp_table->data);
> > +}
> > +
> >  void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
> >                         uint64_t len, int node, MemoryAffinityFlags flags)
> >  {


^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-19 18:27       ` Michael S. Tsirkin
@ 2018-11-20  8:23         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20  8:23 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Samuel Ortiz, qemu-devel, Shannon Zhao, Stefano Stabellini,
	Anthony Perard, Richard Henderson, Marcel Apfelbaum, xen-devel,
	Paolo Bonzini, qemu-arm, Peter Maydell, Eduardo Habkost

On Mon, 19 Nov 2018 13:27:14 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:28 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > > Description Table). RSDT only allows for 32-bit addresses and has thus
> > > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > > no longer at RSDTs, although RSDTs are still supported for backward
> > > compatibility.
> > > 
> > > Since version 2.0, the RSDP also carries an extended checksum, a complete
> > > table length and a version field.
> > 
> > This patch re-implements what the arm/virt board already does
> > and fixes a checksum bug in the latter, while at the same time
> > having no user (within the patch).
> > 
> > I'd suggest redoing it in a way similar to the FADT refactoring:
> >   patch 1: fix the checksum bug in arm/virt
> >   patch 2: update the reference tables in the tests
> >   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
> >              (both arm and x86) which stores all data in host byte order
> >   patch 4: convert arm's impl. to the build_append_int_noprefix() API (commit 5d7a334f7)
> >            ... move it out to aml-build.c
> >   patch 5: reuse the generalized arm build_rsdp() for x86, dropping the x86 specific one,
> >       amending it to generate the rev1 variant defined by the revision in AcpiRsdpData
> >       (commit dd1b2037a)
> > 
> >   'make check V=1' shouldn't observe any ACPI table changes after patch 2.
> 
> And your next suggestion is to add patch 6.  I guess it's doable, but
> this will turn a single patch into a 6-patch series. At this rate this
> series will easily end up at 200 patches.
> 
> Automated checks are cool, but hey, it's easy to see what changed in a
> disassembled table, and we do not update them blindly. So just note in
> the comment that there's a table change for ARM and that the expected
> files need to be updated, and we should be fine IMHO.
The point was to move the patches that change table content first,
where we would pay extra attention to the changes in the tables; the
refactoring that follows, which shouldn't cause any changes, will then
be mostly automatic (at least at that point we won't have to worry
about table correctness).


> > > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > > ---
> > >  include/hw/acpi/aml-build.h |  3 +++
> > >  hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 40 insertions(+)
> > > 
> > > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > > index c9bcb32d81..3580d0ce90 100644
> > > --- a/include/hw/acpi/aml-build.h
> > > +++ b/include/hw/acpi/aml-build.h
> > > @@ -393,6 +393,9 @@ void
> > >  build_rsdp(GArray *table_data,
> > >             BIOSLinker *linker, unsigned rsdt_tbl_offset);
> > >  void
> > > +build_rsdp_xsdt(GArray *table_data,
> > > +                BIOSLinker *linker, unsigned xsdt_tbl_offset);
> > > +void
> > >  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> > >             const char *oem_id, const char *oem_table_id);
> > >  void
> > > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > > index 51b608432f..a030d40674 100644
> > > --- a/hw/acpi/aml-build.c
> > > +++ b/hw/acpi/aml-build.c
> > > @@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> > >                   (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
> > >  }
> > >  
> > > +/* RSDP pointing at an XSDT */
> > > +void
> > > +build_rsdp_xsdt(GArray *rsdp_table,
> > > +                BIOSLinker *linker, unsigned xsdt_tbl_offset)
> > > +{
> > > +    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > > +    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
> > > +    unsigned xsdt_pa_offset =
> > > +        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
> > > +    unsigned xsdt_offset =
> > > +        (char *)&rsdp->length - rsdp_table->data;  
> 
> There's a cleaner way to get at the offsets than pointer math:
> 1. save rsdp_table length before you push
> 2. add offset_of for fields
> 
> If switching to build_append_int_noprefix then it's even
> easier - just save length before you append the int
> you intend to patch.
> 
> 
> > > +
> > > +    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
> > > +                             true /* fseg memory */);
> > > +
> > > +    memcpy(&rsdp->signature, "RSD PTR ", 8);
> > > +    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
> > > +    rsdp->length = cpu_to_le32(sizeof(*rsdp));
> > > +    /* version 2, we will use the XSDT pointer */
> > > +    rsdp->revision = 0x02;
> > > +
> > > +    /* Address to be filled by Guest linker */
> > > +    bios_linker_loader_add_pointer(linker,
> > > +        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
> > > +        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
> > > +
> > > +    /* Legacy checksum to be filled by Guest linker */
> > > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > > +        (char *)rsdp - rsdp_table->data, xsdt_offset,
> > > +        (char *)&rsdp->checksum - rsdp_table->data);
> > > +
> > > +    /* Extended checksum to be filled by Guest linker */
> > > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > > +        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> > > +        (char *)&rsdp->extended_checksum - rsdp_table->data);
> > > +}
> > > +
> > >  void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
> > >                         uint64_t len, int node, MemoryAffinityFlags flags)
> > >  {  

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
@ 2018-11-20  8:23         ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20  8:23 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Eduardo Habkost, Shannon Zhao, qemu-arm, Marcel Apfelbaum,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

On Mon, 19 Nov 2018 13:27:14 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:28 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > > Description Table). RSDT only allows for 32-bit addresses and has thus
> > > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > > no longer at RSDTs, although RSDTs are still supported for backward
> > > compatibility.
> > > 
> > > Since version 2.0, the RSDP also carries an extended checksum, a complete
> > > table length and a version field.
> > 
> > This patch re-implements what the arm/virt board already does
> > and fixes a checksum bug in the latter, while at the same time
> > having no user (within the patch).
> > 
> > I'd suggest redoing it in a way similar to the FADT refactoring:
> >   patch 1: fix the checksum bug in arm/virt
> >   patch 2: update the reference tables in the tests
> >   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
> >              (both arm and x86) which stores all data in host byte order
> >   patch 4: convert arm's impl. to the build_append_int_noprefix() API (commit 5d7a334f7)
> >            ... move it out to aml-build.c
> >   patch 5: reuse the generalized arm build_rsdp() for x86, dropping the x86 specific one,
> >       amending it to generate the rev1 variant defined by the revision in AcpiRsdpData
> >       (commit dd1b2037a)
> > 
> >   'make check V=1' shouldn't observe any ACPI table changes after patch 2.
> 
> And your next suggestion is to add patch 6.  I guess it's doable, but
> this will turn a single patch into a 6-patch series. At this rate this
> series will easily end up at 200 patches.
> 
> Automated checks are cool, but hey, it's easy to see what changed in a
> disassembled table, and we do not update them blindly. So just note in
> the comment that there's a table change for ARM and that the expected
> files need to be updated, and we should be fine IMHO.
The point was to move the patches that change table content first,
where we would pay extra attention to the changes in the tables; the
refactoring that follows, which shouldn't cause any changes, will then
be mostly automatic (at least at that point we won't have to worry
about table correctness).


> > > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > > ---
> > >  include/hw/acpi/aml-build.h |  3 +++
> > >  hw/acpi/aml-build.c         | 37 +++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 40 insertions(+)
> > > 
> > > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > > index c9bcb32d81..3580d0ce90 100644
> > > --- a/include/hw/acpi/aml-build.h
> > > +++ b/include/hw/acpi/aml-build.h
> > > @@ -393,6 +393,9 @@ void
> > >  build_rsdp(GArray *table_data,
> > >             BIOSLinker *linker, unsigned rsdt_tbl_offset);
> > >  void
> > > +build_rsdp_xsdt(GArray *table_data,
> > > +                BIOSLinker *linker, unsigned xsdt_tbl_offset);
> > > +void
> > >  build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> > >             const char *oem_id, const char *oem_table_id);
> > >  void
> > > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > > index 51b608432f..a030d40674 100644
> > > --- a/hw/acpi/aml-build.c
> > > +++ b/hw/acpi/aml-build.c
> > > @@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
> > >                   (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
> > >  }
> > >  
> > > +/* RSDP pointing at an XSDT */
> > > +void
> > > +build_rsdp_xsdt(GArray *rsdp_table,
> > > +                BIOSLinker *linker, unsigned xsdt_tbl_offset)
> > > +{
> > > +    AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > > +    unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
> > > +    unsigned xsdt_pa_offset =
> > > +        (char *)&rsdp->xsdt_physical_address - rsdp_table->data;
> > > +    unsigned xsdt_offset =
> > > +        (char *)&rsdp->length - rsdp_table->data;  
> 
> There's a cleaner way to get at the offsets than pointer math:
> 1. save rsdp_table length before you push
> 2. add offset_of for fields
> 
> If switching to build_append_int_noprefix then it's even
> easier - just save length before you append the int
> you intend to patch.
> 
> 
> > > +
> > > +    bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
> > > +                             true /* fseg memory */);
> > > +
> > > +    memcpy(&rsdp->signature, "RSD PTR ", 8);
> > > +    memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
> > > +    rsdp->length = cpu_to_le32(sizeof(*rsdp));
> > > +    /* version 2, we will use the XSDT pointer */
> > > +    rsdp->revision = 0x02;
> > > +
> > > +    /* Address to be filled by Guest linker */
> > > +    bios_linker_loader_add_pointer(linker,
> > > +        ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
> > > +        ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
> > > +
> > > +    /* Legacy checksum to be filled by Guest linker */
> > > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > > +        (char *)rsdp - rsdp_table->data, xsdt_offset,
> > > +        (char *)&rsdp->checksum - rsdp_table->data);
> > > +
> > > +    /* Extended checksum to be filled by Guest linker */
> > > +    bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> > > +        (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> > > +        (char *)&rsdp->extended_checksum - rsdp_table->data);
> > > +}
> > > +
> > >  void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
> > >                         uint64_t len, int node, MemoryAffinityFlags flags)
> > >  {  



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
  2018-11-19 18:02           ` Boeuf, Sebastien
@ 2018-11-20  8:26             ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20  8:26 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: sameo, peter.maydell, anthony.perard, sstabellini, jing2.liu,
	mst, qemu-devel, ehabkost, shannon.zhaosl, pbonzini, qemu-arm,
	rth, marcel.apfelbaum, xen-devel

On Mon, 19 Nov 2018 18:02:53 +0000
"Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:

> On Mon, 2018-11-19 at 16:37 +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 19:42:08 +0000
> > "Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:
> >   
> > > 
> > > Hi Igor,
> > > 
> > > On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:  
> > > > 
> > > > On Mon,  5 Nov 2018 02:40:42 +0100
> > > > Samuel Ortiz <sameo@linux.intel.com> wrote:
> > > >     
> > > > > 
> > > > > 
> > > > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > > 
> > > > > Instead of using the machine type specific method find_i440fx() to
> > > > > retrieve the PCI bus, this commit relies on the fact that the PCI
> > > > > bus is known by the AcpiPciHpState structure.
> > > > > 
> > > > > When the structure is initialized through the acpi_pcihp_init() call,
> > > > > it saves the PCI bus, which means there is no need to invoke a
> > > > > special function later on.
> > > > > 
> > > > > Since find_i440fx() was only used there, this patch also removes the
> > > > > find_i440fx() function itself from the entire codebase.
> > > > > 
> > > > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>    
> > > > Thanks for cleaning it up
> > > > 
> > > > minor nit:
> > > > Taking into account that you're removing the '/* TODO: Q35 support */'
> > > > comment along with find_i440fx(), it might be worth mentioning that
> > > > in this commit message. Something along the lines that ACPI PCIHP
> > > > exists to support guests without SHPC support on the PCI-based
> > > > PC machine. Considering that Q35 provides native PCI-E hotplug,
> > > > there is no need to add ACPI hotplug there.
> > > Oh yes, sure, we can update the commit message :). But I just wanted to
> > > mention that the 'pc' machine type uses ACPI PCIHP and does support
> > > SHPC, so it's not mutually exclusive.
> > It supports both, but is that relevant to this patch?
> > 
> > The point was that one shouldn't remove something silently, without
> > any justification/explanation, so that readers who come later
> > won't wonder about the reasons why the code was removed.
> >   
> 
> I understand the point, but I think the comment was wrong in the first
> place since q35 never tried to support ACPI PCIHP, as it supports PCIe
> native hotplug, as you mentioned.
OK.

When you have something ready, feel free to ping me
(I don't mind reviewing on github if that helps to speed up the process).

> 
> >    
> > >   
> > > > 
> > > > 
> > > > with commit message fixed
> > > > 
> > > > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> > > >     
> > > > > 
> > > > > 
> > > > > ---
> > > > >  include/hw/i386/pc.h  |  1 -
> > > > >  hw/acpi/pcihp.c       | 10 ++++------
> > > > >  hw/pci-host/piix.c    |  8 --------
> > > > >  stubs/pci-host-piix.c |  6 ------
> > > > >  stubs/Makefile.objs   |  1 -
> > > > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > > > >  delete mode 100644 stubs/pci-host-piix.c
> > > > > 
> > > > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > > > index 44cb6bf3f3..8e5f1464eb 100644
> > > > > --- a/include/hw/i386/pc.h
> > > > > +++ b/include/hw/i386/pc.h
> > > > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > > const char *pci_type,
> > > > >                      MemoryRegion *pci_memory,
> > > > >                      MemoryRegion *ram_memory);
> > > > >  
> > > > > -PCIBus *find_i440fx(void);
> > > > >  /* piix4.c */
> > > > >  extern PCIDevice *piix4_dev;
> > > > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > > > index 80d42e12ff..254b2e50ab 100644
> > > > > --- a/hw/acpi/pcihp.c
> > > > > +++ b/hw/acpi/pcihp.c
> > > > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > > > *opaque)
> > > > >      return bsel_alloc;
> > > > >  }
> > > > >  
> > > > > -static void acpi_set_pci_info(void)
> > > > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > > > >  {
> > > > >      static bool bsel_is_set;
> > > > > -    PCIBus *bus;
> > > > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > > > >  
> > > > >      if (bsel_is_set) {
> > > > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > > > >      }
> > > > >      bsel_is_set = true;
> > > > >  
> > > > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > > > -    if (bus) {
> > > > > +    if (s->root) {
> > > > >          /* Scan all PCI buses. Set property to enable acpi
> > > > > based
> > > > > hotplug. */
> > > > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > > > &bsel_alloc);
> > > > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel,
> > > > > NULL,
> > > > > &bsel_alloc);
> > > > >      }
> > > > >  }
> > > > >  
> > > > > @@ -213,7 +211,7 @@ static void
> > > > > acpi_pcihp_update(AcpiPciHpState
> > > > > *s)
> > > > >  
> > > > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > > > >  {
> > > > > -    acpi_set_pci_info();
> > > > > +    acpi_set_pci_info(s);
> > > > >      acpi_pcihp_update(s);
> > > > >  }
> > > > >  
> > > > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > > > index 47293a3915..658460264b 100644
> > > > > --- a/hw/pci-host/piix.c
> > > > > +++ b/hw/pci-host/piix.c
> > > > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > > const char *pci_type,
> > > > >      return b;
> > > > >  }
> > > > >  
> > > > > -PCIBus *find_i440fx(void)
> > > > > -{
> > > > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > > > -                                   object_resolve_path("/machine/i440fx", NULL),
> > > > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > > > -    return s ? s->bus : NULL;
> > > > > -}
> > > > > -
> > > > >  /* PIIX3 PCI to ISA bridge */
> > > > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > > > >  {
> > > > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > > > deleted file mode 100644
> > > > > index 6ed81b1f21..0000000000
> > > > > --- a/stubs/pci-host-piix.c
> > > > > +++ /dev/null
> > > > > @@ -1,6 +0,0 @@
> > > > > -#include "qemu/osdep.h"
> > > > > -#include "hw/i386/pc.h"
> > > > > -PCIBus *find_i440fx(void)
> > > > > -{
> > > > > -    return NULL;
> > > > > -}
> > > > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > > > index 5dd0aeeec6..725f78bedc 100644
> > > > > --- a/stubs/Makefile.objs
> > > > > +++ b/stubs/Makefile.objs
> > > > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > > > >  stub-obj-y += vmgenid.o
> > > > >  stub-obj-y += xen-common.o
> > > > >  stub-obj-y += xen-hvm.o
> > > > > -stub-obj-y += pci-host-piix.o
> > > > >  stub-obj-y += ram-block.o
> > > > >  stub-obj-y += ramfb.o    
> > > Thanks,
> > > Sebastien

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState
@ 2018-11-20  8:26             ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20  8:26 UTC (permalink / raw)
  To: Boeuf, Sebastien
  Cc: peter.maydell, sstabellini, sameo, mst, jing2.liu, qemu-devel,
	shannon.zhaosl, qemu-arm, marcel.apfelbaum, xen-devel,
	anthony.perard, pbonzini, rth, ehabkost

On Mon, 19 Nov 2018 18:02:53 +0000
"Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:

> On Mon, 2018-11-19 at 16:37 +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 19:42:08 +0000
> > "Boeuf, Sebastien" <sebastien.boeuf@intel.com> wrote:
> >   
> > > 
> > > Hi Igor,
> > > 
> > > On Fri, 2018-11-16 at 10:39 +0100, Igor Mammedov wrote:  
> > > > 
> > > > On Mon,  5 Nov 2018 02:40:42 +0100
> > > > Samuel Ortiz <sameo@linux.intel.com> wrote:
> > > >     
> > > > > 
> > > > > 
> > > > > From: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > > 
> > > > > Instead of using the machine type specific method find_i440fx() to
> > > > > retrieve the PCI bus, this commit relies on the fact that the PCI
> > > > > bus is known by the AcpiPciHpState structure.
> > > > > 
> > > > > When the structure is initialized through the acpi_pcihp_init() call,
> > > > > it saves the PCI bus, which means there is no need to invoke a
> > > > > special function later on.
> > > > > 
> > > > > Since find_i440fx() was only used there, this patch also removes the
> > > > > find_i440fx() function itself from the entire codebase.
> > > > > 
> > > > > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > > > Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
> > > > > Signed-off-by: Jing Liu <jing2.liu@linux.intel.com>    
> > > > Thanks for cleaning it up
> > > > 
> > > > minor nit:
> > > > Taking into account that you're removing the '/* TODO: Q35 support */'
> > > > comment along with find_i440fx(), it might be worth mentioning that
> > > > in this commit message. Something along the lines that ACPI PCIHP
> > > > exists to support guests without SHPC support on the PCI-based
> > > > PC machine. Considering that Q35 provides native PCI-E hotplug,
> > > > there is no need to add ACPI hotplug there.
> > > Oh yes, sure, we can update the commit message :). But I just wanted to
> > > mention that the 'pc' machine type uses ACPI PCIHP and does support
> > > SHPC, so it's not mutually exclusive.
> > It supports both, but is that relevant to this patch?
> > 
> > The point was that one shouldn't remove something silently, without
> > any justification/explanation, so that readers who come later
> > won't wonder about the reasons why the code was removed.
> >   
> 
> I understand the point, but I think the comment was wrong in the first
> place since q35 never tried to support ACPI PCIHP, as it supports PCIe
> native hotplug, as you mentioned.
OK.

When you have something ready, feel free to ping me
(I don't mind reviewing on github if that helps to speed up the process).

> 
> >    
> > >   
> > > > 
> > > > 
> > > > with commit message fixed
> > > > 
> > > > Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> > > >     
> > > > > 
> > > > > 
> > > > > ---
> > > > >  include/hw/i386/pc.h  |  1 -
> > > > >  hw/acpi/pcihp.c       | 10 ++++------
> > > > >  hw/pci-host/piix.c    |  8 --------
> > > > >  stubs/pci-host-piix.c |  6 ------
> > > > >  stubs/Makefile.objs   |  1 -
> > > > >  5 files changed, 4 insertions(+), 22 deletions(-)
> > > > >  delete mode 100644 stubs/pci-host-piix.c
> > > > > 
> > > > > diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > > > > index 44cb6bf3f3..8e5f1464eb 100644
> > > > > --- a/include/hw/i386/pc.h
> > > > > +++ b/include/hw/i386/pc.h
> > > > > @@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > > const char *pci_type,
> > > > >                      MemoryRegion *pci_memory,
> > > > >                      MemoryRegion *ram_memory);
> > > > >  
> > > > > -PCIBus *find_i440fx(void);
> > > > >  /* piix4.c */
> > > > >  extern PCIDevice *piix4_dev;
> > > > >  int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
> > > > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > > > index 80d42e12ff..254b2e50ab 100644
> > > > > --- a/hw/acpi/pcihp.c
> > > > > +++ b/hw/acpi/pcihp.c
> > > > > @@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void
> > > > > *opaque)
> > > > >      return bsel_alloc;
> > > > >  }
> > > > >  
> > > > > -static void acpi_set_pci_info(void)
> > > > > +static void acpi_set_pci_info(AcpiPciHpState *s)
> > > > >  {
> > > > >      static bool bsel_is_set;
> > > > > -    PCIBus *bus;
> > > > >      unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
> > > > >  
> > > > >      if (bsel_is_set) {
> > > > > @@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
> > > > >      }
> > > > >      bsel_is_set = true;
> > > > >  
> > > > > -    bus = find_i440fx(); /* TODO: Q35 support */
> > > > > -    if (bus) {
> > > > > +    if (s->root) {
> > > > >          /* Scan all PCI buses. Set property to enable acpi
> > > > > based
> > > > > hotplug. */
> > > > > -        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL,
> > > > > &bsel_alloc);
> > > > > +        pci_for_each_bus_depth_first(s->root, acpi_set_bsel,
> > > > > NULL,
> > > > > &bsel_alloc);
> > > > >      }
> > > > >  }
> > > > >  
> > > > > @@ -213,7 +211,7 @@ static void
> > > > > acpi_pcihp_update(AcpiPciHpState
> > > > > *s)
> > > > >  
> > > > >  void acpi_pcihp_reset(AcpiPciHpState *s)
> > > > >  {
> > > > > -    acpi_set_pci_info();
> > > > > +    acpi_set_pci_info(s);
> > > > >      acpi_pcihp_update(s);
> > > > >  }
> > > > >  
> > > > > diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
> > > > > index 47293a3915..658460264b 100644
> > > > > --- a/hw/pci-host/piix.c
> > > > > +++ b/hw/pci-host/piix.c
> > > > > @@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type,
> > > > > const char *pci_type,
> > > > >      return b;
> > > > >  }
> > > > >  
> > > > > -PCIBus *find_i440fx(void)
> > > > > -{
> > > > > -    PCIHostState *s = OBJECT_CHECK(PCIHostState,
> > > > > -                                   object_resolve_path("/machine/i440fx", NULL),
> > > > > -                                   TYPE_PCI_HOST_BRIDGE);
> > > > > -    return s ? s->bus : NULL;
> > > > > -}
> > > > > -
> > > > >  /* PIIX3 PCI to ISA bridge */
> > > > >  static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
> > > > >  {
> > > > > diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
> > > > > deleted file mode 100644
> > > > > index 6ed81b1f21..0000000000
> > > > > --- a/stubs/pci-host-piix.c
> > > > > +++ /dev/null
> > > > > @@ -1,6 +0,0 @@
> > > > > -#include "qemu/osdep.h"
> > > > > -#include "hw/i386/pc.h"
> > > > > -PCIBus *find_i440fx(void)
> > > > > -{
> > > > > -    return NULL;
> > > > > -}
> > > > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > > > index 5dd0aeeec6..725f78bedc 100644
> > > > > --- a/stubs/Makefile.objs
> > > > > +++ b/stubs/Makefile.objs
> > > > > @@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
> > > > >  stub-obj-y += vmgenid.o
> > > > >  stub-obj-y += xen-common.o
> > > > >  stub-obj-y += xen-hvm.o
> > > > > -stub-obj-y += pci-host-piix.o
> > > > >  stub-obj-y += ram-block.o
> > > > >  stub-obj-y += ramfb.o    
> > > Thanks,
> > > Sebastien



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 17:14         ` Paolo Bonzini
                           ` (2 preceding siblings ...)
  (?)
@ 2018-11-20 12:57         ` Igor Mammedov
  2018-11-20 21:36             ` Paolo Bonzini
  -1 siblings, 1 reply; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20 12:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Samuel Ortiz, qemu-devel, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Michael S. Tsirkin, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On Mon, 19 Nov 2018 18:14:26 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 19/11/18 16:31, Igor Mammedov wrote:
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it is quite possible to split the
> > series into several smaller ones, and it should really help with
> > making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> 
> This is true, on the other hand the series makes sense together and,
> even if the patches are more or less independent, they also all follow
> the same "plan".  For reviewing v6, are you aware of Patchew's series
> diff functionality?  It can tell you which patches had comments in v5,
> reorder patches if applicable, and display deleted and new patches at
> the right point in the series.
Thanks, I'll give it a try.

The suggestion to split the series mostly comes from a contributor's point of view;
it's much easier to amend a small series than a larger one.


> v4->v5 is a bit messed up because Samuel probably added a diff order
> setup
> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
> but it's very useful in general.
> Paolo

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 17:14         ` Paolo Bonzini
  (?)
  (?)
@ 2018-11-20 12:57         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-20 12:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson, Eduardo Habkost

On Mon, 19 Nov 2018 18:14:26 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 19/11/18 16:31, Igor Mammedov wrote:
> > I've tried to give suggestions how to restructure series
> > on per patch basis. In my opinion it quite possible to split
> > series in several smaller ones and it should really help with
> > making series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.  
> 
> This is true, on the other hand the series makes sense together and,
> even if the patches are more or less independent, they also all follow
> the same "plan".  For reviewing v6, are you aware of Patchew's series
> diff functionality?  It can tell you which patches had comments in v5,
> reorder patches if applicable, and display deleted and new patches at
> the right point in the series.
Thanks, I'll give it a try.

The suggestion to split the series mostly comes from a contributor's point of view;
it's much easier to amend a small series than a larger one.


> v4->v5 is a bit messed up because Samuel probably added a diff order
> setup
> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
> but it's very useful in general.
> Paolo



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 18:14           ` Michael S. Tsirkin
@ 2018-11-20 21:35             ` Paolo Bonzini
  -1 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-20 21:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Igor Mammedov, Samuel Ortiz, qemu-devel, Peter Maydell,
	Stefano Stabellini, Eduardo Habkost, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On 19/11/18 19:14, Michael S. Tsirkin wrote:
> On Mon, Nov 19, 2018 at 06:14:26PM +0100, Paolo Bonzini wrote:
>> On 19/11/18 16:31, Igor Mammedov wrote:
>>> I've tried to give suggestions on how to restructure the series
>>> on a per-patch basis. In my opinion it is quite possible to split the
>>> series into several smaller ones, and it should really help with
>>> making the series cleaner and easier/faster to review/amend/merge
>>> vs what we have in v5.
>>
>> This is true, on the other hand the series makes sense together and,
>> even if the patches are more or less independent, they also all follow
>> the same "plan".  For reviewing v6, are you aware of Patchew's series
>> diff functionality?  It can tell you which patches had comments in v5,
>> reorder patches if applicable, and display deleted and new patches at
>> the right point in the series.
>>
>> v4->v5 is a bit messed up because Samuel probably added a diff order
>> setup
>> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
>> but it's very useful in general.
>>
>> Paolo
> 
> Oh I didn't realize difforder breaks patchew. Or is the problem
> only if one switches from no order to difforder?

No, it's just that switching it on makes the inter-version diff much
larger, because all hunks are reordered.  difforder is not a problem.

Paolo

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-20 21:35             ` Paolo Bonzini
  0 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-20 21:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Igor Mammedov,
	Richard Henderson, Eduardo Habkost

On 19/11/18 19:14, Michael S. Tsirkin wrote:
> On Mon, Nov 19, 2018 at 06:14:26PM +0100, Paolo Bonzini wrote:
>> On 19/11/18 16:31, Igor Mammedov wrote:
>>> I've tried to give suggestions on how to restructure the series
>>> on a per-patch basis. In my opinion it is quite possible to split the
>>> series into several smaller ones, and it should really help with
>>> making the series cleaner and easier/faster to review/amend/merge
>>> vs what we have in v5.
>>
>> This is true, on the other hand the series makes sense together and,
>> even if the patches are more or less independent, they also all follow
>> the same "plan".  For reviewing v6, are you aware of Patchew's series
>> diff functionality?  It can tell you which patches had comments in v5,
>> reorder patches if applicable, and display deleted and new patches at
>> the right point in the series.
>>
>> v4->v5 is a bit messed up because Samuel probably added a diff order
>> setup
>> (https://patchew.org/QEMU/20181101102303.16439-1-sameo@linux.intel.com/diff/20181105014047.26447-1-sameo@linux.intel.com/)
>> but it's very useful in general.
>>
>> Paolo
> 
> Oh I didn't realize difforder breaks patchew. Or is the problem
> only if one switches from no order to difforder?

No, it's just that switching it on makes the inter-version diff much
larger, because all hunks are reordered.  difforder is not a problem.

Paolo



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-20 12:57         ` Igor Mammedov
@ 2018-11-20 21:36             ` Paolo Bonzini
  0 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-20 21:36 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Samuel Ortiz, qemu-devel, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Michael S. Tsirkin, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On 20/11/18 13:57, Igor Mammedov wrote:
> On Mon, 19 Nov 2018 18:14:26 +0100
> Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
>> On 19/11/18 16:31, Igor Mammedov wrote:
>>> I've tried to give suggestions on how to restructure the series
>>> on a per-patch basis. In my opinion it is quite possible to split the
>>> series into several smaller ones, and it should really help with
>>> making the series cleaner and easier/faster to review/amend/merge
>>> vs what we have in v5.
>>
>> This is true, on the other hand the series makes sense together and,
>> even if the patches are more or less independent, they also all follow
>> the same "plan".  For reviewing v6, are you aware of Patchew's series
>> diff functionality?  It can tell you which patches had comments in v5,
>> reorder patches if applicable, and display deleted and new patches at
>> the right point in the series.
> Thanks, I'll give it a try.
> 
> The suggestion to split the series mostly comes from a contributor's point of view;
> it's much easier to amend a small series than a larger one.

That's true, on the other hand rules exist to have exceptions. :)  IIRC
your AML builder patch was also a huge series, this is not very different.

Paolo

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-20 21:36             ` Paolo Bonzini
  0 siblings, 0 replies; 170+ messages in thread
From: Paolo Bonzini @ 2018-11-20 21:36 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson, Eduardo Habkost

On 20/11/18 13:57, Igor Mammedov wrote:
> On Mon, 19 Nov 2018 18:14:26 +0100
> Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
>> On 19/11/18 16:31, Igor Mammedov wrote:
>>> I've tried to give suggestions on how to restructure the series
>>> on a per-patch basis. In my opinion it is quite possible to split the
>>> series into several smaller ones, and it should really help with
>>> making the series cleaner and easier/faster to review/amend/merge
>>> vs what we have in v5.
>>
>> This is true, on the other hand the series makes sense together and,
>> even if the patches are more or less independent, they also all follow
>> the same "plan".  For reviewing v6, are you aware of Patchew's series
>> diff functionality?  It can tell you which patches had comments in v5,
>> reorder patches if applicable, and display deleted and new patches at
>> the right point in the series.
> Thanks, I'll give it a try.
> 
> The suggestion to split the series mostly comes from a contributor's point of view;
> it's much easier to amend a small series than a larger one.

That's true, on the other hand rules exist to have exceptions. :)  IIRC
your AML builder patch was also a huge series, this is not very different.

Paolo



^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-19 15:31       ` Igor Mammedov
@ 2018-11-21 12:35         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-21 12:35 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Paolo Bonzini, Samuel Ortiz, qemu-devel, Peter Maydell,
	Stefano Stabellini, Eduardo Habkost, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> On Fri, 16 Nov 2018 17:37:54 +0100
> Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> > On 16/11/18 17:29, Igor Mammedov wrote:
> > > General suggestions for this series:
> > >   1. Preferably don't do multiple changes within a patch,
> > >      nor post huge patches (unless it's pure code movement).
> > >      (it's easy to squash patches later if necessary)
> > >   2. Start small: pick a table, generalize it and send it as
> > >      one small patchset. Tables are often independent,
> > >      and it's much easier on both author/reviewer to agree upon
> > >      changes and rewrite it if necessary.
> > 
> > How would that be done?  This series is on the bigger side, agreed, but
> > most of it is really just code movement.  It's a starting point, having
> > a generic ACPI library is way beyond what this is trying to do.
> I've tried to give suggestions on how to restructure the series
> on a per-patch basis. In my opinion it is quite possible to split the
> series into several smaller ones, and it should really help with
> making the series cleaner and easier/faster to review/amend/merge
> vs what we have in v5.
> (it's more frustrating to rework a large series vs a smaller one)
> 
> If something isn't clear, it's easy to reach out to me here
> or directly (email/irc/github) for clarification/feedback.

I assume the #1 goal is to add reduced HW support.  So another
option to speed up merging is to just go ahead and duplicate a
bunch of code, e.g. in pc_virt.c, acpi/reduced.c or in any other
file.
This way it might be easier to see what's common code and what isn't.
And I think offline Igor said he might prefer it that way. Right, Igor?

> > 
> > Paolo
> > 
> > >   3. when you think about refactoring acpi into a generic API
> > >      think about it as routines that go into a separate library
> > >      (pure acpi spec code) and qemu/acpi glue routines and
> > >       divide them correspondingly.  
> > 

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-21 12:35         ` Michael S. Tsirkin
  0 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-21 12:35 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> On Fri, 16 Nov 2018 17:37:54 +0100
> Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> > On 16/11/18 17:29, Igor Mammedov wrote:
> > > General suggestions for this series:
> > >   1. Preferably don't do multiple changes within a patch,
> > >      nor post huge patches (unless it's pure code movement).
> > >      (it's easy to squash patches later if necessary)
> > >   2. Start small: pick a table, generalize it and send it as
> > >      one small patchset. Tables are often independent,
> > >      and it's much easier on both author/reviewer to agree upon
> > >      changes and rewrite it if necessary.
> > 
> > How would that be done?  This series is on the bigger side, agreed, but
> > most of it is really just code movement.  It's a starting point, having
> > a generic ACPI library is way beyond what this is trying to do.
> I've tried to give suggestions on how to restructure the series
> on a per-patch basis. In my opinion it is quite possible to split the
> series into several smaller ones, and it should really help with
> making the series cleaner and easier/faster to review/amend/merge
> vs what we have in v5.
> (it's more frustrating to rework a large series vs a smaller one)
> 
> If something isn't clear, it's easy to reach out to me here
> or directly (email/irc/github) for clarification/feedback.

I assume the #1 goal is to add reduced HW support.  So another
option to speed up merging is to just go ahead and duplicate a
bunch of code, e.g. in pc_virt.c, acpi/reduced.c or in any other
file.
This way it might be easier to see what's common code and what isn't.
And I think offline Igor said he might prefer it that way. Right, Igor?

> > 
> > Paolo
> > 
> > >   3. when you think about refactoring acpi into a generic API
> > >      think about it as routines that go into a separate library
> > >      (pure acpi spec code) and qemu/acpi glue routines and
> > >       divide them correspondingly.  
> > 


^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 12:35         ` Michael S. Tsirkin
@ 2018-11-21 13:50           ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 13:50 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Igor Mammedov, Peter Maydell, Stefano Stabellini, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

Hi Michael,

On Wed, Nov 21, 2018 at 07:35:47AM -0500, Michael S. Tsirkin wrote:
> On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 17:37:54 +0100
> > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > 
> > > On 16/11/18 17:29, Igor Mammedov wrote:
> > > > General suggestions for this series:
> > > >   1. Preferably don't do multiple changes within a patch,
> > > >      nor post huge patches (unless it's pure code movement).
> > > >      (it's easy to squash patches later if necessary)
> > > >   2. Start small: pick a table, generalize it and send it as
> > > >      one small patchset. Tables are often independent,
> > > >      and it's much easier on both author/reviewer to agree upon
> > > >      changes and rewrite it if necessary.
> > > 
> > > How would that be done?  This series is on the bigger side, agreed, but
> > > most of it is really just code movement.  It's a starting point, having
> > > a generic ACPI library is way beyond what this is trying to do.
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it is quite possible to split the
> > series into several smaller ones, and it should really help with
> > making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> > (it's more frustrating to rework a large series vs a smaller one)
> > 
> > If something isn't clear, it's easy to reach out to me here
> > or directly (email/irc/github) for clarification/feedback.
> 
> I assume the #1 goal is to add reduced HW support.
From our perspective, yes. From the project's point of view, it's about
making the current ACPI code more generic and not bound to any specific
machine type.

> So another
> option to speed up merging is to just go ahead and duplicate a
> bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> file.
It's precisely what we wanted to avoid in the very first place, and we
assumed this would be largely frowned upon by the community. It's also a
burden for everyone to maintain that amount of duplicated code. Also, I
suppose this would mean we'd eventually have to de-duplicate and
factor things back in.
Honestly I'd rather not rush things out and work on code sharing first.
I'll answer Igor's numerous comments today and will start addressing
some of his concerns right away as well.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-21 13:50           ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 13:50 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Paolo Bonzini, qemu-devel, Shannon Zhao, qemu-arm, xen-devel,
	Anthony Perard, Igor Mammedov, Richard Henderson

Hi Michael,

On Wed, Nov 21, 2018 at 07:35:47AM -0500, Michael S. Tsirkin wrote:
> On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 17:37:54 +0100
> > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > 
> > > On 16/11/18 17:29, Igor Mammedov wrote:
> > > > General suggestions for this series:
> > > >   1. Preferably don't do multiple changes within a patch,
> > > >      nor post huge patches (unless it's pure code movement).
> > > >      (it's easy to squash patches later if necessary)
> > > >   2. Start small: pick a table, generalize it and send it as
> > > >      one small patchset. Tables are often independent,
> > > >      and it's much easier on both author/reviewer to agree upon
> > > >      changes and rewrite it if necessary.
> > > 
> > > How would that be done?  This series is on the bigger side, agreed, but
> > > most of it is really just code movement.  It's a starting point, having
> > > a generic ACPI library is way beyond what this is trying to do.
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it is quite possible to split the
> > series into several smaller ones, and it should really help with
> > making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> > (it's more frustrating to rework a large series vs a smaller one)
> > 
> > If something isn't clear, it's easy to reach out to me here
> > or directly (email/irc/github) for clarification/feedback.
> 
> I assume the #1 goal is to add reduced HW support.
From our perspective, yes. From the project's point of view, it's about
making the current ACPI code more generic and not bound to any specific
machine type.

> So another
> option to speed up merging is to just go ahead and duplicate a
> bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> file.
It's precisely what we wanted to avoid in the very first place, and we
assumed this would be largely frowned upon by the community. It's also a
burden for everyone to maintain that amount of duplicated code. Also, I
suppose this would mean we'd eventually have to de-duplicate and
factor things back in.
Honestly I'd rather not rush things out and work on code sharing first.
I'll answer Igor's numerous comments today and will start addressing
some of his concerns right away as well.

Cheers,
Samuel.


^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 13:50           ` Samuel Ortiz
@ 2018-11-21 13:57             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-21 13:57 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Igor Mammedov, Peter Maydell, Stefano Stabellini, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Wed, Nov 21, 2018 at 02:50:30PM +0100, Samuel Ortiz wrote:
> Hi Michael,
> 
> On Wed, Nov 21, 2018 at 07:35:47AM -0500, Michael S. Tsirkin wrote:
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > > 
> > > > On 16/11/18 17:29, Igor Mammedov wrote:
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch,
> > > > >      nor post huge patches (unless it's pure code movement).
> > > > >      (it's easy to squash patches later if necessary)
> > > > >   2. Start small: pick a table, generalize it and send it as
> > > > >      one small patchset. Tables are often independent,
> > > > >      and it's much easier on both author/reviewer to agree upon
> > > > >      changes and rewrite it if necessary.
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.
> > > I've tried to give suggestions on how to restructure the series
> > > on a per-patch basis. In my opinion it is quite possible to split the
> > > series into several smaller ones, and it should really help with
> > > making the series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework a large series vs a smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feedback.
> > 
> > I assume the #1 goal is to add reduced HW support.
> From our perspective, yes. From the project's point of view, it's about
> making the current ACPI code more generic and not bound to any specific
> machine type.
> 
> > So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> It's precisely what we wanted to avoid in the very first place, and we
> assumed this would be largely frowned upon by the community. It's also a
> burden for everyone to maintain that amount of duplicated code. Also, I
> suppose this would mean we'd eventually have to de-duplicate and
> factor things back in.

For sure, that's the plan.

> Honestly I'd rather not rush things out and work on code sharing first.
> I'll answer Igor's numerous comments today and will start addressing
> some of his concerns right aways as well.
> 
> Cheers,
> Samuel.

OK, no problem then - just trying to make sure you aren't blocked.

-- 
MST

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-21 13:57             ` Michael S. Tsirkin
  0 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2018-11-21 13:57 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Paolo Bonzini, qemu-devel, Shannon Zhao, qemu-arm, xen-devel,
	Anthony Perard, Igor Mammedov, Richard Henderson

On Wed, Nov 21, 2018 at 02:50:30PM +0100, Samuel Ortiz wrote:
> Hi Michael,
> 
> On Wed, Nov 21, 2018 at 07:35:47AM -0500, Michael S. Tsirkin wrote:
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > > 
> > > > On 16/11/18 17:29, Igor Mammedov wrote:
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >      neither post huge patches (unless it's pure code movement).
> > > > >      (it's easy to squash patches later it necessary)
> > > > >   2. Start small, pick a table generalize it and send as
> > > > >      one small patchset. Tables are often independent
> > > > >      and it's much easier on both author/reviewer to agree upon
> > > > >      changes and rewrite it if necessary.  
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.
> > > I've tried to give suggestions how to restructure series
> > > on per patch basis. In my opinion it quite possible to split
> > > series in several smaller ones and it should really help with
> > > making series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework large series vs smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feed back.
> > 
> > I assume the #1 goal is to add reduced HW support.
> From our perspective, yes. From the project's point of view, it's about
> making the current ACPI code more generic and not bound to any specific
> machine type.
> 
> > So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> It's precisely what we wanted to avoid in the very first place and we
> assumed this would be largely frowned upon by the community. It's also a
> burden for everyone to maintain that amount of duplicated code. Also I
> suppose this would also mean we'd have to eventually de-duplicate and
> factorize things in.

For sure, that's the plan.

> Honestly I'd rather not rush things out and work on code sharing first.
> I'll answer Igor's numerous comments today and will start addressing
> some of his concerns right aways as well.
> 
> Cheers,
> Samuel.

OK, no problem then - just trying to make sure you aren't blocked.

-- 
MST

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 12:35         ` Michael S. Tsirkin
@ 2018-11-21 14:15           ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-21 14:15 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Samuel Ortiz, qemu-devel, Peter Maydell,
	Stefano Stabellini, Eduardo Habkost, Shannon Zhao, qemu-arm,
	Anthony Perard, xen-devel, Richard Henderson

On Wed, 21 Nov 2018 07:35:47 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 17:37:54 +0100
> > Paolo Bonzini <pbonzini@redhat.com> wrote:
> >   
> > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > General suggestions for this series:
> > > >   1. Preferably don't do multiple changes within a patch
> > > >      neither post huge patches (unless it's pure code movement).
> > > >      (it's easy to squash patches later it necessary)
> > > >   2. Start small, pick a table generalize it and send as
> > > >      one small patchset. Tables are often independent
> > > >      and it's much easier on both author/reviewer to agree upon
> > > >      changes and rewrite it if necessary.    
> > > 
> > > How would that be done?  This series is on the bigger side, agreed, but
> > > most of it is really just code movement.  It's a starting point, having
> > > a generic ACPI library is way beyond what this is trying to do.  
> > I've tried to give suggestions how to restructure series
> > on per patch basis. In my opinion it quite possible to split
> > series in several smaller ones and it should really help with
> > making series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> > (it's more frustrating to rework large series vs smaller one)
> > 
> > If something isn't clear, it's easy to reach out to me here
> > or directly (email/irc/github) for clarification/feed back.  
> 
> I assume the #1 goal is to add reduced HW support.  So another
> option to speed up merging is to just go ahead and duplicate a
> bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> file.
> This way it might be easier to see what's common code and what isn't.
> And I think offline Igor said he might prefer that way. Right Igor?
You probably mean 'x86 reduced hw' support. That's what I already
suggested for the PCI AML code during patch review. Just don't
call it generic when it's not, and place the code in the hw/i386/ directory
beside acpi-build.c. The same might apply to some other tables (i.e. the
complex cases).

In the per-patch review I gave suggestions on how to amend the series to
make it acceptable without doing complex refactoring, and pointed out
places we probably shouldn't refactor now but just duplicate, as
it's too complex or not yet clear how to generalize.

The problem with duplication is that a random contributor is not
around to clean the code up after a feature is merged, and we end up
with a bunch of messy code.

A word to the contributors:
don't do refactoring in silence; keep discussing approaches here and
suggest alternatives. That way it's easier to reach a compromise
and merge it with fewer iterations. And if you do split it into smaller
parts, the process should go even faster.

I'll send a small RSDP refactoring series for reference.

> > > Paolo
> > >   
> > > >   3. when you think about refactoring acpi into a generic API
> > > >      think about it as routines that go into a separate library
> > > >      (pure acpi spec code) and qemu/acpi glue routines and
> > > >       divide them correspondingly.    
> > >   

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-21 14:15           ` Igor Mammedov
  0 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-21 14:15 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Stefano Stabellini, Samuel Ortiz, qemu-devel,
	Shannon Zhao, qemu-arm, xen-devel, Anthony Perard, Paolo Bonzini,
	Richard Henderson, Eduardo Habkost

On Wed, 21 Nov 2018 07:35:47 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 17:37:54 +0100
> > Paolo Bonzini <pbonzini@redhat.com> wrote:
> >   
> > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > General suggestions for this series:
> > > >   1. Preferably don't do multiple changes within a patch
> > > >      neither post huge patches (unless it's pure code movement).
> > > >      (it's easy to squash patches later it necessary)
> > > >   2. Start small, pick a table generalize it and send as
> > > >      one small patchset. Tables are often independent
> > > >      and it's much easier on both author/reviewer to agree upon
> > > >      changes and rewrite it if necessary.    
> > > 
> > > How would that be done?  This series is on the bigger side, agreed, but
> > > most of it is really just code movement.  It's a starting point, having
> > > a generic ACPI library is way beyond what this is trying to do.  
> > I've tried to give suggestions how to restructure series
> > on per patch basis. In my opinion it quite possible to split
> > series in several smaller ones and it should really help with
> > making series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> > (it's more frustrating to rework large series vs smaller one)
> > 
> > If something isn't clear, it's easy to reach out to me here
> > or directly (email/irc/github) for clarification/feed back.  
> 
> I assume the #1 goal is to add reduced HW support.  So another
> option to speed up merging is to just go ahead and duplicate a
> bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> file.
> This way it might be easier to see what's common code and what isn't.
> And I think offline Igor said he might prefer that way. Right Igor?
You probably mean 'x86 reduced hw' support. That's what I already
suggested for the PCI AML code during patch review. Just don't
call it generic when it's not, and place the code in the hw/i386/ directory
beside acpi-build.c. The same might apply to some other tables (i.e. the
complex cases).

In the per-patch review I gave suggestions on how to amend the series to
make it acceptable without doing complex refactoring, and pointed out
places we probably shouldn't refactor now but just duplicate, as
it's too complex or not yet clear how to generalize.

The problem with duplication is that a random contributor is not
around to clean the code up after a feature is merged, and we end up
with a bunch of messy code.

A word to the contributors:
don't do refactoring in silence; keep discussing approaches here and
suggest alternatives. That way it's easier to reach a compromise
and merge it with fewer iterations. And if you do split it into smaller
parts, the process should go even faster.

I'll send a small RSDP refactoring series for reference.

> > > Paolo
> > >   
> > > >   3. when you think about refactoring acpi into a generic API
> > > >      think about it as routines that go into a separate library
> > > >      (pure acpi spec code) and qemu/acpi glue routines and
> > > >       divide them correspondingly.    
> > >   


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 14:15           ` Igor Mammedov
@ 2018-11-21 14:38             ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:38 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Michael S. Tsirkin, Peter Maydell, Stefano Stabellini,
	qemu-devel, Shannon Zhao, qemu-arm, xen-devel, Anthony Perard,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

Igor,

On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 07:35:47 -0500
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > >   
> > > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >      neither post huge patches (unless it's pure code movement).
> > > > >      (it's easy to squash patches later it necessary)
> > > > >   2. Start small, pick a table generalize it and send as
> > > > >      one small patchset. Tables are often independent
> > > > >      and it's much easier on both author/reviewer to agree upon
> > > > >      changes and rewrite it if necessary.    
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.  
> > > I've tried to give suggestions how to restructure series
> > > on per patch basis. In my opinion it quite possible to split
> > > series in several smaller ones and it should really help with
> > > making series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework large series vs smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feed back.  
> > 
> > I assume the #1 goal is to add reduced HW support.  So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> > This way it might be easier to see what's common code and what isn't.
> > And I think offline Igor said he might prefer that way. Right Igor?
> You mean probably 'x86 reduced hw' support. That's was what I've
> already suggested for PCI AML code during patch review. Just don't
> call it generic when it's not and place code in hw/i386/ directory beside
> acpi-build.c. It might apply to some other tables (i.e. complex cases).
> 
> On per patch review I gave suggestions how to amend series to make
> it acceptable without doing complex refactoring and pointed out
> places we probably shouldn't refactor now and just duplicate as
> it's too complex or not clear how to generalize it yet.
> 
> Problem with duplication is that a random contributor is not
> around to clean code up after a feature is merged and we end up
> with a bunch of messy code.
> 
> A word to the contributors,
> Don't do refactoring in silence, keep discussing approaches here,
> suggest alternatives. That way it's easier to reach a compromise
> and merge it with less iterations. And if you do split it in smaller
> parts, the process should go even faster.
> 
> I'll sent a small RSDP refactoring series for reference.
I was already working on the RSDP changes. Let me know if I should drop
that work too.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
@ 2018-11-21 14:38             ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:38 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

Igor,

On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 07:35:47 -0500
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > >   
> > > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >      neither post huge patches (unless it's pure code movement).
> > > > >      (it's easy to squash patches later it necessary)
> > > > >   2. Start small, pick a table generalize it and send as
> > > > >      one small patchset. Tables are often independent
> > > > >      and it's much easier on both author/reviewer to agree upon
> > > > >      changes and rewrite it if necessary.    
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.  
> > > I've tried to give suggestions how to restructure series
> > > on per patch basis. In my opinion it quite possible to split
> > > series in several smaller ones and it should really help with
> > > making series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework large series vs smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feed back.  
> > 
> > I assume the #1 goal is to add reduced HW support.  So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> > This way it might be easier to see what's common code and what isn't.
> > And I think offline Igor said he might prefer that way. Right Igor?
> You mean probably 'x86 reduced hw' support. That's was what I've
> already suggested for PCI AML code during patch review. Just don't
> call it generic when it's not and place code in hw/i386/ directory beside
> acpi-build.c. It might apply to some other tables (i.e. complex cases).
> 
> On per patch review I gave suggestions how to amend series to make
> it acceptable without doing complex refactoring and pointed out
> places we probably shouldn't refactor now and just duplicate as
> it's too complex or not clear how to generalize it yet.
> 
> Problem with duplication is that a random contributor is not
> around to clean code up after a feature is merged and we end up
> with a bunch of messy code.
> 
> A word to the contributors,
> Don't do refactoring in silence, keep discussing approaches here,
> suggest alternatives. That way it's easier to reach a compromise
> and merge it with less iterations. And if you do split it in smaller
> parts, the process should go even faster.
> 
> I'll sent a small RSDP refactoring series for reference.
I was already working on the RSDP changes. Let me know if I should drop
that work too.

Cheers,
Samuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-08 14:16     ` Igor Mammedov
@ 2018-11-21 14:42       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

Hi Igor,

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). RSDT only allow for 32-bit addressses and have thus
> > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > no longer RSDTs, although RSDTs are still supported for backward
> > compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what arm/virt board already does
> and fixes checksum bug in the later and at the same time
> without a user (within the patch).
> 
> I'd suggest redo it a way similar to FADT refactoring
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
>   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
>              (both arm and x86) which stores all data in host byte order
>   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
>
>            ... move out to aml-build.c
>   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
>       amending it to generate rev1 variant defined by revision in AcpiRsdpData
>       (commit dd1b2037a)
I agree patches #1, #2 and #5 make sense. 3 and 4 do as well, but here
you're asking about something that's out of scope for the current series.
I'll move those patches out of this series and build a new 6-patch series
as suggested.
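
For reference while reworking this, the ACPI 2.0+ RSDP layout we need
to emit looks roughly like the sketch below (field widths are per the
ACPI spec; this is not the structure the series will end up using):

#include <stdint.h>

/* Sketch of the ACPI 2.0+ RSDP layout, 36 bytes in total. */
struct AcpiRsdp2Sketch {
    char     signature[8];          /* "RSD PTR " */
    uint8_t  checksum;              /* covers the first 20 bytes (rev 1 part) */
    char     oem_id[6];
    uint8_t  revision;              /* 0 for ACPI 1.0, 2 for ACPI 2.0+ */
    uint32_t rsdt_physical_address;
    /* The fields below are only valid when revision >= 2. */
    uint32_t length;                /* length of the whole table, i.e. 36 */
    uint64_t xsdt_physical_address;
    uint8_t  extended_checksum;     /* covers the full 36-byte table */
    uint8_t  reserved[3];
} __attribute__((packed));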

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
@ 2018-11-21 14:42       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

Hi Igor,

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). RSDT only allow for 32-bit addressses and have thus
> > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > no longer RSDTs, although RSDTs are still supported for backward
> > compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what arm/virt board already does
> and fixes checksum bug in the later and at the same time
> without a user (within the patch).
> 
> I'd suggest redo it a way similar to FADT refactoring
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
>   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
>              (both arm and x86) which stores all data in host byte order
>   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
>
>            ... move out to aml-build.c
>   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
>       amending it to generate rev1 variant defined by revision in AcpiRsdpData
>       (commit dd1b2037a)
I agree patches #1, #2 and #5 make sense. 3 and 4 do as well, but here
you're asking about something that's out of scope for the current series.
I'll move those patches out of this series and build a new 6-patch series
as suggested.
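
For reference while reworking this, the ACPI 2.0+ RSDP layout we need
to emit looks roughly like the sketch below (field widths are per the
ACPI spec; this is not the structure the series will end up using):

#include <stdint.h>

/* Sketch of the ACPI 2.0+ RSDP layout, 36 bytes in total. */
struct AcpiRsdp2Sketch {
    char     signature[8];          /* "RSD PTR " */
    uint8_t  checksum;              /* covers the first 20 bytes (rev 1 part) */
    char     oem_id[6];
    uint8_t  revision;              /* 0 for ACPI 1.0, 2 for ACPI 2.0+ */
    uint32_t rsdt_physical_address;
    /* The fields below are only valid when revision >= 2. */
    uint32_t length;                /* length of the whole table, i.e. 36 */
    uint64_t xsdt_physical_address;
    uint8_t  extended_checksum;     /* covers the full 36-byte table */
    uint8_t  reserved[3];
} __attribute__((packed));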

Cheers,
Samuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
  2018-11-09 14:23     ` Igor Mammedov
@ 2018-11-21 14:42       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

Hi Igor,

On Fri, Nov 09, 2018 at 03:23:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:24 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > ACPI tables are platform and machine type and even architecture
> > agnostic, and as such we want to provide an internal ACPI API that
> > only depends on platform agnostic information.
> > 
> > For the x86 architecture, in order to build ACPI tables independently
> > from the PC or Q35 machine types, we are moving a few MachineState
> > structure fields into a machine type agnostic structure called
> > AcpiConfiguration. The structure fields we move are:
> 
> It's not obvious why new structure is needed, especially at
> the beginning of series. We probably should place this patch
> much later in the series (if we need it at all) and try
> generalize a much as possible without using it.
Patch ordering aside, this new structure is needed so that the existing
API is no longer completely bound to the PC machine type, i.e. to
"Decouple the ACPI build from the PC machine type".

It was either creating a structure to build ACPI tables in a machine
type independent fashion, or passing custom structures (or potentially
long lists of arguments) to the existing APIs. See below.


> And try to come up with an API that doesn't need centralized collection
> of data somehow related to ACPI (most of the fields here are not generic
> and applicable to a specific board/target).
> 
> For generic API, I'd prefer a separate building blocks
> like build_fadt()/... that take as an input only parameters
> necessary to compose a table/aml part with occasional board
> interface hooks instead of all encompassing AcpiConfiguration
> and board specific 'acpi_build' that would use them when/if needed.
Let's take build_madt as an example. With my patch we define:

void build_madt(GArray *table_data, BIOSLinker *linker,
                MachineState *ms, AcpiConfiguration *conf);

And you're suggesting we'd define:

void build_madt(GArray *table_data, BIOSLinker *linker,
                MachineState *ms, HotplugHandler *acpi_dev,
                bool apic_xrupt_override);

instead. Is that correct?

The pro for the latter is that, as you said, we would not need to
define a centralized structure holding all the possibly needed ACPI
related fields.
The pro for the former is that we define a pointer to all the needed
ACPI fields once and for all, and hide the details of the API in the
AML building implementation.
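
To make the comparison concrete, the former roughly means carrying
something like the structure below around (the fields shown are only
illustrative examples, not the actual AcpiConfiguration contents):

/* Illustrative sketch only -- not the real AcpiConfiguration definition. */
typedef struct AcpiConfiguration {
    HotplugHandler *acpi_dev;     /* ACPI hotplug handler, used by build_madt() */
    bool apic_xrupt_override;     /* MADT interrupt source override flag */
    /* ... any other machine-agnostic ACPI build inputs ... */
} AcpiConfiguration;

Every table builder then takes the same (ms, acpi_conf) pair, at the
cost of the structure growing fields that only one or two builders
care about.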


> We probably should split series into several smaller
> (if possible independent) ones, so people won't be scared of
> its sheer size and run away from reviewing it.
I will try to split it into smaller chunks if that helps.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
@ 2018-11-21 14:42       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

Hi Igor,

On Fri, Nov 09, 2018 at 03:23:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:24 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > ACPI tables are platform and machine type and even architecture
> > agnostic, and as such we want to provide an internal ACPI API that
> > only depends on platform agnostic information.
> > 
> > For the x86 architecture, in order to build ACPI tables independently
> > from the PC or Q35 machine types, we are moving a few MachineState
> > structure fields into a machine type agnostic structure called
> > AcpiConfiguration. The structure fields we move are:
> 
> It's not obvious why new structure is needed, especially at
> the beginning of series. We probably should place this patch
> much later in the series (if we need it at all) and try
> generalize a much as possible without using it.
Patch ordering aside, this new structure is needed so that the existing
API is no longer completely bound to the PC machine type, i.e. to
"Decouple the ACPI build from the PC machine type".

It was either creating a structure to build ACPI tables in a machine
type independent fashion, or passing custom structures (or potentially
long lists of arguments) to the existing APIs. See below.


> And try to come up with an API that doesn't need centralized collection
> of data somehow related to ACPI (most of the fields here are not generic
> and applicable to a specific board/target).
> 
> For generic API, I'd prefer a separate building blocks
> like build_fadt()/... that take as an input only parameters
> necessary to compose a table/aml part with occasional board
> interface hooks instead of all encompassing AcpiConfiguration
> and board specific 'acpi_build' that would use them when/if needed.
Let's take build_madt as an example. With my patch we define:

void build_madt(GArray *table_data, BIOSLinker *linker,
                MachineState *ms, AcpiConfiguration *conf);

And you're suggesting we'd define:

void build_madt(GArray *table_data, BIOSLinker *linker,
                MachineState *ms, HotplugHandler *acpi_dev,
                bool apic_xrupt_override);

instead. Is that correct?

The pro for the latter is that, as you said, we would not need to
define a centralized structure holding all the possibly needed ACPI
related fields.
The pro for the former is that we define a pointer to all the needed
ACPI fields once and for all, and hide the details of the API in the
AML building implementation.
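
To make the comparison concrete, the former roughly means carrying
something like the structure below around (the fields shown are only
illustrative examples, not the actual AcpiConfiguration contents):

/* Illustrative sketch only -- not the real AcpiConfiguration definition. */
typedef struct AcpiConfiguration {
    HotplugHandler *acpi_dev;     /* ACPI hotplug handler, used by build_madt() */
    bool apic_xrupt_override;     /* MADT interrupt source override flag */
    /* ... any other machine-agnostic ACPI build inputs ... */
} AcpiConfiguration;

Every table builder then takes the same (ms, acpi_conf) pair, at the
cost of the structure growing fields that only one or two builders
care about.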


> We probably should split series into several smaller
> (if possible independent) ones, so people won't be scared of
> its sheer size and run away from reviewing it.
I will try to split it into smaller chunks if that helps.

Cheers,
Samuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API
  2018-11-09 14:27     ` Igor Mammedov
@ 2018-11-21 14:42       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: qemu-devel, Shannon Zhao, Stefano Stabellini, Anthony Perard,
	Richard Henderson, Marcel Apfelbaum, xen-devel, Paolo Bonzini,
	Michael S. Tsirkin, qemu-arm, Peter Maydell, Eduardo Habkost

On Fri, Nov 09, 2018 at 03:27:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:25 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > This is going to be needed by the Hardware-reduced ACPI routines.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> the patch is probably misplaced withing series,
> if there is an external user within this series then this patch should
> be squashed there, otherwise it doesn't belong to this series.
hw/acpi/reduced.c needs it; I forgot to remove that patch when removing
the hardware-reduced code from the series. I will remove it.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API
@ 2018-11-21 14:42       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 14:42 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Marcel Apfelbaum, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Fri, Nov 09, 2018 at 03:27:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:25 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > This is going to be needed by the Hardware-reduced ACPI routines.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> the patch is probably misplaced withing series,
> if there is an external user within this series then this patch should
> be squashed there, otherwise it doesn't belong to this series.
hw/acpi/reduced.c needs it; I forgot to remove that patch when removing
the hardware-reduced code from the series. I will remove it.

Cheers,
Samuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines
  2018-11-09 13:37   ` [Qemu-devel] " Igor Mammedov
@ 2018-11-21 15:00       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 15:00 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

Hi Igor,

On Fri, Nov 09, 2018 at 02:37:33PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:30 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > From: Yang Zhong <yang.zhong@intel.com>
> > 
> > Most of the AML build routines under acpi-build are not even
> > architecture specific. They can be moved to the more generic hw/acpi
> > folder where they could be shared across machine types and
> > architectures.
> 
> I'd prefer if won't pull into aml-build PCI specific headers,
> Suggest to create hw/acpi/pci.c and move generic PCI related
> code there, with corresponding header the would export API
> (preferably without PCI dependencies in it)
> 
> 
> Also patch is too big and does too much at a time.
> Here I'd suggest to split it in smaller parts to make it more digestible
> 
> 1. split it in 3 parts
>     * MCFG
>     * CRS
>     * PTR
> 2. mcfg between x86 and ARM look pretty much the same with ARM
>    open codding bus number calculation and missing migration hack
>    * a patch to make bus number calculation in ARM the same as x86
>    * a patch to bring migration hack (dummy MCFG table in case it's disabled)
>      it's questionable if we actually need it in generic,
>      we most likely need it for legacy machines that predate
>      resizable MemeoryRegion, but we probably don't need it for
>      later machines as problem doesn't exists there.
>      So it might be better to push hack out from generic code
>      to a legacy caller and keep generic MCFG clean.
>      (this patch might be better at the beginning of the series as
>       it might affect acpi test results, and might need an update to reference tables
>       I don't really sure)
>    * at this point arm and x86 impl. would be the same so
>      a patch to move mcfg build routine to a generic place and replace
>      x86/arm with a single impl.
>    * a patch to convert mcfg build routine to build_append_int_noprefix() API
>      and drop AcpiTableMcfg structure
Ok, I'll build another patch series for that one then.
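
Just so we are on the same page about the end state, here is a rough
sketch of what I understand the single, generalized MCFG builder would
look like once converted to build_append_int_noprefix() (sketch only,
not the code I will actually post):

/* Sketch: one generic MCFG builder shared by the x86 and ARM targets. */
static void build_mcfg_sketch(GArray *table_data, BIOSLinker *linker,
                              AcpiMcfgInfo *info)
{
    int mcfg_start = table_data->len;

    /* Standard ACPI table header, filled in by build_header() below. */
    acpi_data_push(table_data, sizeof(AcpiTableHeader));
    build_append_int_noprefix(table_data, 0, 8);               /* Reserved */
    /* Configuration Space Base Address Allocation Structure */
    build_append_int_noprefix(table_data, info->mcfg_base, 8); /* Base Address */
    build_append_int_noprefix(table_data, 0, 2);               /* Segment Group */
    build_append_int_noprefix(table_data, 0, 1);               /* Start Bus */
    build_append_int_noprefix(table_data,
        PCIE_MMCFG_BUS(info->mcfg_size - 1), 1);               /* End Bus */
    build_append_int_noprefix(table_data, 0, 4);               /* Reserved */

    build_header(linker, table_data,
                 (void *)(table_data->data + mcfg_start), "MCFG",
                 table_data->len - mcfg_start, 1, NULL, NULL);
}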

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines
@ 2018-11-21 15:00       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 15:00 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

Hi Igor,

On Fri, Nov 09, 2018 at 02:37:33PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:30 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > From: Yang Zhong <yang.zhong@intel.com>
> > 
> > Most of the AML build routines under acpi-build are not even
> > architecture specific. They can be moved to the more generic hw/acpi
> > folder where they could be shared across machine types and
> > architectures.
> 
> I'd prefer if won't pull into aml-build PCI specific headers,
> Suggest to create hw/acpi/pci.c and move generic PCI related
> code there, with corresponding header the would export API
> (preferably without PCI dependencies in it)
> 
> 
> Also patch is too big and does too much at a time.
> Here I'd suggest to split it in smaller parts to make it more digestible
> 
> 1. split it in 3 parts
>     * MCFG
>     * CRS
>     * PTR
> 2. mcfg between x86 and ARM look pretty much the same with ARM
>    open codding bus number calculation and missing migration hack
>    * a patch to make bus number calculation in ARM the same as x86
>    * a patch to bring migration hack (dummy MCFG table in case it's disabled)
>      it's questionable if we actually need it in generic,
>      we most likely need it for legacy machines that predate
>      resizable MemeoryRegion, but we probably don't need it for
>      later machines as problem doesn't exists there.
>      So it might be better to push hack out from generic code
>      to a legacy caller and keep generic MCFG clean.
>      (this patch might be better at the beginning of the series as
>       it might affect acpi test results, and might need an update to reference tables
>       I don't really sure)
>    * at this point arm and x86 impl. would be the same so
>      a patch to move mcfg build routine to a generic place and replace
>      x86/arm with a single impl.
>    * a patch to convert mcfg build routine to build_append_int_noprefix() API
>      and drop AcpiTableMcfg structure
Ok, I'll build another patch series for that one then.
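
Just so we are on the same page about the end state, here is a rough
sketch of what I understand the single, generalized MCFG builder would
look like once converted to build_append_int_noprefix() (sketch only,
not the code I will actually post):

/* Sketch: one generic MCFG builder shared by the x86 and ARM targets. */
static void build_mcfg_sketch(GArray *table_data, BIOSLinker *linker,
                              AcpiMcfgInfo *info)
{
    int mcfg_start = table_data->len;

    /* Standard ACPI table header, filled in by build_header() below. */
    acpi_data_push(table_data, sizeof(AcpiTableHeader));
    build_append_int_noprefix(table_data, 0, 8);               /* Reserved */
    /* Configuration Space Base Address Allocation Structure */
    build_append_int_noprefix(table_data, info->mcfg_base, 8); /* Base Address */
    build_append_int_noprefix(table_data, 0, 2);               /* Segment Group */
    build_append_int_noprefix(table_data, 0, 1);               /* Start Bus */
    build_append_int_noprefix(table_data,
        PCIE_MMCFG_BUS(info->mcfg_size - 1), 1);               /* End Bus */
    build_append_int_noprefix(table_data, 0, 4);               /* Reserved */

    build_header(linker, table_data,
                 (void *)(table_data->data + mcfg_start), "MCFG",
                 table_data->len - mcfg_start, 1, NULL, NULL);
}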

Cheers,
Samuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters
  2018-11-13 15:59     ` Igor Mammedov
@ 2018-11-21 15:43       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 15:43 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Tue, Nov 13, 2018 at 04:59:18PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:33 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > This is going to be needed by the hardware reduced implementation, so
> > let's export it.
> > Once the ACPI builder methods and getters will be implemented, the
> > acpi_get_pci_host() implementation will become hardware agnostic.
> > 
> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > ---
> >  include/hw/acpi/aml-build.h |  2 ++
> >  hw/acpi/aml-build.c         | 43 +++++++++++++++++++++++++++++++++
> >  hw/i386/acpi-build.c        | 47 ++-----------------------------------
> >  3 files changed, 47 insertions(+), 45 deletions(-)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index c27c0935ae..fde2785b9a 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
> >               const char *oem_id, const char *oem_table_id);
> >  void *acpi_data_push(GArray *table_data, unsigned size);
> >  unsigned acpi_data_len(GArray *table);
> > +Object *acpi_get_pci_host(void);
> > +void acpi_get_pci_holes(Range *hole, Range *hole64);
> >  /* Align AML blob size to a multiple of 'align' */
> >  void acpi_align_size(GArray *blob, unsigned align);
> >  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index 2b9a636e75..b8e32f15f7 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
> >      g_array_free(tables->vmgenid, mfre);
> >  }
> 
> > +/*
> > + * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> > + */
> > +Object *acpi_get_pci_host(void)
> > +{
> > +    PCIHostState *host;
> > +
> > +    host = OBJECT_CHECK(PCIHostState,
> > +                        object_resolve_path("/machine/i440fx", NULL),
> > +                        TYPE_PCI_HOST_BRIDGE);
> > +    if (!host) {
> > +        host = OBJECT_CHECK(PCIHostState,
> > +                            object_resolve_path("/machine/q35", NULL),
> > +                            TYPE_PCI_HOST_BRIDGE);
> > +    }
> > +
> > +    return OBJECT(host);
> > +}
> in general aml-build.c is a place to put ACPI spec primitives,
> so I'd suggest to move it somewhere else.
> 
> Considering it's x86 code (so far), maybe move it to something like
> hw/i386/acpi-pci.c
> 
> Also it might be good to get rid of acpi_get_pci_host() and pass
> a pointer to pci_host as acpi_setup() an argument, since it's static
> for life of boar we can keep it in AcpiBuildState, and reuse for
> mfg/pci_hole/pci bus accesses.
That's what I'm trying to do with patches #23 and 24, but through the
ACPI configuration structure. I could try using the build state instead,
as it's platform agnostic as well.
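
Something along these lines, for instance (a rough sketch; the cached
field and the modified acpi_setup() signature are only illustrations,
not what the series currently does):

/* Sketch: hand the PCI host in once and cache it for the build code. */
typedef struct AcpiBuildState {
    /* ... existing fields ... */
    Object *pci_host;          /* cached once, reused for MCFG/holes/bus walks */
} AcpiBuildState;

void acpi_setup(Object *pci_host)  /* illustrative signature only */
{
    AcpiBuildState *build_state = g_malloc0(sizeof(*build_state));

    build_state->pci_host = pci_host;
    /* ... the rest of the existing setup ... */
}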

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters
@ 2018-11-21 15:43       ` Samuel Ortiz
  0 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 15:43 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Tue, Nov 13, 2018 at 04:59:18PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:33 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > This is going to be needed by the hardware reduced implementation, so
> > let's export it.
> > Once the ACPI builder methods and getters will be implemented, the
> > acpi_get_pci_host() implementation will become hardware agnostic.
> > 
> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > ---
> >  include/hw/acpi/aml-build.h |  2 ++
> >  hw/acpi/aml-build.c         | 43 +++++++++++++++++++++++++++++++++
> >  hw/i386/acpi-build.c        | 47 ++-----------------------------------
> >  3 files changed, 47 insertions(+), 45 deletions(-)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index c27c0935ae..fde2785b9a 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
> >               const char *oem_id, const char *oem_table_id);
> >  void *acpi_data_push(GArray *table_data, unsigned size);
> >  unsigned acpi_data_len(GArray *table);
> > +Object *acpi_get_pci_host(void);
> > +void acpi_get_pci_holes(Range *hole, Range *hole64);
> >  /* Align AML blob size to a multiple of 'align' */
> >  void acpi_align_size(GArray *blob, unsigned align);
> >  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index 2b9a636e75..b8e32f15f7 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
> >      g_array_free(tables->vmgenid, mfre);
> >  }
> 
> > +/*
> > + * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> > + */
> > +Object *acpi_get_pci_host(void)
> > +{
> > +    PCIHostState *host;
> > +
> > +    host = OBJECT_CHECK(PCIHostState,
> > +                        object_resolve_path("/machine/i440fx", NULL),
> > +                        TYPE_PCI_HOST_BRIDGE);
> > +    if (!host) {
> > +        host = OBJECT_CHECK(PCIHostState,
> > +                            object_resolve_path("/machine/q35", NULL),
> > +                            TYPE_PCI_HOST_BRIDGE);
> > +    }
> > +
> > +    return OBJECT(host);
> > +}
> in general aml-build.c is a place to put ACPI spec primitives,
> so I'd suggest to move it somewhere else.
> 
> Considering it's x86 code (so far), maybe move it to something like
> hw/i386/acpi-pci.c
> 
> Also it might be good to get rid of acpi_get_pci_host() and pass
> a pointer to pci_host as acpi_setup() an argument, since it's static
> for life of boar we can keep it in AcpiBuildState, and reuse for
> mfg/pci_hole/pci bus accesses.
That's what I'm trying to do with patches #23 and 24, but through the
ACPI configuration structure. I could try using the build state instead,
as it's platform agnostic as well.
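
Something along these lines, for instance (a rough sketch; the cached
field and the modified acpi_setup() signature are only illustrations,
not what the series currently does):

/* Sketch: hand the PCI host in once and cache it for the build code. */
typedef struct AcpiBuildState {
    /* ... existing fields ... */
    Object *pci_host;          /* cached once, reused for MCFG/holes/bus walks */
} AcpiBuildState;

void acpi_setup(Object *pci_host)  /* illustrative signature only */
{
    AcpiBuildState *build_state = g_malloc0(sizeof(*build_state));

    build_state->pci_host = pci_host;
    /* ... the rest of the existing setup ... */
}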

Cheers,
Samuel.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
  2018-11-14 10:55     ` Igor Mammedov
@ 2018-11-21 23:12       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 23:12 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Marcel Apfelbaum, Yang Zhong, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Rob Bradford, Michael S. Tsirkin, qemu-devel,
	Shannon Zhao, qemu-arm, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

Hi Igor,

On Wed, Nov 14, 2018 at 11:55:37AM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:34 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > From: Yang Zhong <yang.zhong@intel.com>
> > 
> > The AML build routines for the PCI host bridge and the corresponding
> > DSDT addition are neither x86 nor PC machine type specific.
> > We can move them to the architecture agnostic hw/acpi folder, and by
> > carrying all the needed information through a new AcpiPciBus structure,
> > we can make them PC machine type independent.
> 
> I'm don't know anything about PCI, but functional changes doesn't look
> correct to me.
>
> See more detailed comments below.
> 
> Marcel,
> could you take a look on this patch (in particular main csr changes), pls?
> 
> > 
> > Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> > Signed-off-by: Rob Bradford <robert.bradford@intel.com>
> > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > ---
> >  include/hw/acpi/aml-build.h |   8 ++
> >  hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
> >  hw/i386/acpi-build.c        | 115 ++------------------------
> >  3 files changed, 173 insertions(+), 107 deletions(-)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index fde2785b9a..1861e37ebf 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
> >      uint32_t mcfg_size;
> >  } AcpiMcfgInfo;
> >  
> > +typedef struct AcpiPciBus {
> > +    PCIBus *pci_bus;
> > +    Range *pci_hole;
> > +    Range *pci_hole64;
> > +} AcpiPciBus;
> Again, this and all below is not aml-build material.
> Consider adding/using pci specific acpi file for it.
> 
> Also even though pci AML in arm/virt is to a large degree a subset
> of x86 target and it would be much better to unify ARM part with x86,
> it probably will be to big/complex of a change if we take on it in
> one go.
> 
> So not to derail you from the goal too much, we probably should
> generalize this a little bit less, limiting refactoring to x86
> target for now.
So keeping it under i386 means it won't be accessible through hw/acpi/,
which means we won't be able to have a generic hw/acpi/reduced.c
implementation. From our perspective, this is the problem with keeping
things under i386 while we're not sure yet how generic they are: the
code still won't be shareable by a generic hardware-reduced ACPI
implementation, which means we'd temporarily end up with yet another
hardware-reduced ACPI implementation, under hw/i386 this time.
I guess this is what Michael meant by keeping some parts of the code
duplicated for now.

I feel it'd be easier to move those APIs to a shareable location, to
make it easier for ARM to consume them even if they're not entirely
generic yet. But you guys are the maintainers, and if you think we
should restrict the generalization to x86 only for now, we can go for it.

> For example, move generic x86 pci parts to hw/i386/acpi-pci.[hc],
> and structure it so that building blocks in acpi-pci.c could be
> reused for x86 reduced profile later.
> Once it's been done, it might be easier and less complex to
> unify a bit more generic code in i386/acpi-pci.c with corresponding
> ARM code.
> 
> Patch is too big and should be split into smaller logical chunks
> and you should separate code movement vs functional changes you're
> a making here.
> 
> Once you split patch properly, it should be easier to assess
> changes.
> 
> >  typedef struct CrsRangeEntry {
> >      uint64_t base;
> >      uint64_t limit;
> > @@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
> >  void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
> >  Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
> >  Aml *build_prt(bool is_pci0_prt);
> > +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
> > +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
> >  void crs_range_set_init(CrsRangeSet *range_set);
> >  Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
> >  void crs_replace_with_free_ranges(GPtrArray *ranges,
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index b8e32f15f7..869ed70db3 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -29,6 +29,19 @@
> >  #include "hw/pci/pci_bus.h"
> >  #include "qemu/range.h"
> >  #include "hw/pci/pci_bridge.h"
> > +#include "hw/i386/pc.h"
> > +#include "sysemu/tpm.h"
> > +#include "hw/acpi/tpm.h"
> > +
> > +#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
> > +#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
> > +#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
> > +#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
> > +#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
> > +#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
> > +#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
> > +#define IO_0_LEN                           0xcf8
> > +#define VGA_MEM_LEN                        0x20000
> >  
> >  static GArray *build_alloc_array(void)
> >  {
> > @@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
> >      return method;
> >  }
> >  
> > +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
> name doesn't reflect exactly what function does,
> it builds device descriptions for expander buses (including their csr)
> and then it builds csr for for main pci host but not pci device description.
> 
> I'd suggest to split out expander buses part into separate function
> that returns an expander bus device description, updates crs_range_set
> and let the caller to enumerate buses and add descriptions to dsdt.
> 
> Then after it we could do a generic csr generation function for the main pci host
> if it's possible at all (main pci host csr seems heavily board depended)
> 
> Instead of taking table and adding stuff directly in to it
> it should be cleaner to take as argument empty csr (crs = aml_resource_template();)
> add stuff to it and let the caller to add/extend csr as/where necessary.
> 
> > +{
> > +    CrsRangeEntry *entry;
> > +    Aml *scope, *dev, *crs;
> > +    CrsRangeSet crs_range_set;
> > +    Range *pci_hole = NULL;
> > +    Range *pci_hole64 = NULL;
> > +    PCIBus *bus = NULL;
> > +    int root_bus_limit = 0xFF;
> > +    int i;
> > +
> > +    bus = pci_host->pci_bus;
> > +    assert(bus);
> > +    pci_hole = pci_host->pci_hole;
> > +    pci_hole64 = pci_host->pci_hole64;
> > +
> > +    crs_range_set_init(&crs_range_set);
> > +    QLIST_FOREACH(bus, &bus->child, sibling) {
> > +        uint8_t bus_num = pci_bus_num(bus);
> > +        uint8_t numa_node = pci_bus_numa_node(bus);
> > +
> > +        /* look only for expander root buses */
> > +        if (!pci_bus_is_root(bus)) {
> > +            continue;
> > +        }
> > +
> > +        if (bus_num < root_bus_limit) {
> > +            root_bus_limit = bus_num - 1;
> > +        }
> > +
> > +        scope = aml_scope("\\_SB");
> > +        dev = aml_device("PC%.02X", bus_num);
> > +        aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> > +        aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> > +        aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> > +        if (pci_bus_is_express(bus)) {
> > +            aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> > +            aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> > +            aml_append(dev, build_osc_method(0x1F));
> > +        }
> > +        if (numa_node != NUMA_NODE_UNASSIGNED) {
> > +            aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> > +        }
> > +
> > +        aml_append(dev, build_prt(false));
> > +        crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
> > +        aml_append(dev, aml_name_decl("_CRS", crs));
> > +        aml_append(scope, dev);
> > +        aml_append(table, scope);
> > +    }
> > +    scope = aml_scope("\\_SB.PCI0");
> > +    /* build PCI0._CRS */
> > +    crs = aml_resource_template();
> > +    /* set the pcie bus num */
> > +    aml_append(crs,
> > +        aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
> > +                            0x0000, 0x0, root_bus_limit,
> > +                            0x0000, root_bus_limit + 1));
> 
> vvvv
> > +    aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
> > +                           PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
> > +    /* set the io region 0 in pci host bridge */
> > +    aml_append(crs,
> > +        aml_word_io(AML_MIN_FIXED, AML_MAX_FIXED,
> > +                    AML_POS_DECODE, AML_ENTIRE_RANGE,
> > +                    0x0000, PCI_HOST_BRIDGE_IO_0_MIN_ADDR,
> > +                    PCI_HOST_BRIDGE_IO_0_MAX_ADDR, 0x0000, IO_0_LEN));
> > +
> > +    /* set the io region 1 in pci host bridge */
> > +    crs_replace_with_free_ranges(crs_range_set.io_ranges,
> > +                                 PCI_HOST_BRIDGE_IO_1_MIN_ADDR,
> > +                                 PCI_HOST_BRIDGE_IO_1_MAX_ADDR);
> above code doesn't look as just a movement, it's something totally new,
> so it should be in it's own patch with a justification why it's ok
> to replace concrete addresses with some kind of window.
Ah I see your point now. Yes, I agree this should be in a separate
patch.
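
For the expander-bus split you suggested above, I read it as extracting
something like this (a rough sketch that only reuses calls already in
the patch; the function name is made up):

/* Sketch: describe one expander root bus and let the caller place it. */
static Aml *build_expander_bus_dev(PCIBus *bus, CrsRangeSet *crs_range_set)
{
    uint8_t bus_num = pci_bus_num(bus);
    uint8_t numa_node = pci_bus_numa_node(bus);
    Aml *dev = aml_device("PC%.02X", bus_num);

    aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
    aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
    aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
    if (numa_node != NUMA_NODE_UNASSIGNED) {
        aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
    }
    aml_append(dev, build_prt(false));
    aml_append(dev, aml_name_decl("_CRS",
               build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), crs_range_set)));

    /* The caller enumerates the buses and adds each device to the DSDT. */
    return dev;
}

The main PCI0 _CRS would then stay with the board code that knows the
actual address layout.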

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter
  2018-11-15 12:36     ` Igor Mammedov
@ 2018-11-21 23:21       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 23:21 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

Hi Igor,

On Thu, Nov 15, 2018 at 01:36:58PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:35 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > From: Yang Zhong <yang.zhong@intel.com>
> > 
> > The ACPI MCFG getter is not x86 specific and could be called from
> > anywhere within generic ACPI API, so let's export it.
> So far it's x86 or more exactly q35 specific thing,
It's property based, and it uses a generic PCIe property afaict.
So it's up to each machine type to define those properties.
I'm curious here: what's the idiomatic way to define a machine
setting/attribute/property, let each machine instance define it (or not),
and make it available at run time?
Would you be getting the PCI host pointer from the ACPI build state and
getting that information back from there?
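
As a rough illustration of that property-based approach (a sketch, not
code from this series): the getter only needs the PCIe host bridge
object and reads the window through its QOM properties, so any machine
type that exposes the PCIE_HOST_MCFG_BASE/PCIE_HOST_MCFG_SIZE properties
the q35 host bridge already provides gets MCFG for free. The function
name is made up and error handling is trimmed:

    /* Sketch: fill AcpiMcfgInfo from the PCIe host bridge's QOM
     * properties; returns false if the machine exposes no MCFG window. */
    static bool acpi_get_mcfg_sketch(Object *pci_host, AcpiMcfgInfo *mcfg)
    {
        Error *err = NULL;

        mcfg->mcfg_base = object_property_get_uint(pci_host,
                                                   PCIE_HOST_MCFG_BASE, &err);
        if (err) {
            error_free(err);
            return false;
        }
        mcfg->mcfg_size = object_property_get_uint(pci_host,
                                                   PCIE_HOST_MCFG_SIZE, &err);
        if (err) {
            error_free(err);
            return false;
        }
        return true;
    }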

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method
  2018-11-15 13:28     ` Igor Mammedov
@ 2018-11-21 23:27       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 23:27 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Thu, Nov 15, 2018 at 02:28:54PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:38 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > This is the standard way of building SRAT on x86 platforms. But future
> > machine types could decide to define their own custom SRAT build method
> > through the ACPI builder methods.
> > Moreover, we will also need to reach build_srat() from outside of
> > acpi-build in order to use it as the ACPI builder SRAT build method.
> SRAT is usually highly machine specific (memory holes, layout, guest OS
> specific quirks) so it's hard to generalize it.
Hence the need for an SRAT builder interface.

> I'd  drop SRAT related patches from this series and introduce
> i386/virt specific SRAT when you post patches for it.
virt uses the existing i386 build_srat() routine; there's nothing
special about it. So this would be purely duplicated code.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface
  2018-11-16 16:02     ` Igor Mammedov
@ 2018-11-21 23:57       ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-21 23:57 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Fri, Nov 16, 2018 at 05:02:26PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:43 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > In order to decouple ACPI APIs from specific machine types, we are
> > creating an ACPI builder interface that each ACPI platform can choose to
> > implement.
> > This way, a new machine type can re-use the high level ACPI APIs and
> > define some custom table build methods, without having to duplicate most
> > of the existing implementation only to add small variations to it.
> I'm not sure about the motivation behind such high-level APIs;
> what's obvious here is an extra level of indirection for no clear gain.
> 
> Yep, using table callbacks one can attempt to generalize
> acpi_setup() and help boards decide which tables not to build
> (MCFG comes to mind). But I'm not convinced that acpi_setup()
> could be cleanly generalized as a whole (probably some parts but
> not everything).
It's more about generalizing acpi_build(), and then having one
acpi_setup() for non-hardware-reduced ACPI and an acpi_reduced_setup()
for hardware-reduced ACPI.

Right now there's no generalization at all, but with this patch we could
already use the same acpi_reduced_setup() implementation for both arm
and i386/virt.
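
To make that shape concrete, here is a deliberately simplified sketch of
the idea; the AcpiBuildMethods/acpi_reduced_build names and signatures
are invented for illustration and do not match the posted patches one to
one:

    /* Sketch: each machine type installs the table builders it wants,
     * and a shared hardware-reduced build loop calls whatever is set.
     * (GArray from glib, BIOSLinker from hw/acpi/bios-linker-loader.h,
     * MachineState from hw/boards.h.) */
    typedef struct AcpiBuildMethods {
        void (*madt)(GArray *tbl, BIOSLinker *linker, MachineState *ms);
        void (*srat)(GArray *tbl, BIOSLinker *linker, MachineState *ms);
    } AcpiBuildMethods;

    static void acpi_reduced_build(GArray *tables_blob, BIOSLinker *linker,
                                   MachineState *ms,
                                   const AcpiBuildMethods *m)
    {
        if (m->madt) {
            m->madt(tables_blob, linker, ms);
        }
        if (m->srat) {
            m->srat(tables_blob, linker, ms);  /* e.g. i386's build_srat */
        }
        /* FADT, XSDT, RSDP and the table-offsets/loader glue go here. */
    }

i386/virt and arm/virt would then differ only in which methods they
install, which is exactly the duplication this series tries to avoid.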

> so it's a minor benefit for the extra headache of
> figuring out which callback will actually be called when reading the code.
This is the same complexity that already exists for essentially all
current interfaces.

> However, if a board needs a slightly different table, it will have to
> duplicate an existing one and then modify it to suit its needs.
> 
> To me it pretty much looks the same as the build_foo() calls
> we use now, but with an extra indirection level and then
> duplicating the latter for usage in another board in a slightly
> different manner.
I believe what you're trying to say is that this abstraction may be
useful, but you're arguing that the granularity is not properly defined?
Am I getting this right?

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 14:15           ` Igor Mammedov
@ 2018-11-22  0:17             ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-22  0:17 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Michael S. Tsirkin, Peter Maydell, Stefano Stabellini,
	qemu-devel, Shannon Zhao, qemu-arm, xen-devel, Anthony Perard,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 07:35:47 -0500
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > >   
> > > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >      neither post huge patches (unless it's pure code movement).
> > > > >      (it's easy to squash patches later it necessary)
> > > > >   2. Start small, pick a table generalize it and send as
> > > > >      one small patchset. Tables are often independent
> > > > >      and it's much easier on both author/reviewer to agree upon
> > > > >      changes and rewrite it if necessary.    
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.  
> > > I've tried to give suggestions how to restructure series
> > > on per patch basis. In my opinion it quite possible to split
> > > series in several smaller ones and it should really help with
> > > making series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework large series vs smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feed back.  
> > 
> > I assume the #1 goal is to add reduced HW support.  So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> > This way it might be easier to see what's common code and what isn't.
> > And I think offline Igor said he might prefer that way. Right Igor?
> You mean probably 'x86 reduced hw' support.
That's what this is going to eventually look like, unfortunately.
And there's no technical reason why we could not have arch-agnostic
hw-reduced support, so this should only be an intermediate step.

> That's was what I've
> already suggested for PCI AML code during patch review. Just don't
> call it generic when it's not and place code in hw/i386/ directory beside
> acpi-build.c. It might apply to some other tables (i.e. complex cases).
> 
> On per patch review I gave suggestions how to amend series to make
> it acceptable without doing complex refactoring and pointed out
> places we probably shouldn't refactor now and just duplicate as
> it's too complex or not clear how to generalize it yet.
I think I got the idea, and I will try to rework this series according
to these directions.


> Problem with duplication is that a random contributor is not
> around to clean code up after a feature is merged and we end up
> with a bunch of messy code.
I'd argue that the same could be said of a potential "x86 hw-reduced"
solution. The same random contributor may not be around to push it to
the next step and make it more generic. I'd also argue we're not
planning to be random contributors, dropping code to the mailing list
and leaving.


> A word to the contributors,
> Don't do refactoring in silence, keep discussing approaches here,
> suggest alternatives.
Practically speaking, a large chunk of the NEMU work relies on having a
generic hardware-reduced ACPI implementation. We could not block the
project while waiting for an upstream-acceptable solution for it, so we
had to pick one route.
In retrospect, I think we should have gone the self-contained, fully
duplicated route and moved on with the rest of the NEMU work. Upstream
discussions could then have happened in parallel without much disruption
to the project.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition
  2018-11-21 14:38             ` Samuel Ortiz
@ 2018-11-22 10:39               ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-22 10:39 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Michael S. Tsirkin, Peter Maydell, Stefano Stabellini,
	qemu-devel, Shannon Zhao, qemu-arm, xen-devel, Anthony Perard,
	Paolo Bonzini, Richard Henderson, Eduardo Habkost

On Wed, 21 Nov 2018 15:38:16 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> Igor,
> 
> On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> > On Wed, 21 Nov 2018 07:35:47 -0500
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >   
> > > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:  
> > > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > > Paolo Bonzini <pbonzini@redhat.com> wrote:
> > > >     
> > > > > On 16/11/18 17:29, Igor Mammedov wrote:    
> > > > > > General suggestions for this series:
> > > > > >   1. Preferably don't do multiple changes within a patch
> > > > > >      neither post huge patches (unless it's pure code movement).
> > > > > >      (it's easy to squash patches later it necessary)
> > > > > >   2. Start small, pick a table generalize it and send as
> > > > > >      one small patchset. Tables are often independent
> > > > > >      and it's much easier on both author/reviewer to agree upon
> > > > > >      changes and rewrite it if necessary.      
> > > > > 
> > > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > > most of it is really just code movement.  It's a starting point, having
> > > > > a generic ACPI library is way beyond what this is trying to do.    
> > > > I've tried to give suggestions how to restructure series
> > > > on per patch basis. In my opinion it quite possible to split
> > > > series in several smaller ones and it should really help with
> > > > making series cleaner and easier/faster to review/amend/merge
> > > > vs what we have in v5.
> > > > (it's more frustrating to rework large series vs smaller one)
> > > > 
> > > > If something isn't clear, it's easy to reach out to me here
> > > > or directly (email/irc/github) for clarification/feed back.    
> > > 
> > > I assume the #1 goal is to add reduced HW support.  So another
> > > option to speed up merging is to just go ahead and duplicate a
> > > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > > file.
> > > This way it might be easier to see what's common code and what isn't.
> > > And I think offline Igor said he might prefer that way. Right Igor?  
> > You mean probably 'x86 reduced hw' support. That's was what I've
> > already suggested for PCI AML code during patch review. Just don't
> > call it generic when it's not and place code in hw/i386/ directory beside
> > acpi-build.c. It might apply to some other tables (i.e. complex cases).
> > 
> > On per patch review I gave suggestions how to amend series to make
> > it acceptable without doing complex refactoring and pointed out
> > places we probably shouldn't refactor now and just duplicate as
> > it's too complex or not clear how to generalize it yet.
> > 
> > Problem with duplication is that a random contributor is not
> > around to clean code up after a feature is merged and we end up
> > with a bunch of messy code.
> > 
> > A word to the contributors,
> > Don't do refactoring in silence, keep discussing approaches here,
> > suggest alternatives. That way it's easier to reach a compromise
> > and merge it with less iterations. And if you do split it in smaller
> > parts, the process should go even faster.
> > 
> > I'll sent a small RSDP refactoring series for reference.  
> I was already working on the RSDP changes. Let me know if I should drop
> that work too.
Go ahead,
you can reuse the RSDP fixes I've just posted (you are CCed)
if you haven't already written them on your own.

 
> Cheers,
> Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type
  2018-11-21 14:42       ` Samuel Ortiz
@ 2018-11-22 15:13         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-22 15:13 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

On Wed, 21 Nov 2018 15:42:37 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> Hi Igor,
> 
> On Fri, Nov 09, 2018 at 03:23:16PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:24 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > ACPI tables are platform and machine type and even architecture
> > > agnostic, and as such we want to provide an internal ACPI API that
> > > only depends on platform agnostic information.
> > > 
> > > For the x86 architecture, in order to build ACPI tables independently
> > > from the PC or Q35 machine types, we are moving a few MachineState
> > > structure fields into a machine type agnostic structure called
> > > AcpiConfiguration. The structure fields we move are:  
> > 
> > It's not obvious why new structure is needed, especially at
> > the beginning of series. We probably should place this patch
> > much later in the series (if we need it at all) and try
> > generalize a much as possible without using it.  
> Patches order set aside, this new structure is needed to make the
> existing API not completely bound to the pc machine type anymore and
> "Decouple the ACPI build from the PC machine type".
> 
> It was either creating a structure to build ACPI tables in a machine
> type independent fashion, or pass custom structures (or potentially long
> list of arguments) to the existing APIs. See below.
> 
> 
> > And try to come up with an API that doesn't need centralized collection
> > of data somehow related to ACPI (most of the fields here are not generic
> > and applicable to a specific board/target).
> > 
> > For generic API, I'd prefer a separate building blocks
> > like build_fadt()/... that take as an input only parameters
> > necessary to compose a table/aml part with occasional board
> > interface hooks instead of all encompassing AcpiConfiguration
> > and board specific 'acpi_build' that would use them when/if needed.  
> Let's take build_madt as an example. With my patch we define:
> 
> void build_madt(GArray *table_data, BIOSLinker *linker,
>                 MachineState *ms, AcpiConfiguration *conf);
> 
> And you're suggesting we'd define:
> 
> void build_madt(GArray *table_data, BIOSLinker *linker,
>                 MachineState *ms, HotplugHandler *acpi_dev,
>                 bool apic_xrupt_override);
> 
> instead. Is that correct?
In general, yes,
and let acpi_build() fish out that info from the board somehow.

In the case of build_madt(), I doubt we can generalize it across targets.
What we can do with it, though, is generalize the entries that are
placed into it, i.e. create helpers for building them instead of
open-coding them like now, something like:

void build_append_madt_apic(*table, uid, apic_id, flags);
void build_append_madt_x2apic(*table, uid, apic_id, flags);
void build_append_madt_ioapic(*table, io_apic_id, io_apic_addr, interrupt);
...
An example of converting the IOAPIC entry to the build_append_int_noprefix() API:

diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index af8e023..5911b94 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -242,16 +242,6 @@ struct AcpiMadtProcessorApic {
 } QEMU_PACKED;
 typedef struct AcpiMadtProcessorApic AcpiMadtProcessorApic;
 
-struct AcpiMadtIoApic {
-    ACPI_SUB_HEADER_DEF
-    uint8_t  io_apic_id;             /* I/O APIC ID */
-    uint8_t  reserved;               /* Reserved - must be zero */
-    uint32_t address;                /* APIC physical address */
-    uint32_t interrupt;              /* Global system interrupt where INTI
-                                 * lines start */
-} QEMU_PACKED;
-typedef struct AcpiMadtIoApic AcpiMadtIoApic;
-
 struct AcpiMadtIntsrcovr {
     ACPI_SUB_HEADER_DEF
     uint8_t  bus;
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 6c36903..b28c2ce 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -409,6 +409,9 @@ build_append_gas_from_struct(GArray *table, const struct AcpiGenericAddress *s)
                      s->access_width, s->address);
 }
 
+void build_append_madt_ioapic(GArray *tbl, uint8_t id, uint32_t addr,
+                              uint32_t intr);
+
 void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
                        uint64_t len, int node, MemoryAffinityFlags flags);
 
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 1e43cd7..f0445df 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1655,6 +1655,23 @@ void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
 }
 
 /*
+ * ACPI 1.0b: 5.2.8.2 IO APIC
+   <--- earliest spec where structure was introduced, that's the way
+        we avoid writing extra docs for things that are described in spec -->
+ */
+void build_append_madt_ioapic(GArray *tbl, uint8_t id, uint32_t addr,
+                              uint32_t intr)
+{
+    // comments are verbatim copy of the field name from spec
+    build_append_int_noprefix(tbl, 1 /* I/O APIC structure */, 1); /* Type */
+    build_append_int_noprefix(tbl, 12, 1);    /* Length */
+    build_append_int_noprefix(tbl, id, 1);    /* I/O APIC ID */
+    build_append_int_noprefix(tbl, 0, 1);     /* Reserved */
+    build_append_int_noprefix(tbl, addr, 4);  /* I/O APIC Address */
+    build_append_int_noprefix(tbl, intr, 4); /* Global System Interrupt Base */
+}
+
+/*
  * ACPI spec 5.2.17 System Locality Distance Information Table
  * (Revision 2.0 or later)
  */
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 131c565..2f154c9 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -348,7 +348,6 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
     bool x2apic_mode = false;
 
     AcpiMultipleApicTable *madt;
-    AcpiMadtIoApic *io_apic;
     AcpiMadtIntsrcovr *intsrcovr;
     int i;
 
@@ -363,12 +362,8 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
         }
     }
 
-    io_apic = acpi_data_push(table_data, sizeof *io_apic);
-    io_apic->type = ACPI_APIC_IO;
-    io_apic->length = sizeof(*io_apic);
-    io_apic->io_apic_id = ACPI_BUILD_IOAPIC_ID;
-    io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
-    io_apic->interrupt = cpu_to_le32(0);
+    build_append_madt_ioapic(table_data,
+        ACPI_BUILD_IOAPIC_ID, IO_APIC_DEFAULT_ADDRESS, 0);
 
     if (pcms->apic_xrupt_override) {
         intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);


....

Then build_madt() is reduced to just a few quite readable lines,
and if it makes sense to have a custom board-specific one, it is not
an issue as there isn't much duplication.
In the case of i386/virt, one can probably throw away all the legacy
stuff we have over there and have a custom build_madt_virt().
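
Assuming such helpers existed, a stripped-down build_madt_virt() could
look roughly like the sketch below. build_append_madt_apic() and
build_append_madt_ioapic() are the helpers proposed above, not existing
QEMU functions, and the legacy PIC/interrupt-override bits are left out
on purpose:

    static void build_madt_virt(GArray *table_data, BIOSLinker *linker,
                                MachineState *ms)
    {
        MachineClass *mc = MACHINE_GET_CLASS(ms);
        const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(ms);
        AcpiMultipleApicTable *madt;
        int madt_start = table_data->len;
        int i;

        madt = acpi_data_push(table_data, sizeof *madt);
        madt->local_apic_address = cpu_to_le32(APIC_DEFAULT_ADDRESS);
        madt->flags = cpu_to_le32(0);        /* no PC-AT dual 8259 */

        for (i = 0; i < apic_ids->len; i++) {
            /* uid, APIC ID, flags (enabled) */
            build_append_madt_apic(table_data, i,
                                   apic_ids->cpus[i].arch_id, 1);
        }
        build_append_madt_ioapic(table_data, ACPI_BUILD_IOAPIC_ID,
                                 IO_APIC_DEFAULT_ADDRESS, 0);

        build_header(linker, table_data,
                     (void *)(table_data->data + madt_start), "APIC",
                     table_data->len - madt_start, 1, NULL, NULL);
    }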

> Pros for the latter is the fact that, as you said, we would not need to
> define a centralized structure holding all possibly needed ACPI related
> fields.
>
> Pros for the former is about defining a pointer to all needed ACPI
> fields once and for all and hiding the details of the API in the AML
> building implementation.

It's hard to create and maintain a centralized structure for generic use;
for example, this series ended up with a bunch of x86-specific fields there
and misplaced other fields that do not belong to ACPI but are used by it somehow.
If one thinks about possible future refactorings, it gets even more
difficult, as changes to a centralized structure affect the whole codebase
instead of being localized.

I suggest keeping board and ACPI code as separate as possible,
with acpi_setup()/acpi_build() being the glue that gets and feeds
board/target-specific data to the ACPI primitives/table builders.
That way we would have machine-independent ACPI code,
and custom glue that ties it all together.

One would have to give up on the generic acpi_build() and AcpiBuilder
callbacks, but acpi_build() is not a lot of code, so it is not
an issue to have several different ones (we can generalize parts
of it if necessary).
Essentially this series would change from
----------
pc.c:
AcpiBuilder->build_a = pc_build_a;
AcpiBuilder->build_b = pc_build_b;
...

virt_pc.c:
AcpiBuilder->build_a = virt_pc_build_a;
AcpiBuilder->build_b = virt_pc_build_b;
...

acpi...c:
pc_acpi_build()
   AcpiBuilder->build_a()
   AcpiBuilder->build_b()
   // some legacy hacks here
   ...
virt_pc...c:
virt_acpi_build()
   AcpiBuilder->build_a()
   AcpiBuilder->build_b()
   ...

... same for arm, since the set of tables is different and board/target dependent ...
---------------
acpi...c:
pc_acpi_build()
  pc_build_a()
  pc_build_b()
  ...

virt_pc_..c:
virt_acpi_build()
  virt_pc_build_a()
  virt_pc_build_b()
  ...
...

i.e. about the same as the former, but without the indirection and
without the headache of figuring out how to generalize the
AcpiBuilder->build_foo() interface to work for every target/board.

Once we have hw/i386/acpi_reduced.c working and used by i386/virt,
it would be easier to assess whether we can generalize virt_pc_acpi_build()
to be usable by arm, but right now that looks like a premature action.

> > We probably should split series into several smaller
> > (if possible independent) ones, so people won't be scared of
> > its sheer size and run away from reviewing it.  
> I will try to split it in smaller chunks if that helps.
> 
> Cheers,
> Samuel.

^ permalink raw reply related	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-21 14:42       ` Samuel Ortiz
@ 2018-11-22 16:26         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-22 16:26 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Wed, 21 Nov 2018 15:42:11 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> Hi Igor,
> 
> On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:28 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > > Description Table). RSDT only allows for 32-bit addresses and has thus
> > > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > > no longer RSDTs, although RSDTs are still supported for backward
> > > compatibility.
> > > 
> > > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > > length and a version field to the table.  
> > 
> > This patch re-implements what the arm/virt board already does
> > and fixes a checksum bug in the latter, while at the same time
> > having no user (within the patch).
> > 
> > I'd suggest redo it a way similar to FADT refactoring
> >   patch 1: fix checksum bug in virt/arm
> >   patch 2: update reference tables in test
> >   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
> >              (both arm and x86) which stores all data in host byte order
> >   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
> >
> >            ... move out to aml-build.c
> >   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
> >       amending it to generate rev1 variant defined by revision in AcpiRsdpData
> >       (commit dd1b2037a)  
> I agree patches #1, #2 and #5 make sense. 3 and 4 as well, but here
> you're asking about something that's out of scope of the current series.
/me guilty of that, but I have excuses for doing so:
  * that's the only way to get rid of the legacy approach given limited resources.
    So the task goes to whomever touches old code. /others and me included/
    I'd be glad if someone would volunteer and do clean-ups but in the absence
    of such, the victim is the interested party.
  * a contributor to the ACPI part learns how to use the preferred approach,
    it makes code more robust and clear as it's not possible to make
    endianness mistakes, and it is very simple to review and notice mistakes
    as the end result practically matches row by row a table described in the spec.
  * there could be exceptions, like acpi/nvdimm.c (also contributed by Intel),
    where the whole file is written in the legacy style (it probably started before
    I began enforcing conversions; anyway, it's too late now and it should be
    converted as a whole), or odd fixes to existing tables, or overly complex cases.
    (depending on the case I might still ask for conversion)

My ranting aside, the conversions I've asked for here are trivial and to
everyone's benefit /QEMU gets more maintainable code, users get fewer bugs,
the contributor knows the requirements so their patches go through fewer
iterations, and if the contributor stays around and keeps contributing to and
reviewing ACPI code, QEMU could get another co-maintainer for the ACPI part/
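
To make the "matches the spec row by row" point concrete, a rev-1 RSDP built
in that style would look roughly like the sketch below (the AcpiRsdpData
fields are assumed per the proposal above, and checksum/address patching via
the BIOS linker is left out):

/* Illustrative sketch only: ACPI 1.0 RSDP, one spec row per line. */
static void build_rsdp_sketch(GArray *tbl, AcpiRsdpData *rsdp)
{
    g_array_append_vals(tbl, "RSD PTR ", 8);            /* Signature */
    build_append_int_noprefix(tbl, 0, 1);               /* Checksum, fixed up later */
    g_array_append_vals(tbl, rsdp->oem_id, 6);          /* OEMID */
    build_append_int_noprefix(tbl, rsdp->revision, 1);  /* Revision */
    build_append_int_noprefix(tbl, 0, 4);               /* RSDT Address, patched by linker */
}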

> I'll move those patches out of this series and build a new 6-patch series
> as suggested.

Thanks!

> 
> Cheers,
> Samuel.
> 

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP
  2018-11-22 16:26         ` Igor Mammedov
@ 2018-11-23  9:36           ` Samuel Ortiz
  -1 siblings, 0 replies; 170+ messages in thread
From: Samuel Ortiz @ 2018-11-23  9:36 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Thu, Nov 22, 2018 at 05:26:52PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 15:42:11 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > Hi Igor,
> > 
> > On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> > > On Mon,  5 Nov 2018 02:40:28 +0100
> > > Samuel Ortiz <sameo@linux.intel.com> wrote:
> > >   
> > > > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > > > Description Table). RSDT only allows for 32-bit addresses and has thus
> > > > been deprecated. Since ACPI version 2.0, RSDPs should point at XSDTs and
> > > > no longer at RSDTs, although RSDTs are still supported for backward
> > > > compatibility.
> > > > 
> > > > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > > > length and a version field to the table.  
> > > 
> > > This patch re-implements what the arm/virt board already does
> > > and fixes a checksum bug in the latter, while at the same time
> > > having no user (within the patch).
> > > 
> > > I'd suggest redo it a way similar to FADT refactoring
> > >   patch 1: fix checksum bug in virt/arm
> > >   patch 2: update reference tables in test
> > >   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
> > >              (both arm and x86) which stores all data in host byte order
> > >   patch 4: convert arm's impl. to build_append_int_noprefix() API (commit 5d7a334f7)
> > >
> > >            ... move out to aml-build.c
> > >   patch 5: reuse generalized arm's build_rsdp() for x86, dropping x86 specific one
> > >       amending it to generate rev1 variant defined by revision in AcpiRsdpData
> > >       (commit dd1b2037a)  
> > I agree patches #1, #2 and #5 make sense. 3 and 4 as well, but here
> > you're asking about something that's out of scope of the current series.
> /me guilty of that, but I have excuses for doing so:
>   * that's the only way to get rid of the legacy approach given limited resources.
>     So the task goes to whomever touches old code. /others and me included/
>     I'd be glad if someone would volunteer and do clean-ups but in the absence
>     of such, the victim is the interested party.
>   * a contributor to the ACPI part learns how to use the preferred approach,
>     it makes code more robust and clear as it's not possible to make
>     endianness mistakes, and it is very simple to review and notice mistakes
>     as the end result practically matches row by row a table described in the spec.
I understand and agree with that. And to be clear: I'm happy to
contribute and work on that. But I'm also lucky to have an employer
that can afford to let me spend as much time as needed to do this kind
of refactoring/modernizing work. I just want to point out that other
potential newcomers to the project may not have that luxury.
I wonder (I sincerely do, I'm not making any assumptions) how much code
is left unmerged because the original submitter did not have the time or
budget to polish it up to the expected level.

Cheers,
Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters
  2018-11-21 15:43       ` Samuel Ortiz
@ 2018-11-23 10:55         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-23 10:55 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: qemu-devel, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, Shannon Zhao, qemu-arm, Paolo Bonzini,
	Anthony Perard, xen-devel, Richard Henderson

On Wed, 21 Nov 2018 16:43:12 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> On Tue, Nov 13, 2018 at 04:59:18PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:33 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > This is going to be needed by the hardware reduced implementation, so
> > > let's export it.
> > > Once the ACPI builder methods and getters are implemented, the
> > > acpi_get_pci_host() implementation will become hardware agnostic.
> > > 
> > > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > > ---
> > >  include/hw/acpi/aml-build.h |  2 ++
> > >  hw/acpi/aml-build.c         | 43 +++++++++++++++++++++++++++++++++
> > >  hw/i386/acpi-build.c        | 47 ++-----------------------------------
> > >  3 files changed, 47 insertions(+), 45 deletions(-)
> > > 
> > > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > > index c27c0935ae..fde2785b9a 100644
> > > --- a/include/hw/acpi/aml-build.h
> > > +++ b/include/hw/acpi/aml-build.h
> > > @@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
> > >               const char *oem_id, const char *oem_table_id);
> > >  void *acpi_data_push(GArray *table_data, unsigned size);
> > >  unsigned acpi_data_len(GArray *table);
> > > +Object *acpi_get_pci_host(void);
> > > +void acpi_get_pci_holes(Range *hole, Range *hole64);
> > >  /* Align AML blob size to a multiple of 'align' */
> > >  void acpi_align_size(GArray *blob, unsigned align);
> > >  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> > > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > > index 2b9a636e75..b8e32f15f7 100644
> > > --- a/hw/acpi/aml-build.c
> > > +++ b/hw/acpi/aml-build.c
> > > @@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
> > >      g_array_free(tables->vmgenid, mfre);
> > >  }  
> >   
> > > +/*
> > > + * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> > > + */
> > > +Object *acpi_get_pci_host(void)
> > > +{
> > > +    PCIHostState *host;
> > > +
> > > +    host = OBJECT_CHECK(PCIHostState,
> > > +                        object_resolve_path("/machine/i440fx", NULL),
> > > +                        TYPE_PCI_HOST_BRIDGE);
> > > +    if (!host) {
> > > +        host = OBJECT_CHECK(PCIHostState,
> > > +                            object_resolve_path("/machine/q35", NULL),
> > > +                            TYPE_PCI_HOST_BRIDGE);
> > > +    }
> > > +
> > > +    return OBJECT(host);
> > > +}  
> > in general aml-build.c is a place to put ACPI spec primitives,
> > so I'd suggest to move it somewhere else.
> > 
> > Considering it's x86 code (so far), maybe move it to something like
> > hw/i386/acpi-pci.c
> > 
> > Also it might be good to get rid of acpi_get_pci_host() and pass
> > a pointer to pci_host to acpi_setup() as an argument; since it's static
> > for the life of the board we can keep it in AcpiBuildState, and reuse it for
> > mcfg/pci_hole/pci bus accesses.
> That's what I'm trying to do with patches #23 and 24, but through the
> ACPI configuration structure. I could try using the build state instead,
> as it's platform agnostic as well.
Maybe it will work.
Note:
try not to pass the whole build_state to concrete table builders; use
arguments/dedicated structures to pass the data needed for that concrete
table.
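
As an illustration of that shape (hypothetical names, in the spirit of how
AcpiMcfgInfo is already used; the builder prototype is an assumption, not
current code):

/* Hypothetical per-table input: the glue fills in exactly what this one
 * table needs; the builder never reaches back into AcpiBuildState.     */
typedef struct AcpiFooInfo {
    uint64_t base;
    uint64_t size;
} AcpiFooInfo;

static void build_foo(GArray *table_data, BIOSLinker *linker,
                      const AcpiFooInfo *info)
{
    /* consumes only 'info' and appends to 'table_data'
     * ('linker' would be used for cross-table pointer patching) */
    build_append_int_noprefix(table_data, info->base, 8);
    build_append_int_noprefix(table_data, info->size, 8);
}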

> 
> Cheers,
> Samuel.
> 

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API
  2018-11-21 23:12       ` Samuel Ortiz
@ 2018-11-23 11:04         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-23 11:04 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Marcel Apfelbaum, Yang Zhong, Peter Maydell, Stefano Stabellini,
	Eduardo Habkost, Rob Bradford, Michael S. Tsirkin, qemu-devel,
	Shannon Zhao, qemu-arm, Paolo Bonzini, Anthony Perard, xen-devel,
	Richard Henderson

On Thu, 22 Nov 2018 00:12:17 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> Hi Igor,
> 
> On Wed, Nov 14, 2018 at 11:55:37AM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:34 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > From: Yang Zhong <yang.zhong@intel.com>
> > > 
> > > The AML build routines for the PCI host bridge and the corresponding
> > > DSDT addition are neither x86 nor PC machine type specific.
> > > We can move them to the architecture agnostic hw/acpi folder, and by
> > > carrying all the needed information through a new AcpiPciBus structure,
> > > we can make them PC machine type independent.  
> > 
> > I don't know anything about PCI, but the functional changes don't look
> > correct to me.
> >
> > See more detailed comments below.
> > 
> > Marcel,
> > could you take a look at this patch (in particular the main csr changes), pls?
> >   
> > > 
> > > Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> > > Signed-off-by: Rob Bradford <robert.bradford@intel.com>
> > > Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
> > > ---
> > >  include/hw/acpi/aml-build.h |   8 ++
> > >  hw/acpi/aml-build.c         | 157 ++++++++++++++++++++++++++++++++++++
> > >  hw/i386/acpi-build.c        | 115 ++------------------------
> > >  3 files changed, 173 insertions(+), 107 deletions(-)
> > > 
> > > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > > index fde2785b9a..1861e37ebf 100644
> > > --- a/include/hw/acpi/aml-build.h
> > > +++ b/include/hw/acpi/aml-build.h
> > > @@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
> > >      uint32_t mcfg_size;
> > >  } AcpiMcfgInfo;
> > >  
> > > +typedef struct AcpiPciBus {
> > > +    PCIBus *pci_bus;
> > > +    Range *pci_hole;
> > > +    Range *pci_hole64;
> > > +} AcpiPciBus;  
> > Again, this and all below is not aml-build material.
> > Consider adding/using pci specific acpi file for it.
> > 
> > Also, even though the pci AML in arm/virt is to a large degree a subset
> > of the x86 target's, and it would be much better to unify the ARM part with x86,
> > it would probably be too big/complex a change to take on in
> > one go.
> > 
> > So not to derail you from the goal too much, we probably should
> > generalize this a little bit less, limiting refactoring to x86
> > target for now.  
> So keeping it under i386 means it won't be accessible through hw/acpi/,
> which means we won't be able to have a generic hw/acpi/reduced.c
> implementation. From our perspective, this is the problem with keeping
> things under i386 because we're not sure yet how generic it is: It
> still won't be shareable for a generic hardware-reduced ACPI
> implementation which means we'll have to temporarily have yet another
> hardware-reduced ACPI implementation under hw/i386 this time.
> I guess this is what Michael meant by keeping some parts of the code
> duplicated for now.
> 
> I feel it'd be easier to move those APIs under a shareable location, to
> make it easier for ARM to consume it even if it's not entirely generic yet.
> But you guys are the maintainers, and if you think we should restrict the
> generalization to x86 only for now, we can go for it.
If the code is generic (you can reuse it with arm/virt in the same series)
then place it in hw/acpi; otherwise, if it's semi-generic, put it in hw/i386.
If it is a separate file it will be easier to move it to a generic
place once we are able to reuse it with arm/virt.

 
> Cheers,
> Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method
  2018-11-21 23:27       ` Samuel Ortiz
@ 2018-11-26 15:47         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-26 15:47 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Thu, 22 Nov 2018 00:27:33 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> On Thu, Nov 15, 2018 at 02:28:54PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:38 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > This is the standard way of building SRAT on x86 platforms. But future
> > > machine types could decide to define their own custom SRAT build method
> > > through the ACPI builder methods.
> > > Moreover, we will also need to reach build_srat() from outside of
> > > acpi-build in order to use it as the ACPI builder SRAT build method.  
> > SRAT is usually highly machine specific (memory holes, layout, guest OS
> > specific quirks) so it's hard to generalize it.  
> Hence the need for an SRAT builder interface.
So far the builder interface (trying to generalize acpi_build()) does not look
necessary.
I'd suggest dropping it and calling build_srat() directly.

> > I'd  drop SRAT related patches from this series and introduce
> > i386/virt specific SRAT when you post patches for it.  
> virt uses the existing i386 build_srat() routine, there's nothing
> special about it. So this would be purely duplicated code.
Looking at build_srat(), it has a bunch of code to handle the legacy
PC layout. You probably don't need any of it for the new i386/virt
machine and can make a simpler version of it.

In addition (probably repeating a question I've asked elsewhere):
do you have to use the split initial memory model for the new machine?
Is it possible to use only pc-dimms for both initial and hotplugged memory
at some address (4GB?), without cutting out the PCI hole or any other holes in the RAM layout?

> Cheers,
> Samuel.
> 

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter
  2018-11-21 23:21       ` Samuel Ortiz
@ 2018-11-27 13:54         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-27 13:54 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Yang Zhong, Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	Paolo Bonzini, Anthony Perard, xen-devel, Richard Henderson

On Thu, 22 Nov 2018 00:21:06 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> Hi Igor,
> 
> On Thu, Nov 15, 2018 at 01:36:58PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:35 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > From: Yang Zhong <yang.zhong@intel.com>
> > > 
> > > The ACPI MCFG getter is not x86 specific and could be called from
> > > anywhere within generic ACPI API, so let's export it.  
> > So far it's an x86-specific, or more exactly q35-specific, thing,
> It's property based, and it's using a generic PCIE property afaict.
> So it's up to each machine type to define those properties.
> I'm curious here: What's the idiomatic way to define a machine
> setting/attribute/property, let each instance define it or not, and
> make it available at run time?
> Would you be getting the PCI host pointer from the ACPI build state and
> getting that information back from there?

A cleaner way would be to make the arm/virt board set the PCIE_HOST_MCFG_BASE/
PCIE_HOST_MCFG_SIZE properties and then use a common build_mcfg() (in aml-build.c).
Something like this:
  acpi_setup_reduced()
     AcpiMcfgInfo mcfg_info = {
       .base = object_property_get_uint(pcie, PCIE_HOST_MCFG_BASE, NULL),
       .size = object_property_get_uint(pcie, PCIE_HOST_MCFG_SIZE, NULL)
     };
     acpi_build() {
         build_mcfg("MCFG", &info);
     }
  }
and for legacy q35
  acpi_build() {
     if (pcie) {
        AcpiMcfgInfo mcfg_info = {
          .base = object_property_get_uint(pcie, PCIE_HOST_MCFG_BASE, NULL),
          .size = object_property_get_uint(pcie, PCIE_HOST_MCFG_SIZE, NULL)
        };
        if (mcfg_info.base != PCIE_BASE_ADDR_UNMAPPED)
            build_mcfg("MCFG", &mcfg_info);
        else
            /* move the comment here explaining why we do this */
            build_mcfg("QEMU", &mcfg_info);
     }
  }

The thing I don't like about acpi_get_mcfg() is that it looks up
acpi_get_i386_pci_host() each time it's called and decides whether
it's a PCI-E host by the presence of properties.

I'd rather be explicit: the PCI host would be fetched once, somewhere in
acpi_setup(), or possibly passed down from the board as an argument,
with the board telling i386/acpi_setup() whether it's a PCI or PCI-E host,
so we don't have to guess it in the ACPI code.
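
A rough sketch of that direction (the structure, the hook name and the
acpi_setup() parameter are hypothetical; the point is only that the board
hands the host over and states its kind, so the ACPI code never probes):

/* Hypothetical: the board resolves its host bridge once and says
 * explicitly whether it is PCI-E; ACPI code never guesses from QOM paths. */
typedef struct AcpiPciHostInfo {
    PCIHostState *host;
    bool is_pcie;
} AcpiPciHostInfo;

static void q35_acpi_machine_done(void)
{
    AcpiPciHostInfo pci_host = {
        .host = PCI_HOST_BRIDGE(object_resolve_path("/machine/q35", NULL)),
        .is_pcie = true,
    };

    acpi_setup(&pci_host);   /* the changed prototype is part of the assumption */
}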


> Cheers,
> Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface
  2018-11-21 23:57       ` Samuel Ortiz
@ 2018-11-27 14:08         ` Igor Mammedov
  -1 siblings, 0 replies; 170+ messages in thread
From: Igor Mammedov @ 2018-11-27 14:08 UTC (permalink / raw)
  To: Samuel Ortiz
  Cc: Peter Maydell, Stefano Stabellini, Eduardo Habkost,
	Michael S. Tsirkin, qemu-devel, Shannon Zhao, qemu-arm,
	xen-devel, Anthony Perard, Paolo Bonzini, Richard Henderson

On Thu, 22 Nov 2018 00:57:21 +0100
Samuel Ortiz <sameo@linux.intel.com> wrote:

> On Fri, Nov 16, 2018 at 05:02:26PM +0100, Igor Mammedov wrote:
> > On Mon,  5 Nov 2018 02:40:43 +0100
> > Samuel Ortiz <sameo@linux.intel.com> wrote:
> >   
> > > In order to decouple ACPI APIs from specific machine types, we are
> > > creating an ACPI builder interface that each ACPI platform can choose to
> > > implement.
> > > This way, a new machine type can re-use the high level ACPI APIs and
> > > define some custom table build methods, without having to duplicate most
> > > of the existing implementation only to add small variations to it.  
> > I'm not sure about the motivation behind such high-level APIs;
> > what is obvious here is an extra level of indirection for no clear gain.
> > 
> > Yep, using table callbacks one can attempt to generalize
> > acpi_setup() and help boards decide which tables not to build
> > (MCFG comes to mind). But I'm not convinced that acpi_setup()
> > could be cleanly generalized as a whole (probably some parts but
> > not everything)  
> It's more about generalizing acpi_build(), and then having one
> acpi_setup() for non-hardware-reduced ACPI and an acpi_reduced_setup() for
> hardware-reduced.
> 
> Right now there's no generalization at all but with this patch we could
> already use the same acpi_reduced_setup() implementation for both arm
> and i386/virt.
> 
> > so it's a minor benefit for the extra headache of
> > figuring out which callback will actually be called when reading the code.
> This is the same complexity that already exists for essentially all
> current interfaces.
In the case of a callback vs a plain function call, I'd choose the latter
if it does the job and resort to the former only when I have to.
 
> > However, if a board needs a slightly different table, it will have to
> > duplicate an existing one and then modify it to suit its needs.
> > 
> > to me it looks pretty much the same as calling build_foo()
> > as we do now, but with an extra indirection level, and then
> > duplicating the latter for use in another board in a slightly
> > different manner.
> I believe what you're trying to say is that this abstraction may be
> useful but you're arguing the granularity is not properly defined? Am I
> getting this right?
Yep, something along those lines. So far it's not very useful, if at all.
So I'll not introduce it for now and will try to get by with plain function
calls. Later we might add fine-grained callbacks on a case-by-case basis
(like 'adevc->madt_cpu') where it's possible to generalize.
 
> Cheers,
> Samuel.

^ permalink raw reply	[flat|nested] 170+ messages in thread

* Re: [Qemu-devel] [PATCH v5 16/24] hw: acpi: Fix memory hotplug AML generation error
  2018-11-08 14:23     ` Igor Mammedov
@ 2019-01-14 18:35       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 170+ messages in thread
From: Michael S. Tsirkin @ 2019-01-14 18:35 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Samuel Ortiz, qemu-devel, Shannon Zhao, Stefano Stabellini,
	Anthony Perard, Richard Henderson, Marcel Apfelbaum, xen-devel,
	Paolo Bonzini, qemu-arm, Peter Maydell, Eduardo Habkost,
	Yang Zhong

On Thu, Nov 08, 2018 at 03:23:41PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:39 +0100
> Samuel Ortiz <sameo@linux.intel.com> wrote:
> 
> > From: Yang Zhong <yang.zhong@intel.com>
> > 
> > When using the generated memory hotplug AML, the iasl
> > compiler would give the following error:
> > 
> > dsdt.dsl 266: Return (MOST (_UID, Arg0, Arg1, Arg2))
> > Error 6080 - Called method returns no value ^
> > 
> > Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> 
> I suggest putting this patch at the beginning of the series,
> before the reference tables in the tests are updated.

Samuel, how about a separate small series with just bugfixes
for starters?


> > ---
> >  hw/acpi/memory_hotplug.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
> > index db2c4df961..893fc2bd27 100644
> > --- a/hw/acpi/memory_hotplug.c
> > +++ b/hw/acpi/memory_hotplug.c
> > @@ -686,15 +686,15 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
> >  
> >              method = aml_method("_OST", 3, AML_NOTSERIALIZED);
> >              s = MEMORY_SLOT_OST_METHOD;
> > -            aml_append(method, aml_return(aml_call4(
> > -                s, aml_name("_UID"), aml_arg(0), aml_arg(1), aml_arg(2)
> > -            )));
> > +            aml_append(method,
> > +                       aml_call4(s, aml_name("_UID"), aml_arg(0),
> > +                                 aml_arg(1), aml_arg(2)));
> >              aml_append(dev, method);
> >  
> >              method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
> >              s = MEMORY_SLOT_EJECT_METHOD;
> > -            aml_append(method, aml_return(aml_call2(
> > -                       s, aml_name("_UID"), aml_arg(0))));
> > +            aml_append(method,
> > +                       aml_call2(s, aml_name("_UID"), aml_arg(0)));
> >              aml_append(dev, method);
> >  
> >              aml_append(dev_container, dev);

^ permalink raw reply	[flat|nested] 170+ messages in thread

end of thread, other threads:[~2019-01-14 18:35 UTC | newest]

Thread overview: 170+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-05  1:40 [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition Samuel Ortiz
2018-11-05  1:40 ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-09 14:23   ` [Qemu-devel] " Igor Mammedov
2018-11-09 14:23     ` Igor Mammedov
2018-11-21 14:42     ` [Qemu-devel] " Samuel Ortiz
2018-11-21 14:42       ` Samuel Ortiz
2018-11-22 15:13       ` Igor Mammedov
2018-11-22 15:13         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-09 14:27   ` [Qemu-devel] " Igor Mammedov
2018-11-09 14:27     ` Igor Mammedov
2018-11-21 14:42     ` [Qemu-devel] " Samuel Ortiz
2018-11-21 14:42       ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-06 10:23   ` [Qemu-devel] " Paolo Bonzini
2018-11-06 10:23     ` Paolo Bonzini
2018-11-06 10:43     ` [Qemu-devel] " Samuel Ortiz
2018-11-06 10:43       ` Samuel Ortiz
2018-11-08 14:24   ` [Qemu-devel] " Igor Mammedov
2018-11-08 14:24     ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 04/24] hw: acpi: Export the RSDP build API Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-08 14:16   ` [Qemu-devel] " Igor Mammedov
2018-11-08 14:16     ` Igor Mammedov
2018-11-08 14:36     ` [Qemu-devel] " Samuel Ortiz
2018-11-08 14:36       ` Samuel Ortiz
2018-11-08 14:53     ` [Qemu-devel] " Igor Mammedov
2018-11-08 14:53       ` Igor Mammedov
2018-11-19 18:27     ` Michael S. Tsirkin
2018-11-19 18:27       ` Michael S. Tsirkin
2018-11-20  8:23       ` [Qemu-devel] " Igor Mammedov
2018-11-20  8:23         ` Igor Mammedov
2018-11-21 14:42     ` [Qemu-devel] " Samuel Ortiz
2018-11-21 14:42       ` Samuel Ortiz
2018-11-22 16:26       ` Igor Mammedov
2018-11-22 16:26         ` Igor Mammedov
2018-11-23  9:36         ` Samuel Ortiz
2018-11-23  9:36           ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 06/24] hw: acpi: Factorize the RSDP build API implementation Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-09 13:37   ` [Qemu-devel] " Igor Mammedov
2018-11-21 15:00     ` Samuel Ortiz
2018-11-21 15:00       ` Samuel Ortiz
2018-11-09 13:37   ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 08/24] hw: acpi: Factorize _OSC AML across architectures Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 09/24] hw: i386: Move PCI host definitions to pci_host.h Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-09 14:30   ` [Qemu-devel] " Igor Mammedov
2018-11-09 14:30     ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-13 15:59   ` [Qemu-devel] " Igor Mammedov
2018-11-13 15:59     ` Igor Mammedov
2018-11-21 15:43     ` Samuel Ortiz
2018-11-21 15:43       ` Samuel Ortiz
2018-11-23 10:55       ` Igor Mammedov
2018-11-23 10:55         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-14 10:55   ` [Qemu-devel] " Igor Mammedov
2018-11-14 10:55     ` Igor Mammedov
2018-11-21 23:12     ` [Qemu-devel] " Samuel Ortiz
2018-11-21 23:12       ` Samuel Ortiz
2018-11-23 11:04       ` Igor Mammedov
2018-11-23 11:04         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-15 12:36   ` [Qemu-devel] " Igor Mammedov
2018-11-15 12:36     ` Igor Mammedov
2018-11-21 23:21     ` [Qemu-devel] " Samuel Ortiz
2018-11-21 23:21       ` Samuel Ortiz
2018-11-27 13:54       ` Igor Mammedov
2018-11-27 13:54         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 13/24] hw: acpi: Do not create hotplug method when handler is not defined Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-09  9:12   ` [Qemu-devel] " Igor Mammedov
2018-11-09  9:12   ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 14/24] hw: i386: Make the hotpluggable memory size property more generic Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-15 12:49   ` [Qemu-devel] " Igor Mammedov
2018-11-15 12:49     ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-15 13:28   ` [Qemu-devel] " Igor Mammedov
2018-11-15 13:28     ` Igor Mammedov
2018-11-21 23:27     ` Samuel Ortiz
2018-11-21 23:27       ` Samuel Ortiz
2018-11-26 15:47       ` Igor Mammedov
2018-11-26 15:47         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 16/24] hw: acpi: Fix memory hotplug AML generation error Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-08 14:23   ` [Qemu-devel] " Igor Mammedov
2018-11-08 14:23     ` Igor Mammedov
2019-01-14 18:35     ` [Qemu-devel] " Michael S. Tsirkin
2019-01-14 18:35       ` Michael S. Tsirkin
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 17/24] hw: acpi: Export the PCI hotplug API Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 18/24] hw: i386: Export the MADT build method Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-16  9:27   ` [Qemu-devel] " Igor Mammedov
2018-11-16  9:27   ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-16  9:39   ` [Qemu-devel] " Igor Mammedov
2018-11-16  9:39     ` Igor Mammedov
2018-11-16 19:42     ` [Qemu-devel] " Boeuf, Sebastien
2018-11-19 15:37       ` Igor Mammedov
2018-11-19 15:37         ` Igor Mammedov
2018-11-19 18:02         ` [Qemu-devel] " Boeuf, Sebastien
2018-11-19 18:02           ` Boeuf, Sebastien
2018-11-20  8:26           ` [Qemu-devel] " Igor Mammedov
2018-11-20  8:26             ` Igor Mammedov
2018-11-16 19:42     ` Boeuf, Sebastien
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-16 16:02   ` [Qemu-devel] " Igor Mammedov
2018-11-16 16:02     ` Igor Mammedov
2018-11-21 23:57     ` Samuel Ortiz
2018-11-21 23:57       ` Samuel Ortiz
2018-11-27 14:08       ` Igor Mammedov
2018-11-27 14:08         ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 21/24] hw: i386: Implement the ACPI builder interface for PC Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 22/24] hw: pci-host: piix: Return PCI host pointer instead of PCI bus Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-16 11:09   ` [Qemu-devel] " Igor Mammedov
2018-11-16 11:09     ` Igor Mammedov
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 23/24] hw: i386: Set ACPI configuration PCI host pointer Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-05  1:40 ` [Qemu-devel] [PATCH v5 24/24] hw: i386: Refactor PCI host getter Samuel Ortiz
2018-11-05  1:40   ` Samuel Ortiz
2018-11-16 16:29 ` [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition Igor Mammedov
2018-11-16 16:29   ` Igor Mammedov
2018-11-16 16:37   ` Paolo Bonzini
2018-11-16 16:37     ` Paolo Bonzini
2018-11-19 15:31     ` Igor Mammedov
2018-11-19 15:31       ` Igor Mammedov
2018-11-19 17:14       ` Paolo Bonzini
2018-11-19 17:14         ` Paolo Bonzini
2018-11-19 18:14         ` Michael S. Tsirkin
2018-11-19 18:14           ` Michael S. Tsirkin
2018-11-20 21:35           ` Paolo Bonzini
2018-11-20 21:35             ` Paolo Bonzini
2018-11-20 12:57         ` Igor Mammedov
2018-11-20 12:57         ` Igor Mammedov
2018-11-20 21:36           ` Paolo Bonzini
2018-11-20 21:36             ` Paolo Bonzini
2018-11-21 12:35       ` Michael S. Tsirkin
2018-11-21 12:35         ` Michael S. Tsirkin
2018-11-21 13:50         ` Samuel Ortiz
2018-11-21 13:50           ` Samuel Ortiz
2018-11-21 13:57           ` Michael S. Tsirkin
2018-11-21 13:57             ` Michael S. Tsirkin
2018-11-21 14:15         ` Igor Mammedov
2018-11-21 14:15           ` Igor Mammedov
2018-11-21 14:38           ` Samuel Ortiz
2018-11-21 14:38             ` Samuel Ortiz
2018-11-22 10:39             ` Igor Mammedov
2018-11-22 10:39               ` Igor Mammedov
2018-11-22  0:17           ` Samuel Ortiz
2018-11-22  0:17             ` Samuel Ortiz
