* [PATCH 0/6] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
@ 2016-01-04 18:52 ` Xiao Guangrong
  0 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

This patchset is against commit 5530427f0ca (acpi: extend aml_and() to
accept target argument) on pci branch of Michael's git tree
and can be found at:
      https://github.com/xiaogr/qemu.git nvdimm-acpi-v1

This is the second part of the vNVDIMM implementation: it implements the
BIOS-patched DSM memory and introduces the framework that allows QEMU
to emulate the DSM method.

Thanks to Michael's idea, we do not reserve any memory for NVDIMM ACPI;
instead we let the BIOS allocate the memory and patch its address into
the offset we want.

The IO port is still enabled, as it is the way to notify QEMU and pass
the patched DSM memory address. The IO port region 0x0a18 - 0x0a20 is
reserved and divided into two 32-bit ports, which pass the low 32 bits
and the high 32 bits of the DSM memory address to QEMU.
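The two-port handshake above can be sketched as follows (illustrative only; the function names and the variables standing in for the two ports are assumptions, not QEMU code):

```c
#include <stdint.h>

/* Sketch of the handshake: the guest-side AML splits the 64-bit DSM
 * memory address across two 32-bit port writes in the reserved region,
 * and QEMU reassembles it.  Names here are illustrative. */
#define NVDIMM_ACPI_IO_BASE 0x0a18

static uint32_t port_low, port_high;   /* stand-ins for the two ports */

/* Guest/AML side: write the address as two 32-bit halves. */
static void guest_write_dsm_addr(uint64_t dsm_addr)
{
    port_low  = (uint32_t)dsm_addr;         /* write to base + 0 */
    port_high = (uint32_t)(dsm_addr >> 32); /* write to base + 4 */
}

/* QEMU side: reassemble the 64-bit address from the two halves. */
static uint64_t qemu_dsm_addr(void)
{
    return ((uint64_t)port_high << 32) | port_low;
}
```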

Thanks to Igor's idea, this patchset also extends the DSDT/SSDT to
revision 2 in order to use 64-bit operations. To keep compatibility,
old machine types (<= 2.5) still use revision 1. Since 64-bit
operations break old guests (such as Windows XP), we keep the 64-bit
code in a private place that common ACPI operations do not touch.
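Per the ACPI specification, a table whose ComplianceRevision is below 2 makes OSPM evaluate AML integers as 32-bit, while revision 2 or higher enables full 64-bit integers. A minimal sketch of the truncation a revision-1 table implies (the function is hypothetical, for illustration only):

```c
#include <stdint.h>

/* Sketch: OSPM integer width follows the table revision.
 * Revision < 2  => AML integers truncate to 32 bits (what old guests
 *                  such as Windows XP expect);
 * Revision >= 2 => full 64-bit integer arithmetic. */
static uint64_t ospm_integer(uint8_t table_revision, uint64_t value)
{
    return table_revision >= 2 ? value : (uint64_t)(uint32_t)value;
}
```

This is why the 64-bit AML has to live where revision-1 guests never evaluate it: under revision 1 the high 32 bits of any such value would silently vanish.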

Igor Mammedov (1):
  pc: acpi: bump DSDT/SSDT compliance revision to v2

Xiao Guangrong (5):
  nvdimm acpi: initialize the resource used by NVDIMM ACPI
  nvdimm acpi: introduce patched dsm memory
  acpi: allow using acpi named offset for OperationRegion
  nvdimm acpi: let qemu handle _DSM method
  nvdimm acpi: emulate dsm method

 hw/acpi/Makefile.objs       |   2 +-
 hw/acpi/aml-build.c         |  45 +++++++-
 hw/acpi/ich9.c              |  32 +++++
 hw/acpi/nvdimm.c            | 276 ++++++++++++++++++++++++++++++++++++++++++--
 hw/acpi/piix4.c             |   3 +
 hw/i386/acpi-build.c        |  41 ++++---
 hw/i386/pc.c                |   8 +-
 hw/i386/pc_piix.c           |   5 +
 hw/i386/pc_q35.c            |   8 +-
 include/hw/acpi/aml-build.h |   6 +-
 include/hw/acpi/ich9.h      |   2 +
 include/hw/i386/pc.h        |  19 ++-
 include/hw/mem/nvdimm.h     |  44 ++++++-
 13 files changed, 449 insertions(+), 42 deletions(-)

-- 
1.8.3.1



* [PATCH 1/6] pc: acpi: bump DSDT/SSDT compliance revision to v2
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

From: Igor Mammedov <imammedo@redhat.com>

It turns on 64-bit integer handling in OSPM, which will be used for
writing simpler/smaller AML code in a following patch.

Tested with Windows XP, Windows Server 2008, and Linux:
  * XP doesn't care about the revision, continues to use 32-bit
    integers, and boots just fine with this change.
  * WS 2008 and Linux support rev2 and use 64-bit integers.

[
  Xiao: make the DSDT/SSDT v1 for QEMU versions <= 2.5 to keep
        compatibility.
]
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/ich9.c          | 32 ++++++++++++++++++++++++++++++++
 hw/acpi/nvdimm.c        | 10 ++++++----
 hw/acpi/piix4.c         |  3 +++
 hw/i386/acpi-build.c    | 24 +++++++++++++++++-------
 include/hw/acpi/ich9.h  |  2 ++
 include/hw/i386/pc.h    | 14 +++++++++++++-
 include/hw/mem/nvdimm.h |  2 +-
 7 files changed, 74 insertions(+), 13 deletions(-)

diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
index 1c7fcfa..b26f4cc 100644
--- a/hw/acpi/ich9.c
+++ b/hw/acpi/ich9.c
@@ -400,6 +400,33 @@ static void ich9_pm_set_enable_tco(Object *obj, bool value, Error **errp)
     s->pm.enable_tco = value;
 }
 
+static void ich9_pm_get_dsdt_revision(Object *obj, Visitor *v,
+                                      void *opaque, const char *name,
+                                      Error **errp)
+{
+    ICH9LPCPMRegs *pm = opaque;
+    uint8_t value = pm->dsdt_revision;
+
+    visit_type_uint8(v, &value, name, errp);
+}
+
+static void ich9_pm_set_dsdt_revision(Object *obj, Visitor *v,
+                                      void *opaque, const char *name,
+                                      Error **errp)
+{
+    ICH9LPCPMRegs *pm = opaque;
+    Error *local_err = NULL;
+    uint8_t value;
+
+    visit_type_uint8(v, &value, name, &local_err);
+    if (local_err) {
+        goto out;
+    }
+    pm->dsdt_revision = value;
+out:
+    error_propagate(errp, local_err);
+}
+
 void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm, Error **errp)
 {
     static const uint32_t gpe0_len = ICH9_PMIO_GPE0_LEN;
@@ -407,6 +434,7 @@ void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm, Error **errp)
     pm->disable_s3 = 0;
     pm->disable_s4 = 0;
     pm->s4_val = 2;
+    pm->dsdt_revision = 2;
 
     object_property_add_uint32_ptr(obj, ACPI_PM_PROP_PM_IO_BASE,
                                    &pm->pm_io_base, errp);
@@ -435,6 +463,10 @@ void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm, Error **errp)
                              ich9_pm_get_enable_tco,
                              ich9_pm_set_enable_tco,
                              NULL);
+    object_property_add(obj, ACPI_DSDT_REVISION, "uint8",
+                        ich9_pm_get_dsdt_revision,
+                        ich9_pm_set_dsdt_revision,
+                        NULL, pm, NULL);
 }
 
 void ich9_pm_device_plug_cb(ICH9LPCPMRegs *pm, DeviceState *dev, Error **errp)
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 9534418..a2c58dd 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -430,7 +430,8 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
 }
 
 static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
-                              GArray *table_data, GArray *linker)
+                              GArray *table_data, GArray *linker,
+                              uint8_t revision)
 {
     Aml *ssdt, *sb_scope, *dev;
 
@@ -468,12 +469,12 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
     g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
     build_header(linker, table_data,
         (void *)(table_data->data + table_data->len - ssdt->buf->len),
-        "SSDT", ssdt->buf->len, 1, "NVDIMM");
+        "SSDT", ssdt->buf->len, revision, "NVDIMM");
     free_aml_allocator();
 }
 
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       GArray *linker)
+                       GArray *linker, uint8_t revision)
 {
     GSList *device_list;
 
@@ -483,6 +484,7 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
         return;
     }
     nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
-    nvdimm_build_ssdt(device_list, table_offsets, table_data, linker);
+    nvdimm_build_ssdt(device_list, table_offsets, table_data, linker,
+                      revision);
     g_slist_free(device_list);
 }
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 2cd2fee..9d365f8 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -83,6 +83,8 @@ typedef struct PIIX4PMState {
     uint8_t disable_s4;
     uint8_t s4_val;
 
+    uint8_t dsdt_revision;
+
     AcpiCpuHotplug gpe_cpu;
 
     MemHotplugState acpi_memory_hotplug;
@@ -588,6 +590,7 @@ static Property piix4_pm_properties[] = {
     DEFINE_PROP_UINT8(ACPI_PM_PROP_S3_DISABLED, PIIX4PMState, disable_s3, 0),
     DEFINE_PROP_UINT8(ACPI_PM_PROP_S4_DISABLED, PIIX4PMState, disable_s4, 0),
     DEFINE_PROP_UINT8(ACPI_PM_PROP_S4_VAL, PIIX4PMState, s4_val, 2),
+    DEFINE_PROP_UINT8(ACPI_DSDT_REVISION, PIIX4PMState, dsdt_revision, 2),
     DEFINE_PROP_BOOL("acpi-pci-hotplug-with-bridge-support", PIIX4PMState,
                      use_acpi_pci_hotplug, true),
     DEFINE_PROP_BOOL("memory-hotplug-support", PIIX4PMState,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 4cc1440..4674461 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -95,6 +95,7 @@ typedef struct AcpiPmInfo {
     uint16_t sci_int;
     uint8_t acpi_enable_cmd;
     uint8_t acpi_disable_cmd;
+    uint8_t dsdt_revision;
     uint32_t gpe0_blk;
     uint32_t gpe0_blk_len;
     uint32_t io_base;
@@ -213,6 +214,13 @@ static void acpi_get_pm_info(AcpiPmInfo *pm)
         pm->s4_val = false;
     }
     qobject_decref(o);
+    o = object_property_get_qobject(obj, ACPI_DSDT_REVISION, NULL);
+    if (o) {
+        pm->dsdt_revision = qint_get_int(qobject_to_qint(o));
+    } else {
+        pm->dsdt_revision = 0x1;
+    }
+    qobject_decref(o);
 
     /* Fill in mandatory properties */
     pm->sci_int = object_property_get_int(obj, ACPI_PM_PROP_SCI_INT, NULL);
@@ -932,7 +940,7 @@ static Aml *build_crs(PCIHostState *host,
 static void
 build_ssdt(GArray *table_data, GArray *linker,
            AcpiCpuInfo *cpu, AcpiPmInfo *pm, AcpiMiscInfo *misc,
-           PcPciInfo *pci, PcGuestInfo *guest_info)
+           PcPciInfo *pci, PcGuestInfo *guest_info, uint8_t revision)
 {
     MachineState *machine = MACHINE(qdev_get_machine());
     uint32_t nr_mem = machine->ram_slots;
@@ -1378,7 +1386,7 @@ build_ssdt(GArray *table_data, GArray *linker,
     g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
     build_header(linker, table_data,
         (void *)(table_data->data + table_data->len - ssdt->buf->len),
-        "SSDT", ssdt->buf->len, 1, NULL);
+        "SSDT", ssdt->buf->len, revision, NULL);
     free_aml_allocator();
 }
 
@@ -1605,7 +1613,8 @@ build_dmar_q35(GArray *table_data, GArray *linker)
 }
 
 static void
-build_dsdt(GArray *table_data, GArray *linker, AcpiMiscInfo *misc)
+build_dsdt(GArray *table_data, GArray *linker, AcpiMiscInfo *misc,
+           uint8_t revision)
 {
     AcpiTableHeader *dsdt;
 
@@ -1616,7 +1625,7 @@ build_dsdt(GArray *table_data, GArray *linker, AcpiMiscInfo *misc)
 
     memset(dsdt, 0, sizeof *dsdt);
     build_header(linker, table_data, dsdt, "DSDT",
-                 misc->dsdt_size, 1, NULL);
+                 misc->dsdt_size, revision, NULL);
 }
 
 static GArray *
@@ -1732,7 +1741,7 @@ void acpi_build(PcGuestInfo *guest_info, AcpiBuildTables *tables)
 
     /* DSDT is pointed to by FADT */
     dsdt = tables_blob->len;
-    build_dsdt(tables_blob, tables->linker, &misc);
+    build_dsdt(tables_blob, tables->linker, &misc, pm.dsdt_revision);
 
     /* Count the size of the DSDT and SSDT, we will need it for legacy
      * sizing of ACPI tables.
@@ -1746,7 +1755,7 @@ void acpi_build(PcGuestInfo *guest_info, AcpiBuildTables *tables)
     ssdt = tables_blob->len;
     acpi_add_table(table_offsets, tables_blob);
     build_ssdt(tables_blob, tables->linker, &cpu, &pm, &misc, &pci,
-               guest_info);
+               guest_info, pm.dsdt_revision);
     aml_len += tables_blob->len - ssdt;
 
     acpi_add_table(table_offsets, tables_blob);
@@ -1779,7 +1788,8 @@ void acpi_build(PcGuestInfo *guest_info, AcpiBuildTables *tables)
     }
 
     if (acpi_has_nvdimm()) {
-        nvdimm_build_acpi(table_offsets, tables_blob, tables->linker);
+        nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
+                          pm.dsdt_revision);
     }
 
     /* Add tables supplied by user (if any) */
diff --git a/include/hw/acpi/ich9.h b/include/hw/acpi/ich9.h
index 345fd8d..fa51e69 100644
--- a/include/hw/acpi/ich9.h
+++ b/include/hw/acpi/ich9.h
@@ -56,6 +56,8 @@ typedef struct ICH9LPCPMRegs {
     uint8_t disable_s4;
     uint8_t s4_val;
     uint8_t smm_enabled;
+    uint8_t dsdt_revision;
+
     bool enable_tco;
     TCOIORegs tco_regs;
 } ICH9LPCPMRegs;
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index b0d6283..3b934be 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -148,6 +148,8 @@ typedef struct PcPciInfo {
 #define ACPI_PM_PROP_GPE0_BLK_LEN "gpe0_blk_len"
 #define ACPI_PM_PROP_TCO_ENABLED "enable_tco"
 
+#define ACPI_DSDT_REVISION "dsdt_revision"
+
 struct PcGuestInfo {
     bool isapc_ram_fw;
     hwaddr ram_size, ram_size_below_4g;
@@ -353,7 +355,17 @@ int e820_get_num_entries(void);
 bool e820_get_entry(int, uint32_t, uint64_t *, uint64_t *);
 
 #define PC_COMPAT_2_5 \
-    HW_COMPAT_2_5
+    HW_COMPAT_2_5 \
+    {\
+        .driver   = "PIIX4_PM",\
+        .property = ACPI_DSDT_REVISION,\
+        .value    = stringify(1),\
+    },\
+    {\
+        .driver   = "ICH9-LPC",\
+        .property = ACPI_DSDT_REVISION,\
+        .value    = stringify(1),\
+    },
 
 #define PC_COMPAT_2_4 \
     PC_COMPAT_2_5 \
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 49183c1..fd2af14 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -28,5 +28,5 @@
 #define TYPE_NVDIMM      "nvdimm"
 
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       GArray *linker);
+                       GArray *linker, uint8_t revision);
 #endif
-- 
1.8.3.1



* [PATCH 2/6] nvdimm acpi: initialize the resource used by NVDIMM ACPI
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

IO ports 0x0a18 - 0x0a20 in the guest are reserved for NVDIMM ACPI
emulation; the table NVDIMM_DSM_MEM_FILE will be patched into the
NVDIMM ACPI binary code.

OSPM uses this port region to tell QEMU the final address of the DSM
memory and to notify QEMU to emulate the DSM method.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/Makefile.objs   |  2 +-
 hw/acpi/nvdimm.c        | 35 +++++++++++++++++++++++++++++++++++
 hw/i386/acpi-build.c    | 10 +---------
 hw/i386/pc.c            |  8 +++++---
 hw/i386/pc_piix.c       |  5 +++++
 hw/i386/pc_q35.c        |  8 +++++++-
 include/hw/i386/pc.h    |  5 ++++-
 include/hw/mem/nvdimm.h | 25 ++++++++++++++++++++++++-
 8 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
index 095597f..84c082d 100644
--- a/hw/acpi/Makefile.objs
+++ b/hw/acpi/Makefile.objs
@@ -2,7 +2,7 @@ common-obj-$(CONFIG_ACPI_X86) += core.o piix4.o pcihp.o
 common-obj-$(CONFIG_ACPI_X86_ICH) += ich9.o tco.o
 common-obj-$(CONFIG_ACPI_CPU_HOTPLUG) += cpu_hotplug.o
 common-obj-$(CONFIG_ACPI_MEMORY_HOTPLUG) += memory_hotplug.o
-common-obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
+obj-$(CONFIG_ACPI_NVDIMM) += nvdimm.o
 common-obj-$(CONFIG_ACPI) += acpi_interface.o
 common-obj-$(CONFIG_ACPI) += bios-linker-loader.o
 common-obj-$(CONFIG_ACPI) += aml-build.o
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index a2c58dd..bc7cd8f 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -28,6 +28,7 @@
 
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
+#include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
 static int nvdimm_plugged_device_list(Object *obj, void *opaque)
@@ -367,6 +368,40 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
     g_array_free(structures, true);
 }
 
+static uint64_t
+nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
+{
+    return 0;
+}
+
+static void
+nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
+{
+}
+
+static const MemoryRegionOps nvdimm_dsm_ops = {
+    .read = nvdimm_dsm_read,
+    .write = nvdimm_dsm_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+};
+
+void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
+                            FWCfgState *fw_cfg, Object *owner)
+{
+    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
+                          "nvdimm-acpi-io", NVDIMM_ACPI_IO_LEN);
+    memory_region_add_subregion(io, NVDIMM_ACPI_IO_BASE, &state->io_mr);
+
+    state->dsm_mem = g_array_new(false, true /* clear */, 1);
+    acpi_data_push(state->dsm_mem, TARGET_PAGE_SIZE);
+    fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
+                    state->dsm_mem->len);
+}
+
 #define NVDIMM_COMMON_DSM      "NCAL"
 
 static void nvdimm_build_common_dsm(Aml *dev)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 4674461..0836119 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -39,7 +39,6 @@
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
 #include "hw/acpi/memory_hotplug.h"
-#include "hw/mem/nvdimm.h"
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
 #include "sysemu/tpm_backend.h"
@@ -1696,13 +1695,6 @@ static bool acpi_has_iommu(void)
     return intel_iommu && !ambiguous;
 }
 
-static bool acpi_has_nvdimm(void)
-{
-    PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
-
-    return pcms->nvdimm;
-}
-
 static
 void acpi_build(PcGuestInfo *guest_info, AcpiBuildTables *tables)
 {
@@ -1787,7 +1779,7 @@ void acpi_build(PcGuestInfo *guest_info, AcpiBuildTables *tables)
         build_dmar_q35(tables_blob, tables->linker);
     }
 
-    if (acpi_has_nvdimm()) {
+    if (guest_info->has_nvdimm) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
                           pm.dsdt_revision);
     }
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 459260b..c7819e7 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1220,6 +1220,8 @@ PcGuestInfo *pc_guest_info_init(PCMachineState *pcms)
         }
     }
 
+    guest_info->has_nvdimm = pcms->acpi_nvdimm_state.is_enabled;
+
     guest_info_state->machine_done.notify = pc_guest_info_machine_done;
     qemu_add_machine_init_done_notifier(&guest_info_state->machine_done);
     return guest_info;
@@ -1869,14 +1871,14 @@ static bool pc_machine_get_nvdimm(Object *obj, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    return pcms->nvdimm;
+    return pcms->acpi_nvdimm_state.is_enabled;
 }
 
 static void pc_machine_set_nvdimm(Object *obj, bool value, Error **errp)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
 
-    pcms->nvdimm = value;
+    pcms->acpi_nvdimm_state.is_enabled = value;
 }
 
 static void pc_machine_initfn(Object *obj)
@@ -1915,7 +1917,7 @@ static void pc_machine_initfn(Object *obj)
                                     &error_abort);
 
     /* nvdimm is disabled on default. */
-    pcms->nvdimm = false;
+    pcms->acpi_nvdimm_state.is_enabled = false;
     object_property_add_bool(obj, PC_MACHINE_NVDIMM, pc_machine_get_nvdimm,
                              pc_machine_set_nvdimm, &error_abort);
 }
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 438cdae..2fee478 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -281,6 +281,11 @@ static void pc_init1(MachineState *machine,
     if (pcmc->pci_enabled) {
         pc_pci_device_init(pci_bus);
     }
+
+    if (guest_info->has_nvdimm) {
+        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
+                               guest_info->fw_cfg, OBJECT(pcms));
+    }
 }
 
 /* Looking for a pc_compat_2_4() function? It doesn't exist.
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 412b3cd..c9334d5 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -60,6 +60,7 @@ static void pc_q35_init(MachineState *machine)
     PCIDevice *lpc;
     BusState *idebus[MAX_SATA_PORTS];
     ISADevice *rtc_state;
+    MemoryRegion *system_io = get_system_io();
     MemoryRegion *pci_memory;
     MemoryRegion *rom_memory;
     MemoryRegion *ram_memory;
@@ -176,7 +177,7 @@ static void pc_q35_init(MachineState *machine)
     q35_host->mch.ram_memory = ram_memory;
     q35_host->mch.pci_address_space = pci_memory;
     q35_host->mch.system_memory = get_system_memory();
-    q35_host->mch.address_space_io = get_system_io();
+    q35_host->mch.address_space_io = system_io;
     q35_host->mch.below_4g_mem_size = pcms->below_4g_mem_size;
     q35_host->mch.above_4g_mem_size = pcms->above_4g_mem_size;
     /* pci */
@@ -267,6 +268,11 @@ static void pc_q35_init(MachineState *machine)
     if (pcmc->pci_enabled) {
         pc_pci_device_init(host_bus);
     }
+
+    if (guest_info->has_nvdimm) {
+        nvdimm_init_acpi_state(&pcms->acpi_nvdimm_state, system_io,
+                               guest_info->fw_cfg, OBJECT(pcms));
+    }
 }
 
 /* Looking for a pc_compat_2_4() function? It doesn't exist.
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 3b934be..310c3eb 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -17,6 +17,7 @@
 #include "hw/boards.h"
 #include "hw/compat.h"
 #include "hw/mem/pc-dimm.h"
+#include "hw/mem/nvdimm.h"
 
 #define HPET_INTCAP "hpet-intcap"
 
@@ -55,7 +56,8 @@ struct PCMachineState {
     uint64_t max_ram_below_4g;
     OnOffAuto vmport;
     OnOffAuto smm;
-    bool nvdimm;
+
+    AcpiNVDIMMState acpi_nvdimm_state;
 
     /* RAM information (sizes, addresses, configuration): */
     ram_addr_t below_4g_mem_size, above_4g_mem_size;
@@ -163,6 +165,7 @@ struct PcGuestInfo {
     bool has_acpi_build;
     bool has_reserved_memory;
     bool rsdp_in_ram;
+    bool has_nvdimm;
 };
 
 /* parallel.c */
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index fd2af14..d908d6a 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -25,8 +25,31 @@
 
 #include "hw/mem/pc-dimm.h"
 
-#define TYPE_NVDIMM      "nvdimm"
+#define TYPE_NVDIMM             "nvdimm"
 
+#define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
+
+/* IO port 0x0a18 - 0x0a20 in the guest is reserved for NVDIMM ACPI emulation. */
+#define NVDIMM_ACPI_IO_BASE     0x0a18
+#define NVDIMM_ACPI_IO_LEN      8
+
+/*
+ * AcpiNVDIMMState:
+ * @is_enabled: whether NVDIMM support is enabled.
+ *
+ * @dsm_mem: the data of the fw_cfg file NVDIMM_DSM_MEM_FILE.
+ * @io_mr: the IO region used by OSPM to transfer control to QEMU.
+ */
+struct AcpiNVDIMMState {
+    bool is_enabled;
+
+    GArray *dsm_mem;
+    MemoryRegion io_mr;
+};
+typedef struct AcpiNVDIMMState AcpiNVDIMMState;
+
+void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
+                            FWCfgState *fw_cfg, Object *owner);
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
                        GArray *linker, uint8_t revision);
 #endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread


* [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

The DSM memory is used to save the input parameters and to store
the DSM result, which is filled in by QEMU.

The address of the DSM memory is decided by the BIOS and patched
into the int64 object returned by the "MEMA" method.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/aml-build.c         | 12 ++++++++++++
 hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
 include/hw/acpi/aml-build.h |  1 +
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 78e1290..83eadb3 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
 }
 
 /*
+ * ACPI 1.0b: 16.2.3 Data Objects Encoding:
+ * encode: QWordConst
+ */
+Aml *aml_int64(const uint64_t val)
+{
+    Aml *var = aml_alloc();
+    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
+    build_append_int_noprefix(var->buf, val, 8);
+    return var;
+}
+
+/*
  * helper to construct NameString, which returns Aml object
  * for using with aml_append or other aml_* terms
  */
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index bc7cd8f..a72104c 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -28,6 +28,7 @@
 
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
+#include "hw/acpi/bios-linker-loader.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
@@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
                     state->dsm_mem->len);
 }
 
-#define NVDIMM_COMMON_DSM      "NCAL"
+#define NVDIMM_GET_DSM_MEM      "MEMA"
+#define NVDIMM_COMMON_DSM       "NCAL"
 
 static void nvdimm_build_common_dsm(Aml *dev)
 {
@@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
                               GArray *table_data, GArray *linker,
                               uint8_t revision)
 {
-    Aml *ssdt, *sb_scope, *dev;
+    Aml *ssdt, *sb_scope, *dev, *method;
+    int offset;
 
     acpi_add_table(table_offsets, table_data);
 
@@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
 
     aml_append(sb_scope, dev);
 
+    /*
+     * Leave it at the end of the SSDT so that we can conveniently get
+     * the offset of the int64 object returned by the method, which the
+     * BIOS will patch with the real address of the DSM memory.
+     */
+    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int64(0x0)));
+    aml_append(sb_scope, method);
     aml_append(ssdt, sb_scope);
     /* copy AML table into ACPI tables blob and patch header there */
     g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
+
+    offset = table_data->len - 8;
+
+    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
+                             false /* high memory */);
+    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
+                                   NVDIMM_DSM_MEM_FILE, table_data,
+                                   table_data->data + offset,
+                                   sizeof(uint64_t));
     build_header(linker, table_data,
         (void *)(table_data->data + table_data->len - ssdt->buf->len),
         "SSDT", ssdt->buf->len, revision, "NVDIMM");
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index ef44d02..b4726a4 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -246,6 +246,7 @@ Aml *aml_name(const char *name_format, ...) GCC_FMT_ATTR(1, 2);
 Aml *aml_name_decl(const char *name, Aml *val);
 Aml *aml_return(Aml *val);
 Aml *aml_int(const uint64_t val);
+Aml *aml_int64(const uint64_t val);
 Aml *aml_arg(int pos);
 Aml *aml_to_integer(Aml *arg);
 Aml *aml_to_hexstring(Aml *src, Aml *dst);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread


* [PATCH 4/6] acpi: allow using acpi named offset for OperationRegion
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

Extend aml_operation_region() to accept a named object as the region offset

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/aml-build.c         | 4 ++--
 hw/i386/acpi-build.c        | 7 ++++---
 include/hw/acpi/aml-build.h | 2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 83eadb3..677c1a6 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -958,14 +958,14 @@ Aml *aml_package(uint8_t num_elements)
 
 /* ACPI 1.0b: 16.2.5.2 Named Objects Encoding: DefOpRegion */
 Aml *aml_operation_region(const char *name, AmlRegionSpace rs,
-                          uint32_t offset, uint32_t len)
+                          Aml *offset, uint32_t len)
 {
     Aml *var = aml_alloc();
     build_append_byte(var->buf, 0x5B); /* ExtOpPrefix */
     build_append_byte(var->buf, 0x80); /* OpRegionOp */
     build_append_namestring(var->buf, "%s", name);
     build_append_byte(var->buf, rs);
-    build_append_int(var->buf, offset);
+    aml_append(var, offset);
     build_append_int(var->buf, len);
     return var;
 }
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 0836119..ad10c48 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1139,7 +1139,7 @@ build_ssdt(GArray *table_data, GArray *linker,
         aml_append(dev, aml_name_decl("_CRS", crs));
 
         aml_append(dev, aml_operation_region("PEOR", AML_SYSTEM_IO,
-                                              misc->pvpanic_port, 1));
+                                              aml_int(misc->pvpanic_port), 1));
         field = aml_field("PEOR", AML_BYTE_ACC, AML_NOLOCK, AML_PRESERVE);
         aml_append(field, aml_named_field("PEPT", 8));
         aml_append(dev, field);
@@ -1179,7 +1179,8 @@ build_ssdt(GArray *table_data, GArray *linker,
         aml_append(sb_scope, dev);
         /* declare CPU hotplug MMIO region and PRS field to access it */
         aml_append(sb_scope, aml_operation_region(
-            "PRST", AML_SYSTEM_IO, pm->cpu_hp_io_base, pm->cpu_hp_io_len));
+            "PRST", AML_SYSTEM_IO, aml_int(pm->cpu_hp_io_base),
+            pm->cpu_hp_io_len));
         field = aml_field("PRST", AML_BYTE_ACC, AML_NOLOCK, AML_PRESERVE);
         aml_append(field, aml_named_field("PRS", 256));
         aml_append(sb_scope, field);
@@ -1251,7 +1252,7 @@ build_ssdt(GArray *table_data, GArray *linker,
 
         aml_append(scope, aml_operation_region(
             stringify(MEMORY_HOTPLUG_IO_REGION), AML_SYSTEM_IO,
-            pm->mem_hp_io_base, pm->mem_hp_io_len)
+            aml_int(pm->mem_hp_io_base), pm->mem_hp_io_len)
         );
 
         field = aml_field(stringify(MEMORY_HOTPLUG_IO_REGION), AML_DWORD_ACC,
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index b4726a4..a8d8f3b 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -285,7 +285,7 @@ Aml *aml_interrupt(AmlConsumerAndProducer con_and_pro,
 Aml *aml_io(AmlIODecode dec, uint16_t min_base, uint16_t max_base,
             uint8_t aln, uint8_t len);
 Aml *aml_operation_region(const char *name, AmlRegionSpace rs,
-                          uint32_t offset, uint32_t len);
+                          Aml *offset, uint32_t len);
 Aml *aml_irq_no_flags(uint8_t irq);
 Aml *aml_named_field(const char *name, unsigned length);
 Aml *aml_reserved_field(unsigned length);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread


* [PATCH 5/6] nvdimm acpi: let qemu handle _DSM method
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

If the DSM memory address is successfully patched, we let QEMU fully
emulate the _DSM method.

This patch saves the _DSM input parameters into the DSM memory, passes
the DSM memory address to QEMU via the IO port, and then fetches the
result from the DSM memory.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/aml-build.c         |  27 ++++++++++
 hw/acpi/nvdimm.c            | 124 ++++++++++++++++++++++++++++++++++++++++++--
 include/hw/acpi/aml-build.h |   2 +
 3 files changed, 150 insertions(+), 3 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 677c1a6..e65171f 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1013,6 +1013,19 @@ Aml *create_field_common(int opcode, Aml *srcbuf, Aml *index, const char *name)
     return var;
 }
 
+/* ACPI 1.0b: 16.2.5.2 Named Objects Encoding: DefCreateField */
+Aml *aml_create_field(Aml *srcbuf, Aml *index, Aml *len, const char *name)
+{
+    Aml *var = aml_alloc();
+    build_append_byte(var->buf, 0x5B); /* ExtOpPrefix */
+    build_append_byte(var->buf, 0x13); /* CreateFieldOp */
+    aml_append(var, srcbuf);
+    aml_append(var, index);
+    aml_append(var, len);
+    build_append_namestring(var->buf, "%s", name);
+    return var;
+}
+
 /* ACPI 1.0b: 16.2.5.2 Named Objects Encoding: DefCreateDWordField */
 Aml *aml_create_dword_field(Aml *srcbuf, Aml *index, const char *name)
 {
@@ -1439,6 +1452,20 @@ Aml *aml_alias(const char *source_object, const char *alias_object)
     return var;
 }
 
+/* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefConcat */
+Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target)
+{
+    Aml *var = aml_opcode(0x73 /* ConcatOp */);
+    aml_append(var, source1);
+    aml_append(var, source2);
+
+    if (target) {
+        aml_append(var, target);
+    }
+
+    return var;
+}
+
 void
 build_header(GArray *linker, GArray *table_data,
              AcpiTableHeader *h, const char *sig, int len, uint8_t rev,
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index a72104c..dfccbc0 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -369,6 +369,24 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
     g_array_free(structures, true);
 }
 
+struct NvdimmDsmIn {
+    uint32_t handle;
+    uint32_t revision;
+    uint32_t function;
+    /* the remaining size in the page is used by arg3. */
+    union {
+        uint8_t arg3[0];
+    };
+} QEMU_PACKED;
+typedef struct NvdimmDsmIn NvdimmDsmIn;
+
+struct NvdimmDsmOut {
+    /* the size of buffer filled by QEMU. */
+    uint32_t len;
+    uint8_t data[0];
+} QEMU_PACKED;
+typedef struct NvdimmDsmOut NvdimmDsmOut;
+
 static uint64_t
 nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 {
@@ -408,11 +426,21 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
 
 static void nvdimm_build_common_dsm(Aml *dev)
 {
-    Aml *method, *ifctx, *function;
+    Aml *method, *ifctx, *function, *unpatched, *field, *high_dsm_mem;
+    Aml *result_size, *dsm_mem;
     uint8_t byte_list[1];
 
     method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
     function = aml_arg(2);
+    dsm_mem = aml_arg(3);
+
+    aml_append(method, aml_store(aml_call0(NVDIMM_GET_DSM_MEM), dsm_mem));
+
+    /*
+     * do not support any method if DSM memory address has not been
+     * patched.
+     */
+    unpatched = aml_if(aml_equal(dsm_mem, aml_int64(0x0)));
 
     /*
      * function 0 is called to inquire what functions are supported by
@@ -421,12 +449,102 @@ static void nvdimm_build_common_dsm(Aml *dev)
     ifctx = aml_if(aml_equal(function, aml_int(0)));
     byte_list[0] = 0 /* No function Supported */;
     aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
-    aml_append(method, ifctx);
+    aml_append(unpatched, ifctx);
 
     /* No function is supported yet. */
     byte_list[0] = 1 /* Not Supported */;
-    aml_append(method, aml_return(aml_buffer(1, byte_list)));
+    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
+    aml_append(method, unpatched);
+
+    /* map DSM memory and IO into ACPI namespace. */
+    aml_append(method, aml_operation_region("NPIO", AML_SYSTEM_IO,
+               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
+    aml_append(method, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
+                                            dsm_mem, TARGET_PAGE_SIZE));
+
+    /*
+     * DSM notifier:
+     * LNTF: write the low 32 bits of DSM memory.
+     * HNTF: write the high 32 bits of DSM memory and notify QEMU to
+     *       emulate the access.
+     *
+     * They are IO ports, so accessing them causes a VM-exit and
+     * control is transferred to QEMU.
+     */
+    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("LNTF",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("HNTF",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(method, field);
 
+    /*
+     * DSM input:
+     * @HDLE: store device's handle, it's zero if the _DSM call happens
+     *        on NVDIMM Root Device.
+     * @REVS: store the Arg1 of _DSM call.
+     * @FUNC: store the Arg2 of _DSM call.
+     * @ARG3: store the Arg3 of _DSM call.
+     *
+     * They are backed by RAM on the host, so these accesses never
+     * cause a VM-exit.
+     */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("HDLE",
+               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("REVS",
+               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("FUNC",
+               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ARG3",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
+                     BITS_PER_BYTE));
+    aml_append(method, field);
+
+    /*
+     * DSM output:
+     * @RLEN: the size of the buffer filled by QEMU.
+     * @ODAT: the buffer QEMU uses to store the result.
+     *
+     * Since the page is reused by both input and output, the input
+     * data will be lost after storing the new result into @ODAT.
+     */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("RLEN",
+               sizeof(typeof_field(NvdimmDsmOut, len)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ODAT",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmOut, data)) *
+                     BITS_PER_BYTE));
+    aml_append(method, field);
+
+    /*
+     * Currently no function is supported for either the root device or
+     * NVDIMM devices, so temporarily set the handle to 0x0.
+     */
+    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
+    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
+    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
+
+    /*
+     * tell QEMU about the real address of DSM memory, then QEMU begins
+     * to emulate the method and fills the result to DSM memory.
+     */
+    aml_append(method, aml_store(dsm_mem, aml_name("LNTF")));
+    high_dsm_mem = aml_shiftright(dsm_mem,
+                                  aml_int(sizeof(uint32_t) * BITS_PER_BYTE),
+                                  NULL);
+    aml_append(method, aml_store(high_dsm_mem, aml_name("HNTF")));
+
+    result_size = aml_local(1);
+    aml_append(method, aml_store(aml_name("RLEN"), result_size));
+    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
+                                 result_size));
+    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
+                                        result_size, "OBUF"));
+    aml_append(method, aml_name_decl("ZBUF", aml_buffer(0, NULL)));
+    aml_append(method, aml_concatenate(aml_name("ZBUF"), aml_name("OBUF"),
+                                       aml_arg(6)));
+    aml_append(method, aml_return(aml_arg(6)));
     aml_append(dev, method);
 }
 
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index a8d8f3b..6c1816e 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -344,6 +344,7 @@ Aml *aml_mutex(const char *name, uint8_t sync_level);
 Aml *aml_acquire(Aml *mutex, uint16_t timeout);
 Aml *aml_release(Aml *mutex);
 Aml *aml_alias(const char *source_object, const char *alias_object);
+Aml *aml_create_field(Aml *srcbuf, Aml *index, Aml *len, const char *name);
 Aml *aml_create_dword_field(Aml *srcbuf, Aml *index, const char *name);
 Aml *aml_create_qword_field(Aml *srcbuf, Aml *index, const char *name);
 Aml *aml_varpackage(uint32_t num_elements);
@@ -351,6 +352,7 @@ Aml *aml_touuid(const char *uuid);
 Aml *aml_unicode(const char *str);
 Aml *aml_derefof(Aml *arg);
 Aml *aml_sizeof(Aml *arg);
+Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target);
 
 void
 build_header(GArray *linker, GArray *table_data,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 6/6] nvdimm acpi: emulate dsm method
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-04 18:52   ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-04 18:52 UTC (permalink / raw)
  To: pbonzini, imammedo
  Cc: gleb, mtosatti, stefanha, mst, rth, ehabkost, dan.j.williams,
	kvm, qemu-devel, Xiao Guangrong

Emulate the _DSM method after the IO VM-exit

Currently we only introduce the framework; no function is actually
supported yet

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/aml-build.c         |  2 +-
 hw/acpi/nvdimm.c            | 83 ++++++++++++++++++++++++++++++++++++++++++++-
 include/hw/acpi/aml-build.h |  1 +
 include/hw/mem/nvdimm.h     | 17 ++++++++++
 4 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index e65171f..5a7644a 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -231,7 +231,7 @@ static void build_extop_package(GArray *package, uint8_t op)
     build_prepend_byte(package, 0x5B); /* ExtOpPrefix */
 }
 
-static void build_append_int_noprefix(GArray *table, uint64_t value, int size)
+void build_append_int_noprefix(GArray *table, uint64_t value, int size)
 {
     int i;
 
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index dfccbc0..7be9857 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -390,12 +390,80 @@ typedef struct NvdimmDsmOut NvdimmDsmOut;
 static uint64_t
 nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 {
+    fprintf(stderr, "BUG: we never read _DSM IO Port.\n");
     return 0;
 }
 
 static void
 nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
 {
+    AcpiNVDIMMState *state = opaque;
+    NvdimmDsmIn *in;
+    hwaddr dsm_mem_addr;
+    GArray *out;
+    uint32_t buf_size;
+
+    nvdimm_debug("write address 0x%" HWADDR_PRIx " value 0x%" PRIx64 ".\n",
+                 addr, val);
+
+    if (size != sizeof(uint32_t)) {
+        fprintf(stderr, "BUG: invalid IO bit width %#x.\n", size);
+        return;
+    }
+
+    switch (addr) {
+    case 0:
+        state->low_dsm_mem_addr = val;
+        return;
+    case sizeof(uint32_t):
+        state->high_dsm_mem_addr = val;
+        break;
+    default:
+        fprintf(stderr, "BUG: IO access address 0x%" HWADDR_PRIx " is not"
+                " dword aligned.\n", addr);
+        return;
+    }
+
+    dsm_mem_addr = state->low_dsm_mem_addr;
+    dsm_mem_addr |= (hwaddr)state->high_dsm_mem_addr << (sizeof(uint32_t) *
+                                                        BITS_PER_BYTE);
+    nvdimm_debug("dsm address 0x%" HWADDR_PRIx "\n", dsm_mem_addr);
+
+    /*
+     * The DSM memory is mapped into the guest address space, so a
+     * malicious guest can change its content while we are emulating the
+     * DSM. Avoid this by copying the DSM memory into QEMU-local memory.
+     */
+    in = g_malloc(TARGET_PAGE_SIZE);
+    cpu_physical_memory_read(dsm_mem_addr, in, TARGET_PAGE_SIZE);
+
+    le32_to_cpus(&in->revision);
+    le32_to_cpus(&in->function);
+    le32_to_cpus(&in->handle);
+
+    nvdimm_debug("Revision %#x Handle %#x Function %#x.\n", in->revision,
+                 in->handle, in->function);
+
+    out = g_array_new(false, true /* clear */, 1);
+
+    /*
+     * function 0 is called to inquire what functions are supported by
+     * OSPM
+     */
+    if (in->function == 0) {
+        build_append_int_noprefix(out, 0 /* No function Supported */,
+                                  sizeof(uint8_t));
+    } else {
+        /* No function is supported yet. */
+        build_append_int_noprefix(out, 1 /* Not Supported */,
+                                  sizeof(uint8_t));
+    }
+
+    buf_size = cpu_to_le32(out->len);
+    cpu_physical_memory_write(dsm_mem_addr, &buf_size, sizeof(buf_size));
+    cpu_physical_memory_write(dsm_mem_addr + sizeof(buf_size), out->data,
+                              out->len);
+    g_free(in);
+    g_array_free(out, true);
 }
 
 static const MemoryRegionOps nvdimm_dsm_ops = {
@@ -408,6 +476,17 @@ static const MemoryRegionOps nvdimm_dsm_ops = {
     },
 };
 
+static const VMStateDescription nvdimm_acpi_vmstate = {
+    .name = "nvdimm_acpi_vmstate",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(low_dsm_mem_addr, AcpiNVDIMMState),
+        VMSTATE_UINT32(high_dsm_mem_addr, AcpiNVDIMMState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
                             FWCfgState *fw_cfg, Object *owner)
 {
@@ -419,6 +498,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
     acpi_data_push(state->dsm_mem, TARGET_PAGE_SIZE);
     fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
                     state->dsm_mem->len);
+
+    vmstate_register(NULL, 0, &nvdimm_acpi_vmstate, state);
 }
 
 #define NVDIMM_GET_DSM_MEM      "MEMA"
@@ -430,7 +511,7 @@ static void nvdimm_build_common_dsm(Aml *dev)
     Aml *result_size, *dsm_mem;
     uint8_t byte_list[1];
 
-    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
+    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_SERIALIZED);
     function = aml_arg(2);
     dsm_mem = aml_arg(3);
 
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 6c1816e..2fa8daa 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -354,6 +354,7 @@ Aml *aml_derefof(Aml *arg);
 Aml *aml_sizeof(Aml *arg);
 Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target);
 
+void build_append_int_noprefix(GArray *table, uint64_t value, int size);
 void
 build_header(GArray *linker, GArray *table_data,
              AcpiTableHeader *h, const char *sig, int len, uint8_t rev,
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index d908d6a..5f312a1 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -25,6 +25,14 @@
 
 #include "hw/mem/pc-dimm.h"
 
+#define NVDIMM_DEBUG 0
+#define nvdimm_debug(fmt, ...)                                \
+    do {                                                      \
+        if (NVDIMM_DEBUG) {                                   \
+            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
+        }                                                     \
+    } while (0)
+
 #define TYPE_NVDIMM             "nvdimm"
 
 #define NVDIMM_DSM_MEM_FILE     "etc/acpi/nvdimm-mem"
@@ -38,12 +46,21 @@
  * @is_enabled: detect if NVDIMM support is enabled.
  *
  * @dsm_mem: the data of the fw_cfg file NVDIMM_DSM_MEM_FILE.
+ *
+ * The dsm memory is allocated by BIOS and patched into ACPI binary code.
+ * @low_dsm_mem_addr: the low 32 bits of DSM memory.
+ * @high_dsm_mem_addr: the high 32 bits of DSM memory.
+ *
  * @io_mr: the IO region used by OSPM to transfer control to QEMU.
  */
 struct AcpiNVDIMMState {
     bool is_enabled;
 
     GArray *dsm_mem;
+
+    uint32_t low_dsm_mem_addr;
+    uint32_t high_dsm_mem_addr;
+
     MemoryRegion io_mr;
 };
 typedef struct AcpiNVDIMMState AcpiNVDIMMState;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-06 15:23     ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-06 15:23 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: ehabkost, kvm, mst, gleb, mtosatti, qemu-devel, stefanha,
	pbonzini, dan.j.williams, rth

On Tue,  5 Jan 2016 02:52:05 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> The dsm memory is used to save the input parameters and store
> the dsm result which is filled by QEMU.
> 
> The address of dsm memory is decided by bios and patched into
> int64 object returned by "MEMA" method
> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  hw/acpi/aml-build.c         | 12 ++++++++++++
>  hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
>  include/hw/acpi/aml-build.h |  1 +
>  3 files changed, 35 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 78e1290..83eadb3 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
>  }
>  
>  /*
> + * ACPI 1.0b: 16.2.3 Data Objects Encoding:
> + * encode: QWordConst
> + */
> +Aml *aml_int64(const uint64_t val)
> +{
> +    Aml *var = aml_alloc();
> +    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
> +    build_append_int_noprefix(var->buf, val, 8);
> +    return var;
> +}
> +
> +/*
>   * helper to construct NameString, which returns Aml object
>   * for using with aml_append or other aml_* terms
>   */
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index bc7cd8f..a72104c 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -28,6 +28,7 @@
>  
>  #include "hw/acpi/acpi.h"
>  #include "hw/acpi/aml-build.h"
> +#include "hw/acpi/bios-linker-loader.h"
>  #include "hw/nvram/fw_cfg.h"
>  #include "hw/mem/nvdimm.h"
>  
> @@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>                      state->dsm_mem->len);
>  }
>  
> -#define NVDIMM_COMMON_DSM      "NCAL"
> +#define NVDIMM_GET_DSM_MEM      "MEMA"
> +#define NVDIMM_COMMON_DSM       "NCAL"
>  
>  static void nvdimm_build_common_dsm(Aml *dev)
>  {
> @@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>                                GArray *table_data, GArray *linker,
>                                uint8_t revision)
>  {
> -    Aml *ssdt, *sb_scope, *dev;
> +    Aml *ssdt, *sb_scope, *dev, *method;
> +    int offset;
>  
>      acpi_add_table(table_offsets, table_data);
>  
> @@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>  
>      aml_append(sb_scope, dev);
>  
> +    /*
> +     * leave it at the end of ssdt so that we can conveniently get the
> +     * offset of int64 object returned by the function which will be
> +     * patched with the real address of the dsm memory by BIOS.
> +     */
> +    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
> +    aml_append(method, aml_return(aml_int64(0x0)));
there is no need for a dedicated aml_int64(); you can use the aml_int(0x6400000000) trick
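
(For context: aml_int() chooses the integer encoding from the value itself, so
a constant wider than 32 bits already reserves an 8-byte QWordConst slot. A
simplified sketch of that size selection -- an illustrative helper, not the
actual QEMU code:

```c
#include <stdint.h>

/* Illustrative sketch of the value-dependent encoding aml_int() uses: the
 * prefix byte is chosen from the value, so any constant above 32 bits is
 * emitted as an 8-byte QWordConst -- which is why aml_int(0x6400000000)
 * reserves the same patchable 8-byte slot that a dedicated aml_int64(0)
 * would. */
static int aml_int_encoded_size(uint64_t val)
{
    if (val == 0 || val == 1) {
        return 1;               /* ZeroOp / OneOp, no data bytes */
    } else if (val <= 0xff) {
        return 2;               /* BytePrefix + 1 byte */
    } else if (val <= 0xffff) {
        return 3;               /* WordPrefix + 2 bytes */
    } else if (val <= 0xffffffffULL) {
        return 5;               /* DWordPrefix + 4 bytes */
    }
    return 9;                   /* QWordPrefix + 8 bytes */
}
```
)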

> +    aml_append(sb_scope, method);
>      aml_append(ssdt, sb_scope);
>      /* copy AML table into ACPI tables blob and patch header there */
>      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> +
> +    offset = table_data->len - 8;
> +
> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
> +                             false /* high memory */);
> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
> +                                   NVDIMM_DSM_MEM_FILE, table_data,
> +                                   table_data->data + offset,
> +                                   sizeof(uint64_t));
this offset magic will break badly as soon as someone adds something
to the end of the SSDT.


>      build_header(linker, table_data,
>          (void *)(table_data->data + table_data->len - ssdt->buf->len),
>          "SSDT", ssdt->buf->len, revision, "NVDIMM");
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index ef44d02..b4726a4 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -246,6 +246,7 @@ Aml *aml_name(const char *name_format, ...) GCC_FMT_ATTR(1, 2);
>  Aml *aml_name_decl(const char *name, Aml *val);
>  Aml *aml_return(Aml *val);
>  Aml *aml_int(const uint64_t val);
> +Aml *aml_int64(const uint64_t val);
>  Aml *aml_arg(int pos);
>  Aml *aml_to_integer(Aml *arg);
>  Aml *aml_to_hexstring(Aml *src, Aml *dst);

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-06 15:23     ` [Qemu-devel] " Igor Mammedov
@ 2016-01-06 15:39       ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-06 15:39 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel



On 01/06/2016 11:23 PM, Igor Mammedov wrote:
> On Tue,  5 Jan 2016 02:52:05 +0800
> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>
>> The dsm memory is used to save the input parameters and store
>> the dsm result which is filled by QEMU.
>>
>> The address of dsm memory is decided by bios and patched into
>> int64 object returned by "MEMA" method
>>
>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>> ---
>>   hw/acpi/aml-build.c         | 12 ++++++++++++
>>   hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
>>   include/hw/acpi/aml-build.h |  1 +
>>   3 files changed, 35 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
>> index 78e1290..83eadb3 100644
>> --- a/hw/acpi/aml-build.c
>> +++ b/hw/acpi/aml-build.c
>> @@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
>>   }
>>
>>   /*
>> + * ACPI 1.0b: 16.2.3 Data Objects Encoding:
>> + * encode: QWordConst
>> + */
>> +Aml *aml_int64(const uint64_t val)
>> +{
>> +    Aml *var = aml_alloc();
>> +    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
>> +    build_append_int_noprefix(var->buf, val, 8);
>> +    return var;
>> +}
>> +
>> +/*
>>    * helper to construct NameString, which returns Aml object
>>    * for using with aml_append or other aml_* terms
>>    */
>> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
>> index bc7cd8f..a72104c 100644
>> --- a/hw/acpi/nvdimm.c
>> +++ b/hw/acpi/nvdimm.c
>> @@ -28,6 +28,7 @@
>>
>>   #include "hw/acpi/acpi.h"
>>   #include "hw/acpi/aml-build.h"
>> +#include "hw/acpi/bios-linker-loader.h"
>>   #include "hw/nvram/fw_cfg.h"
>>   #include "hw/mem/nvdimm.h"
>>
>> @@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>>                       state->dsm_mem->len);
>>   }
>>
>> -#define NVDIMM_COMMON_DSM      "NCAL"
>> +#define NVDIMM_GET_DSM_MEM      "MEMA"
>> +#define NVDIMM_COMMON_DSM       "NCAL"
>>
>>   static void nvdimm_build_common_dsm(Aml *dev)
>>   {
>> @@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>                                 GArray *table_data, GArray *linker,
>>                                 uint8_t revision)
>>   {
>> -    Aml *ssdt, *sb_scope, *dev;
>> +    Aml *ssdt, *sb_scope, *dev, *method;
>> +    int offset;
>>
>>       acpi_add_table(table_offsets, table_data);
>>
>> @@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>
>>       aml_append(sb_scope, dev);
>>
>> +    /*
>> +     * leave it at the end of ssdt so that we can conveniently get the
>> +     * offset of int64 object returned by the function which will be
>> +     * patched with the real address of the dsm memory by BIOS.
>> +     */
>> +    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
>> +    aml_append(method, aml_return(aml_int64(0x0)));
> there is no need in dedicated aml_int64(), you can use aml_int(0x6400000000) trick

We cannot do this because of the trick in bios_linker_loader_add_pointer(), which
issues a COMMAND_ADD_POINTER to the BIOS; that request does the following:
         /*
          * COMMAND_ADD_POINTER - patch the table (originating from
          * @dest_file) at @pointer.offset, by adding a pointer to the table
          * originating from @src_file. 1,2,4 or 8 byte unsigned
          * addition is used depending on @pointer.size.
          */

That means the new value = the old value + the address of the new table
allocated by the BIOS.

So we expect a 0 value here.
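
A minimal model of that addition semantics (hypothetical helper, not the actual
firmware code; a little-endian host is assumed):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of the firmware side of COMMAND_ADD_POINTER: the
 * 8-byte field at 'offset' in the destination table is read, the base
 * address where the source file was allocated is *added* to it, and the sum
 * is written back.  Because this is an addition rather than a plain store,
 * the AML integer must be encoded as 0 for the method to return exactly the
 * allocated base.  Little-endian host assumed. */
static void add_pointer_le64(uint8_t *table, unsigned offset, uint64_t src_base)
{
    uint64_t v;

    memcpy(&v, table + offset, sizeof(v));
    v += src_base;
    memcpy(table + offset, &v, sizeof(v));
}
```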

>
>> +    aml_append(sb_scope, method);
>>       aml_append(ssdt, sb_scope);
>>       /* copy AML table into ACPI tables blob and patch header there */
>>       g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
>> +
>> +    offset = table_data->len - 8;
>> +
>> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
>> +                             false /* high memory */);
>> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
>> +                                   NVDIMM_DSM_MEM_FILE, table_data,
>> +                                   table_data->data + offset,
>> +                                   sizeof(uint64_t));
> this offset magic will break badly as soon as someone add something
> to the end of SSDT.
>

Yes, it would, so don't do that. :) This is why we added the comment here:
  +    /*
  +     * leave it at the end of ssdt so that we can conveniently get the
  +     * offset of int64 object returned by the function which will be
  +     * patched with the real address of the dsm memory by BIOS.
  +     */



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-06 15:39       ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-07 11:04         ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-07 11:04 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel

On Wed, 6 Jan 2016 23:39:04 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> On 01/06/2016 11:23 PM, Igor Mammedov wrote:
> > On Tue,  5 Jan 2016 02:52:05 +0800
> > Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> >  
> >> The dsm memory is used to save the input parameters and store
> >> the dsm result which is filled by QEMU.
> >>
> >> The address of dsm memory is decided by bios and patched into
> >> int64 object returned by "MEMA" method
> >>
> >> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> >> ---
> >>   hw/acpi/aml-build.c         | 12 ++++++++++++
> >>   hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
> >>   include/hw/acpi/aml-build.h |  1 +
> >>   3 files changed, 35 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> >> index 78e1290..83eadb3 100644
> >> --- a/hw/acpi/aml-build.c
> >> +++ b/hw/acpi/aml-build.c
> >> @@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
> >>   }
> >>
> >>   /*
> >> + * ACPI 1.0b: 16.2.3 Data Objects Encoding:
> >> + * encode: QWordConst
> >> + */
> >> +Aml *aml_int64(const uint64_t val)
> >> +{
> >> +    Aml *var = aml_alloc();
> >> +    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
> >> +    build_append_int_noprefix(var->buf, val, 8);
> >> +    return var;
> >> +}
> >> +
> >> +/*
> >>    * helper to construct NameString, which returns Aml object
> >>    * for using with aml_append or other aml_* terms
> >>    */
> >> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> >> index bc7cd8f..a72104c 100644
> >> --- a/hw/acpi/nvdimm.c
> >> +++ b/hw/acpi/nvdimm.c
> >> @@ -28,6 +28,7 @@
> >>
> >>   #include "hw/acpi/acpi.h"
> >>   #include "hw/acpi/aml-build.h"
> >> +#include "hw/acpi/bios-linker-loader.h"
> >>   #include "hw/nvram/fw_cfg.h"
> >>   #include "hw/mem/nvdimm.h"
> >>
> >> @@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> >>                       state->dsm_mem->len);
> >>   }
> >>
> >> -#define NVDIMM_COMMON_DSM      "NCAL"
> >> +#define NVDIMM_GET_DSM_MEM      "MEMA"
> >> +#define NVDIMM_COMMON_DSM       "NCAL"
> >>
> >>   static void nvdimm_build_common_dsm(Aml *dev)
> >>   {
> >> @@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
> >>                                 GArray *table_data, GArray *linker,
> >>                                 uint8_t revision)
> >>   {
> >> -    Aml *ssdt, *sb_scope, *dev;
> >> +    Aml *ssdt, *sb_scope, *dev, *method;
> >> +    int offset;
> >>
> >>       acpi_add_table(table_offsets, table_data);
> >>
> >> @@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
> >>
> >>       aml_append(sb_scope, dev);
> >>
> >> +    /*
> >> +     * leave it at the end of ssdt so that we can conveniently get the
> >> +     * offset of int64 object returned by the function which will be
> >> +     * patched with the real address of the dsm memory by BIOS.
> >> +     */
> >> +    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
> >> +    aml_append(method, aml_return(aml_int64(0x0)));  
> > there is no need in dedicated aml_int64(), you can use aml_int(0x6400000000) trick  
> 
> We can not do this due to the trick in  bios_linker_loader_add_pointer() which will
> issue a COMMAND_ADD_POINTER to BIOS, however, this request does:
>          /*
>           * COMMAND_ADD_POINTER - patch the table (originating from
>           * @dest_file) at @pointer.offset, by adding a pointer to the table
>           * originating from @src_file. 1,2,4 or 8 byte unsigned
>           * addition is used depending on @pointer.size.
>           */
> 
> that means the new-offset = old-offset + the address of the new table allocated by BIOS.
> 
> So we expect 0 offset here.
Perhaps I'm blind, but I don't see bios_linker_loader_add_pointer() using the
value stored by aml_int() in any way; see below.

> 
> >  
> >> +    aml_append(sb_scope, method);
> >>       aml_append(ssdt, sb_scope);
> >>       /* copy AML table into ACPI tables blob and patch header there */
> >>       g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> >> +
> >> +    offset = table_data->len - 8;
> >> +
> >> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
> >> +                             false /* high memory */);
> >> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
> >> +                                   NVDIMM_DSM_MEM_FILE, table_data,
> >> +                                   table_data->data + offset,
Here table_data->data + offset is a pointer to the first byte of the aml_int() value.

then bios_linker_loader_add_pointer(GArray *linker,
                                    const char *dest_file,
                                    const char *src_file,
                                    GArray *table, void *pointer,
                                    uint8_t pointer_size)
{
    ...
    size_t offset = (gchar *)pointer - table->data; 
    ...
    entry.pointer.offset = cpu_to_le32(offset);
    ...
}

The 'pointer' argument that is passed to bios_linker_loader_add_pointer()
isn't dereferenced to access the aml_int() value.

> >> +                                   sizeof(uint64_t));
Also, it looks like there is a bug somewhere in the linker: it patches
only the lower 32 bits of the pointed-to value even though it has been told
that the pointer size is 64 bits.
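
The suspected truncation can be modelled as follows (illustrative only, not
the actual linker code):

```c
#include <stdint.h>

/* Sketch of the suspected bug: if the linker writes back only the low
 * 4 bytes of an 8-byte pointer field, any allocation base at or above
 * 4 GiB is silently truncated. */
static uint64_t patch_low32_only(uint64_t field, uint64_t base)
{
    uint32_t low = (uint32_t)(field + base);   /* only 32 bits written back */

    return (field & 0xffffffff00000000ULL) | low;
}
```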

  
> > this offset magic will break badly as soon as someone add something
> > to the end of SSDT.
> >  
> 
> Yes, it is, so don't do that, :) and this is why we made the comment here:
>   +    /*
>   +     * leave it at the end of ssdt so that we can conveniently get the
>   +     * offset of int64 object returned by the function which will be
>   +     * patched with the real address of the dsm memory by BIOS.
>   +     */
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Qemu-devel] [PATCH 0/6] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
  2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-07 14:13   ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-07 14:13 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: pbonzini, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth

On Tue,  5 Jan 2016 02:52:02 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> This patchset is against commit 5530427f0ca (acpi: extend aml_and() to
> accept target argument) on pci branch of Michael's git tree
> and can be found at:
>       https://github.com/xiaogr/qemu.git nvdimm-acpi-v1
> 
> This is the second part of the vNVDIMM implementation; it implements the
> BIOS-patched DSM memory and introduces the framework that allows QEMU
> to emulate the DSM method.
> 
> Thanks to Michael's idea, we do not reserve any memory for NVDIMM ACPI;
> instead we let the BIOS allocate the memory and patch the address into
> the offset we want.
> 
> The IO port is still enabled, as it serves as the way to notify QEMU and
> to pass the patched DSM memory address. The IO port region 0x0a18 - 0x0a20
> is therefore reserved; it is divided into two 32-bit ports used to pass
> the low 32 bits and high 32 bits of the DSM memory address to QEMU.
> 
> Thanks to Igor's idea, this patchset also extends the DSDT/SSDT to
> revision 2 in order to use 64-bit operations. To keep compatibility, old
> machine versions (<= 2.5) still use revision 1. Since 64-bit operations
> break old guests (such as Windows XP), we keep the 64-bit stuff in a
> private place that common ACPI operations do not touch.
> 

general notes:
1. Could you split out the AML API additions/changes into separate patches?
   Even if the series' nvdimm patches can't be accepted on the next respin,
   the AML API patches could be picked up on their own just for API
   completeness. That would also make them easier to review and reduce
   the count of patches you'd need to respin.
2. Add a test case for the NVDIMM table blob at the beginning of the
   series; see tests/bios-tables-test.c.
3. "make V=1 check" will show you the ASL diff your patches introduce;
   it will save you from booting a real guest and dumping/decompiling
   tables manually.
4. At the end of the series, add an expected test blob with the new NVDIMM
   table; you can use tests/acpi-test-data/rebuild-expected-aml.sh to make it.
5. If make check by some miracle passes with these patches, dump the
   NVDIMM table in the guest and try to decompile and then compile it
   back with IASL; it will show you what needs to be fixed.

PS:
 by "NVDIMM table" I mean the NVDIMM SSDT table.

> Igor Mammedov (1):
>   pc: acpi: bump DSDT/SSDT compliance revision to v2
> 
> Xiao Guangrong (5):
>   nvdimm acpi: initialize the resource used by NVDIMM ACPI
>   nvdimm acpi: introduce patched dsm memory
>   acpi: allow using acpi named offset for OperationRegion
>   nvdimm acpi: let qemu handle _DSM method
>   nvdimm acpi: emulate dsm method
> 
>  hw/acpi/Makefile.objs       |   2 +-
>  hw/acpi/aml-build.c         |  45 +++++++-
>  hw/acpi/ich9.c              |  32 +++++
>  hw/acpi/nvdimm.c            | 276 ++++++++++++++++++++++++++++++++++++++++++--
>  hw/acpi/piix4.c             |   3 +
>  hw/i386/acpi-build.c        |  41 ++++---
>  hw/i386/pc.c                |   8 +-
>  hw/i386/pc_piix.c           |   5 +
>  hw/i386/pc_q35.c            |   8 +-
>  include/hw/acpi/aml-build.h |   6 +-
>  include/hw/acpi/ich9.h      |   2 +
>  include/hw/i386/pc.h        |  19 ++-
>  include/hw/mem/nvdimm.h     |  44 ++++++-
>  13 files changed, 449 insertions(+), 42 deletions(-)
> 



* Re: [PATCH 5/6] nvdimm acpi: let qemu handle _DSM method
  2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-07 14:22     ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-07 14:22 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel

On Tue,  5 Jan 2016 02:52:07 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> If the DSM memory is successfully patched, we let QEMU fully emulate
> the DSM method.
> 
> This patch saves the _DSM input parameters into the DSM memory, tells
> the DSM memory address to QEMU, then fetches the result from the DSM
> memory.
You also need to add an NVDR._CRS method that reports the resources
used by the operation regions.

NVDIMM_COMMON_DSM should probably be serialized; otherwise there is a
risk of a race when several callers write to the control region
concurrently.
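A minimal ASL sketch of that suggestion (the method name is taken from the patch; the body is elided): declaring the common helper as Serialized makes the AML interpreter queue concurrent callers instead of letting them interleave accesses to the shared DSM page.

```
Method (NCAL, 4, Serialized)
{
    // saves Arg0..Arg3 into the shared DSM page, kicks QEMU via the
    // notification ports, then reads the result back - none of which
    // may be interleaved between two callers
    ...
}
```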


> 
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  hw/acpi/aml-build.c         |  27 ++++++++++
>  hw/acpi/nvdimm.c            | 124 ++++++++++++++++++++++++++++++++++++++++++--
>  include/hw/acpi/aml-build.h |   2 +
>  3 files changed, 150 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 677c1a6..e65171f 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -1013,6 +1013,19 @@ Aml *create_field_common(int opcode, Aml *srcbuf, Aml *index, const char *name)
>      return var;
>  }
>  
> +/* ACPI 1.0b: 16.2.5.2 Named Objects Encoding: DefCreateField */
> +Aml *aml_create_field(Aml *srcbuf, Aml *index, Aml *len, const char *name)
> +{
> +    Aml *var = aml_alloc();
> +    build_append_byte(var->buf, 0x5B); /* ExtOpPrefix */
> +    build_append_byte(var->buf, 0x13); /* CreateFieldOp */
> +    aml_append(var, srcbuf);
> +    aml_append(var, index);
> +    aml_append(var, len);
> +    build_append_namestring(var->buf, "%s", name);
> +    return var;
> +}
> +
>  /* ACPI 1.0b: 16.2.5.2 Named Objects Encoding: DefCreateDWordField */
>  Aml *aml_create_dword_field(Aml *srcbuf, Aml *index, const char *name)
>  {
> @@ -1439,6 +1452,20 @@ Aml *aml_alias(const char *source_object, const char *alias_object)
>      return var;
>  }
>  
> +/* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefConcat */
> +Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target)
> +{
> +    Aml *var = aml_opcode(0x73 /* ConcatOp */);
> +    aml_append(var, source1);
> +    aml_append(var, source2);
> +
> +    if (target) {
> +        aml_append(var, target);
> +    }
> +
> +    return var;
> +}
> +
>  void
>  build_header(GArray *linker, GArray *table_data,
>               AcpiTableHeader *h, const char *sig, int len, uint8_t rev,
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index a72104c..dfccbc0 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -369,6 +369,24 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
>      g_array_free(structures, true);
>  }
>  
> +struct NvdimmDsmIn {
> +    uint32_t handle;
> +    uint32_t revision;
> +    uint32_t function;
> +   /* the remaining size in the page is used by arg3. */
> +    union {
> +        uint8_t arg3[0];
> +    };
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmIn NvdimmDsmIn;
> +
> +struct NvdimmDsmOut {
> +    /* the size of buffer filled by QEMU. */
> +    uint32_t len;
> +    uint8_t data[0];
> +} QEMU_PACKED;
> +typedef struct NvdimmDsmOut NvdimmDsmOut;
> +
>  static uint64_t
>  nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
>  {
> @@ -408,11 +426,21 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>  
>  static void nvdimm_build_common_dsm(Aml *dev)
>  {
> -    Aml *method, *ifctx, *function;
> +    Aml *method, *ifctx, *function, *unpatched, *field, *high_dsm_mem;
> +    Aml *result_size, *dsm_mem;
>      uint8_t byte_list[1];
>  
>      method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
>      function = aml_arg(2);
> +    dsm_mem = aml_arg(3);
> +
> +    aml_append(method, aml_store(aml_call0(NVDIMM_GET_DSM_MEM), dsm_mem));
> +
> +    /*
> +     * do not support any method if DSM memory address has not been
> +     * patched.
> +     */
> +    unpatched = aml_if(aml_equal(dsm_mem, aml_int64(0x0)));
>  
>      /*
>       * function 0 is called to inquire what functions are supported by
> @@ -421,12 +449,102 @@ static void nvdimm_build_common_dsm(Aml *dev)
>      ifctx = aml_if(aml_equal(function, aml_int(0)));
>      byte_list[0] = 0 /* No function Supported */;
>      aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
> -    aml_append(method, ifctx);
> +    aml_append(unpatched, ifctx);
>  
>      /* No function is supported yet. */
>      byte_list[0] = 1 /* Not Supported */;
> -    aml_append(method, aml_return(aml_buffer(1, byte_list)));
> +    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
> +    aml_append(method, unpatched);
> +
> +    /* map DSM memory and IO into ACPI namespace. */
> +    aml_append(method, aml_operation_region("NPIO", AML_SYSTEM_IO,
> +               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
> +    aml_append(method, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
> +                                            dsm_mem, TARGET_PAGE_SIZE));
> +
> +    /*
> +     * DSM notifier:
> +     * LNTF: write the low 32 bits of DSM memory.
> +     * HNTF: write the high 32 bits of DSM memory and notify QEMU to
> +     *       emulate the access.
> +     *
> +     * They are IO ports so that accessing them will cause VM-exit, the
> +     * control will be transferred to QEMU.
> +     */
> +    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("LNTF",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("HNTF",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(method, field);
>  
> +    /*
> +     * DSM input:
> +     * @HDLE: store device's handle, it's zero if the _DSM call happens
> +     *        on NVDIMM Root Device.
> +     * @REVS: store the Arg1 of _DSM call.
> +     * @FUNC: store the Arg2 of _DSM call.
> +     * @ARG3: store the Arg3 of _DSM call.
> +     *
> +     * They are RAM mapping on host so that these accesses never cause
> +     * VM-EXIT.
> +     */
> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("HDLE",
> +               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("REVS",
> +               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("FUNC",
> +               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("ARG3",
> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
> +                     BITS_PER_BYTE));
> +    aml_append(method, field);
> +
> +    /*
> +     * DSM output:
> +     * @RLEN: the size of the buffer filled by QEMU.
> +     * @ODAT: the buffer QEMU uses to store the result.
> +     *
> +     * Since the page is reused by both input and out, the input data
> +     * will be lost after storing new result into @ODAT.
> +    */
> +    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("RLEN",
> +               sizeof(typeof_field(NvdimmDsmOut, len)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("ODAT",
> +               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmOut, data)) *
> +                     BITS_PER_BYTE));
> +    aml_append(method, field);
> +
> +    /*
> +     * Currently no function is supported for both root device and NVDIMM
> +     * devices, let's temporarily set handle to 0x0 at this time.
> +     */
> +    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
> +    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
> +    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
> +
> +    /*
> +     * tell QEMU about the real address of DSM memory, then QEMU begins
> +     * to emulate the method and fills the result to DSM memory.
> +     */
> +    aml_append(method, aml_store(dsm_mem, aml_name("LNTF")));
> +    high_dsm_mem = aml_shiftright(dsm_mem,
> +                                  aml_int(sizeof(uint32_t) * BITS_PER_BYTE),
> +                                  NULL);
> +    aml_append(method, aml_store(high_dsm_mem, aml_name("HNTF")));
> +
> +    result_size = aml_local(1);
> +    aml_append(method, aml_store(aml_name("RLEN"), result_size));
> +    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
> +                                 result_size));
> +    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
> +                                        result_size, "OBUF"));
> +    aml_append(method, aml_name_decl("ZBUF", aml_buffer(0, NULL)));
> +    aml_append(method, aml_concatenate(aml_name("ZBUF"), aml_name("OBUF"),
> +                                       aml_arg(6)));
> +    aml_append(method, aml_return(aml_arg(6)));
>      aml_append(dev, method);
>  }
>  
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index a8d8f3b..6c1816e 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -344,6 +344,7 @@ Aml *aml_mutex(const char *name, uint8_t sync_level);
>  Aml *aml_acquire(Aml *mutex, uint16_t timeout);
>  Aml *aml_release(Aml *mutex);
>  Aml *aml_alias(const char *source_object, const char *alias_object);
> +Aml *aml_create_field(Aml *srcbuf, Aml *index, Aml *len, const char *name);
>  Aml *aml_create_dword_field(Aml *srcbuf, Aml *index, const char *name);
>  Aml *aml_create_qword_field(Aml *srcbuf, Aml *index, const char *name);
>  Aml *aml_varpackage(uint32_t num_elements);
> @@ -351,6 +352,7 @@ Aml *aml_touuid(const char *uuid);
>  Aml *aml_unicode(const char *str);
>  Aml *aml_derefof(Aml *arg);
>  Aml *aml_sizeof(Aml *arg);
> +Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target);
>  
>  void
>  build_header(GArray *linker, GArray *table_data,



* Re: [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-07 11:04         ` [Qemu-devel] " Igor Mammedov
@ 2016-01-08  3:40           ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-08  3:40 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel



On 01/07/2016 07:04 PM, Igor Mammedov wrote:
> On Wed, 6 Jan 2016 23:39:04 +0800
> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>
>> On 01/06/2016 11:23 PM, Igor Mammedov wrote:
>>> On Tue,  5 Jan 2016 02:52:05 +0800
>>> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>>>
>>>> The dsm memory is used to save the input parameters and store
>>>> the dsm result which is filled by QEMU.
>>>>
>>>> The address of dsm memory is decided by bios and patched into
>>>> int64 object returned by "MEMA" method
>>>>
>>>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>>>> ---
>>>>    hw/acpi/aml-build.c         | 12 ++++++++++++
>>>>    hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
>>>>    include/hw/acpi/aml-build.h |  1 +
>>>>    3 files changed, 35 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
>>>> index 78e1290..83eadb3 100644
>>>> --- a/hw/acpi/aml-build.c
>>>> +++ b/hw/acpi/aml-build.c
>>>> @@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
>>>>    }
>>>>
>>>>    /*
>>>> + * ACPI 1.0b: 16.2.3 Data Objects Encoding:
>>>> + * encode: QWordConst
>>>> + */
>>>> +Aml *aml_int64(const uint64_t val)
>>>> +{
>>>> +    Aml *var = aml_alloc();
>>>> +    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
>>>> +    build_append_int_noprefix(var->buf, val, 8);
>>>> +    return var;
>>>> +}
>>>> +
>>>> +/*
>>>>     * helper to construct NameString, which returns Aml object
>>>>     * for using with aml_append or other aml_* terms
>>>>     */
>>>> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
>>>> index bc7cd8f..a72104c 100644
>>>> --- a/hw/acpi/nvdimm.c
>>>> +++ b/hw/acpi/nvdimm.c
>>>> @@ -28,6 +28,7 @@
>>>>
>>>>    #include "hw/acpi/acpi.h"
>>>>    #include "hw/acpi/aml-build.h"
>>>> +#include "hw/acpi/bios-linker-loader.h"
>>>>    #include "hw/nvram/fw_cfg.h"
>>>>    #include "hw/mem/nvdimm.h"
>>>>
>>>> @@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>>>>                        state->dsm_mem->len);
>>>>    }
>>>>
>>>> -#define NVDIMM_COMMON_DSM      "NCAL"
>>>> +#define NVDIMM_GET_DSM_MEM      "MEMA"
>>>> +#define NVDIMM_COMMON_DSM       "NCAL"
>>>>
>>>>    static void nvdimm_build_common_dsm(Aml *dev)
>>>>    {
>>>> @@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>>>                                  GArray *table_data, GArray *linker,
>>>>                                  uint8_t revision)
>>>>    {
>>>> -    Aml *ssdt, *sb_scope, *dev;
>>>> +    Aml *ssdt, *sb_scope, *dev, *method;
>>>> +    int offset;
>>>>
>>>>        acpi_add_table(table_offsets, table_data);
>>>>
>>>> @@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
>>>>
>>>>        aml_append(sb_scope, dev);
>>>>
>>>> +    /*
>>>> +     * leave it at the end of ssdt so that we can conveniently get the
>>>> +     * offset of int64 object returned by the function which will be
>>>> +     * patched with the real address of the dsm memory by BIOS.
>>>> +     */
>>>> +    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
>>>> +    aml_append(method, aml_return(aml_int64(0x0)));
>>> there is no need in dedicated aml_int64(), you can use aml_int(0x6400000000) trick
>>
>> We can not do this due to the trick in  bios_linker_loader_add_pointer() which will
>> issue a COMMAND_ADD_POINTER to BIOS, however, this request does:
>>           /*
>>            * COMMAND_ADD_POINTER - patch the table (originating from
>>            * @dest_file) at @pointer.offset, by adding a pointer to the table
>>            * originating from @src_file. 1,2,4 or 8 byte unsigned
>>            * addition is used depending on @pointer.size.
>>            */
>>
>> that means the new-offset = old-offset + the address of the new table allocated by BIOS.
>>
>> So we expect 0 offset here.
> perhaps I'm blind but I don't see bios_linker_loader_add_pointer() using
> value stored in aml_int() in any way, see below.
>
>>
>>>
>>>> +    aml_append(sb_scope, method);
>>>>        aml_append(ssdt, sb_scope);
>>>>        /* copy AML table into ACPI tables blob and patch header there */
>>>>        g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
>>>> +
>>>> +    offset = table_data->len - 8;
>>>> +
>>>> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
>>>> +                             false /* high memory */);
>>>> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
>>>> +                                   NVDIMM_DSM_MEM_FILE, table_data,
>>>> +                                   table_data->data + offset,
> here table_data->data + offset is a pointer to the first byte of aml_int() value.
>
> then bios_linker_loader_add_pointer(GArray *linker,
>                                      const char *dest_file,
>                                      const char *src_file,
>                                      GArray *table, void *pointer,
>                                      uint8_t pointer_size)
> {
>      ...
>      size_t offset = (gchar *)pointer - table->data;
>      ...
>      entry.pointer.offset = cpu_to_le32(offset);
>      ...
> }
>
> 'pointer' argument that is passed to bios_linker_loader_add_pointer()
> isn't dereferenced to access aml_int() value.

The story is in the BIOS handling; please refer to the function
romfile_loader_add_pointer() in src/fw/romfile_loader.c of SeaBIOS:

     memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
     pointer = le64_to_cpu(pointer);
     pointer += (unsigned long)src_file->data;
     pointer = cpu_to_le64(pointer);
     memcpy(dest_file->data + offset, &pointer, entry->pointer_size);
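
Put together, the firmware-side patching described above can be modeled by a
small stand-alone sketch (hypothetical code, not the actual SeaBIOS
implementation; it assumes a little-endian host, which is what
le64_to_cpu()/cpu_to_le64() take care of in the real code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Model of COMMAND_ADD_POINTER: read pointer_size bytes at dest + offset,
 * add the base address of the freshly allocated src blob, and write the
 * sum back.  The value pre-stored in the AML Integer thus acts as an
 * offset into src_file -- which is why the patch stores 0 there. */
static void add_pointer(uint8_t *dest, size_t offset,
                        uint64_t src_base, unsigned pointer_size)
{
    uint64_t pointer = 0;

    memcpy(&pointer, dest + offset, pointer_size); /* little-endian host */
    pointer += src_base;
    memcpy(dest + offset, &pointer, pointer_size);
}
```

With a pre-stored value of 0, the patched field ends up holding exactly the
base address of the BIOS-allocated DSM buffer.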

>
>>>> +                                   sizeof(uint64_t));
> also it looks like there is bug somewhere in linker as it patches
> only lower 32 bits of pointed value even though it has been told that
> pointer size is 64bit.

Do you mean that the BIOS-allocated memory is always below 4G?
Yes, that is true in current QEMU, as it uses a u32 to represent the
memory address; however, I did not check the implementation in OVMF.

Can we assume that the BIOS-allocated address is always 32 bits in QEMU?
I see no spec/comment that guarantees this, and it is completely
dependent on the BIOS's implementation, so I made QEMU 64-bit address
aware.
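
As a toy illustration of the concern (hypothetical code, not from QEMU):
storing a 64-bit firmware-allocated address in a 32-bit variable silently
drops the high bits, which is exactly what being 64-bit address aware avoids:

```c
#include <assert.h>
#include <stdint.h>

/* An address above 4G loses its high bits when squeezed into a u32. */
static uint32_t store_as_u32(uint64_t addr)
{
    return (uint32_t)addr; /* truncation: high 32 bits are discarded */
}
```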



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] nvdimm acpi: let qemu handle _DSM method
  2016-01-07 14:22     ` [Qemu-devel] " Igor Mammedov
@ 2016-01-08  4:01       ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-08  4:01 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: ehabkost, kvm, mst, gleb, mtosatti, qemu-devel, stefanha,
	pbonzini, dan.j.williams, rth



On 01/07/2016 10:22 PM, Igor Mammedov wrote:
> On Tue,  5 Jan 2016 02:52:07 +0800
> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>
>> If dsm memory is successfully patched, we let qemu fully emulate
>> the dsm method
>>
>> This patch saves _DSM input parameters into dsm memory, tell dsm
>> memory address to QEMU, then fetch the result from the dsm memory
> you also need to add NVDR._CRS method that would report
> resources used by operation regions.

I cannot understand this point. Why do we need to report the resources
of the OperationRegion? It is used internally by ACPI anyway.

>
> NVDIMM_COMMON_DSM - probably should be serialized, otherwise
> there is a race risk, when several callers would write to
> control region.

Yes, I did it in patch 6/6, but I should definitely move it here.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/6] nvdimm acpi: let qemu handle _DSM method
  2016-01-08  4:01       ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-08 16:08         ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-08 16:08 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel

On Fri, 8 Jan 2016 12:01:54 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> On 01/07/2016 10:22 PM, Igor Mammedov wrote:
> > On Tue,  5 Jan 2016 02:52:07 +0800
> > Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> >  
> >> If dsm memory is successfully patched, we let qemu fully emulate
> >> the dsm method
> >>
> >> This patch saves _DSM input parameters into dsm memory, tell dsm
> >> memory address to QEMU, then fetch the result from the dsm memory  
> > you also need to add NVDR._CRS method that would report
> > resources used by operation regions.  
> 
> I cannot understand this point. Why do we need to report the resources
> of the OperationRegion? It is used internally by ACPI anyway.
so that the OSPM can see that this particular range is in use,
and is able to notice conflicts if any happen some day.
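
For reference, the kind of _CRS being asked for might look roughly like the
following ASL fragment (a hedged sketch only, not from the patchset; base and
length are placeholders and would have to match the actual operation region):

```
// Hypothetical sketch: declare the DSM control region in NVDR._CRS so
// the OSPM knows the range is claimed.
Device (NVDR)
{
    Name (_CRS, ResourceTemplate ()
    {
        Memory32Fixed (ReadWrite,
            0x00000000,         // placeholder base, patched by firmware
            0x00001000)         // placeholder length (one page)
    })
}
```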

> 
> >
> > NVDIMM_COMMON_DSM - probably should be serialized, otherwise
> > there is a race risk, when several callers would write to
> > control region.  
> 
> Yes, I did it in patch 6/6, but I should definitely move it here.
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/6] nvdimm acpi: introduce patched dsm memory
  2016-01-08  3:40           ` [Qemu-devel] " Xiao Guangrong
@ 2016-01-08 17:06             ` Igor Mammedov
  -1 siblings, 0 replies; 34+ messages in thread
From: Igor Mammedov @ 2016-01-08 17:06 UTC (permalink / raw)
  To: Xiao Guangrong
  Cc: pbonzini, gleb, mtosatti, stefanha, mst, rth, ehabkost,
	dan.j.williams, kvm, qemu-devel

On Fri, 8 Jan 2016 11:40:53 +0800
Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> On 01/07/2016 07:04 PM, Igor Mammedov wrote:
> > On Wed, 6 Jan 2016 23:39:04 +0800
> > Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> >  
> >> On 01/06/2016 11:23 PM, Igor Mammedov wrote:  
> >>> On Tue,  5 Jan 2016 02:52:05 +0800
> >>> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
> >>>  
> >>>> The dsm memory is used to save the input parameters and store
> >>>> the dsm result which is filled by QEMU.
> >>>>
> >>>> The address of dsm memory is decided by bios and patched into
> >>>> int64 object returned by "MEMA" method
> >>>>
> >>>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> >>>> ---
> >>>>    hw/acpi/aml-build.c         | 12 ++++++++++++
> >>>>    hw/acpi/nvdimm.c            | 24 ++++++++++++++++++++++--
> >>>>    include/hw/acpi/aml-build.h |  1 +
> >>>>    3 files changed, 35 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> >>>> index 78e1290..83eadb3 100644
> >>>> --- a/hw/acpi/aml-build.c
> >>>> +++ b/hw/acpi/aml-build.c
> >>>> @@ -394,6 +394,18 @@ Aml *aml_int(const uint64_t val)
> >>>>    }
> >>>>
> >>>>    /*
> >>>> + * ACPI 1.0b: 16.2.3 Data Objects Encoding:
> >>>> + * encode: QWordConst
> >>>> + */
> >>>> +Aml *aml_int64(const uint64_t val)
> >>>> +{
> >>>> +    Aml *var = aml_alloc();
> >>>> +    build_append_byte(var->buf, 0x0E); /* QWordPrefix */
> >>>> +    build_append_int_noprefix(var->buf, val, 8);
> >>>> +    return var;
> >>>> +}
> >>>> +
> >>>> +/*
> >>>>     * helper to construct NameString, which returns Aml object
> >>>>     * for using with aml_append or other aml_* terms
> >>>>     */
> >>>> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> >>>> index bc7cd8f..a72104c 100644
> >>>> --- a/hw/acpi/nvdimm.c
> >>>> +++ b/hw/acpi/nvdimm.c
> >>>> @@ -28,6 +28,7 @@
> >>>>
> >>>>    #include "hw/acpi/acpi.h"
> >>>>    #include "hw/acpi/aml-build.h"
> >>>> +#include "hw/acpi/bios-linker-loader.h"
> >>>>    #include "hw/nvram/fw_cfg.h"
> >>>>    #include "hw/mem/nvdimm.h"
> >>>>
> >>>> @@ -402,7 +403,8 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
> >>>>                        state->dsm_mem->len);
> >>>>    }
> >>>>
> >>>> -#define NVDIMM_COMMON_DSM      "NCAL"
> >>>> +#define NVDIMM_GET_DSM_MEM      "MEMA"
> >>>> +#define NVDIMM_COMMON_DSM       "NCAL"
> >>>>
> >>>>    static void nvdimm_build_common_dsm(Aml *dev)
> >>>>    {
> >>>> @@ -468,7 +470,8 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
> >>>>                                  GArray *table_data, GArray *linker,
> >>>>                                  uint8_t revision)
> >>>>    {
> >>>> -    Aml *ssdt, *sb_scope, *dev;
> >>>> +    Aml *ssdt, *sb_scope, *dev, *method;
> >>>> +    int offset;
> >>>>
> >>>>        acpi_add_table(table_offsets, table_data);
> >>>>
> >>>> @@ -499,9 +502,26 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
> >>>>
> >>>>        aml_append(sb_scope, dev);
> >>>>
> >>>> +    /*
> >>>> +     * leave it at the end of ssdt so that we can conveniently get the
> >>>> +     * offset of int64 object returned by the function which will be
> >>>> +     * patched with the real address of the dsm memory by BIOS.
> >>>> +     */
> >>>> +    method = aml_method(NVDIMM_GET_DSM_MEM, 0, AML_NOTSERIALIZED);
> >>>> +    aml_append(method, aml_return(aml_int64(0x0)));  
> >>> there is no need in dedicated aml_int64(), you can use aml_int(0x6400000000) trick  
> >>
> >> We can not do this due to the trick in  bios_linker_loader_add_pointer() which will
> >> issue a COMMAND_ADD_POINTER to BIOS, however, this request does:
> >>           /*
> >>            * COMMAND_ADD_POINTER - patch the table (originating from
> >>            * @dest_file) at @pointer.offset, by adding a pointer to the table
> >>            * originating from @src_file. 1,2,4 or 8 byte unsigned
> >>            * addition is used depending on @pointer.size.
> >>            */
> >>
> >> that means the new-offset = old-offset + the address of the new table allocated by BIOS.
> >>
> >> So we expect 0 offset here.  
> > perhaps I'm blind but I don't see bios_linker_loader_add_pointer() using
> > value stored in aml_int() in any way, see below.
> >  
> >>  
> >>>  
> >>>> +    aml_append(sb_scope, method);
> >>>>        aml_append(ssdt, sb_scope);
> >>>>        /* copy AML table into ACPI tables blob and patch header there */
> >>>>        g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> >>>> +
> >>>> +    offset = table_data->len - 8;
> >>>> +
> >>>> +    bios_linker_loader_alloc(linker, NVDIMM_DSM_MEM_FILE, TARGET_PAGE_SIZE,
> >>>> +                             false /* high memory */);
> >>>> +    bios_linker_loader_add_pointer(linker, ACPI_BUILD_TABLE_FILE,
> >>>> +                                   NVDIMM_DSM_MEM_FILE, table_data,
> >>>> +                                   table_data->data + offset,  
> > here table_data->data + offset is a pointer to the first byte of aml_int() value.
> >
> > then bios_linker_loader_add_pointer(GArray *linker,
> >                                      const char *dest_file,
> >                                      const char *src_file,
> >                                      GArray *table, void *pointer,
> >                                      uint8_t pointer_size)
> > {
> >      ...
> >      size_t offset = (gchar *)pointer - table->data;
> >      ...
> >      entry.pointer.offset = cpu_to_le32(offset);
> >      ...
> > }
> >
> > 'pointer' argument that is passed to bios_linker_loader_add_pointer()
> > isn't dereferenced to access aml_int() value.  
> 
> The story is in the BIOS handling; please refer to the function
> romfile_loader_add_pointer() in src/fw/romfile_loader.c of SeaBIOS:
> 
>      memcpy(&pointer, dest_file->data + offset, entry->pointer_size);
>      pointer = le64_to_cpu(pointer);
so 'pointer' holds the offset from the start of src_file.
That is quite non-intuitive for users of the
bios_linker_loader_add_pointer() API; I guess it comes from the fact
that the linker API was initially intended for use with fixed tables.

I'd rather fix bios_linker_loader_add_pointer() so that it does not
rely on the contents of the binary blobs it is supposed to patch, and
instead pass the offset from the start of src_file as part of the
COMMAND_ADD_POINTER operation.
Or, if that is hard to do from a compatibility POV with the current
implementation, write the offset into the blob inside
bios_linker_loader_add_pointer() and do not require creators of patched
blobs to know how the linker works internally.

  bios_linker_loader_add_pointer(GArray *linker,
                                      const char *dest_file,
                                      const char *src_file,
                                      GArray *table,
+                                     uint64_t offset_in_src_file,
                                      void *pointer,
                                      uint8_t pointer_size)
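
A minimal stand-alone model of that proposal (hypothetical code, not the
actual QEMU helper): the helper itself writes offset_in_src_file into the
field to be patched, so blob creators no longer need to pre-store it:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Write offset_in_src_file little-endian into the pointer_size-byte field
 * at table + patch_offset.  The firmware later adds the allocated blob's
 * base address to this value, as in COMMAND_ADD_POINTER. */
static void store_src_offset(uint8_t *table, size_t patch_offset,
                             uint64_t offset_in_src_file,
                             unsigned pointer_size)
{
    for (unsigned i = 0; i < pointer_size; i++)
        table[patch_offset + i] = (uint8_t)(offset_in_src_file >> (8 * i));
}
```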

>      pointer += (unsigned long)src_file->data;
that looks wrong: src_file->data is going to be truncated here if the
address is 64-bit.

>      pointer = cpu_to_le64(pointer);
>      memcpy(dest_file->data + offset, &pointer, entry->pointer_size);
> 
> >  
> >>>> +                                   sizeof(uint64_t));  
> > also it looks like there is bug somewhere in linker as it patches
> > only lower 32 bits of pointed value even though it has been told that
> > pointer size is 64bit.  
> 
> Do you mean that the BIOS-allocated memory is always below 4G?
> Yes, that is true in current QEMU, as it uses a u32 to represent the
> memory address; however, I did not check the implementation in OVMF.
> 
> Can we assume that the BIOS-allocated address is always 32 bits in QEMU?
> I see no spec/comment that guarantees this, and it is completely
> dependent on the BIOS's implementation, so I made QEMU 64-bit address
> aware.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Qemu-devel] [PATCH 0/6] NVDIMM ACPI: introduce the framework of QEMU emulated DSM
  2016-01-07 14:13   ` Igor Mammedov
@ 2016-01-12 19:02     ` Xiao Guangrong
  -1 siblings, 0 replies; 34+ messages in thread
From: Xiao Guangrong @ 2016-01-12 19:02 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: pbonzini, ehabkost, kvm, mst, gleb, mtosatti, qemu-devel,
	stefanha, dan.j.williams, rth



On 01/07/2016 10:13 PM, Igor Mammedov wrote:
> On Tue,  5 Jan 2016 02:52:02 +0800
> Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:
>
>> This patchset is against commit 5530427f0ca (acpi: extend aml_and() to
>> accept target argument) on pci branch of Michael's git tree
>> and can be found at:
>>        https://github.com/xiaogr/qemu.git nvdimm-acpi-v1
>>
>> This is the second part of vNVDIMM implementation which implements the
>> BIOS patched dsm memory and introduces the framework that allows QEMU
>> to emulate DSM method
>>
>> Thanks to Michael's idea, we do not reserve any memory for NVDIMM ACPI,
>> instead we let BIOS allocate the memory and patch the address to the
>> offset we want
>>
>> The IO port is still used as the way to notify QEMU and pass
>> the patched dsm memory address: the IO port region 0x0a18 - 0x0a20
>> is reserved and divided into two 32-bit ports used to pass
>> the low 32 bits and high 32 bits of the dsm memory address to QEMU
>>
>> Thanks to Igor's idea, this patchset also extends DSDT/SSDT to revision 2
>> to allow 64-bit operations; in order to keep compatibility, old
>> versions (<= 2.5) still use revision 1. Since 64-bit operations break
>> old guests (such as Windows XP), we keep the 64-bit stuff in
>> a private place that common ACPI operations do not touch
>>
>
> general notes:
> 1. could you split out the AML API additions/changes into separate patches?
>     even if the series' nvdimm patches can't be accepted on the next respin,
>     the AML API patches could be good and we could pick them up just
>     for API completeness. That would also make them easier to review
>     and reduce the number of patches you'd need to respin.

Yes, that is definitely better. I have done it in v2.

> 2. add a test case for the NVDIMM table blob, see tests/bios-tables-test.c,
>     at the beginning of the series.
> 3. 'make V=1 check' will show you the ASL diff your patches introduce;
>     it will save you from booting a real guest and dumping/decompiling
>     tables manually.
> 4. at the end of the series add the NVDIMM table test blob with the new table.
>     you can use tests/acpi-test-data/rebuild-expected-aml.sh to make it
> 5. if make check by some miracle passes with these patches,
>     dump the NVDIMM table in the guest and try to decompile and then compile it
>     back with IASL; it will show you what needs to be fixed.

Igor, I really appreciate the tips you shared with me; they helped me a lot
in developing the new version.

BTW, I did not touch the core bios_linker_loader_add_pointer() API as
I think it can be changed in a separate patchset. In the new version
nvdimm zeroes the offset itself, and aml_int64() has been dropped.

The new version has been posted out, please review.

Thank you!

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2016-01-12 19:09 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-04 18:52 [PATCH 0/6] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Xiao Guangrong
2016-01-04 18:52 ` [Qemu-devel] " Xiao Guangrong
2016-01-04 18:52 ` [PATCH 1/6] pc: acpi: bump DSDT/SSDT compliance revision to v2 Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-04 18:52 ` [PATCH 2/6] nvdimm acpi: initialize the resource used by NVDIMM ACPI Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-04 18:52 ` [PATCH 3/6] nvdimm acpi: introduce patched dsm memory Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-06 15:23   ` Igor Mammedov
2016-01-06 15:23     ` [Qemu-devel] " Igor Mammedov
2016-01-06 15:39     ` Xiao Guangrong
2016-01-06 15:39       ` [Qemu-devel] " Xiao Guangrong
2016-01-07 11:04       ` Igor Mammedov
2016-01-07 11:04         ` [Qemu-devel] " Igor Mammedov
2016-01-08  3:40         ` Xiao Guangrong
2016-01-08  3:40           ` [Qemu-devel] " Xiao Guangrong
2016-01-08 17:06           ` Igor Mammedov
2016-01-08 17:06             ` [Qemu-devel] " Igor Mammedov
2016-01-04 18:52 ` [PATCH 4/6] acpi: allow using acpi named offset for OperationRegion Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-04 18:52 ` [PATCH 5/6] nvdimm acpi: let qemu handle _DSM method Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-07 14:22   ` Igor Mammedov
2016-01-07 14:22     ` [Qemu-devel] " Igor Mammedov
2016-01-08  4:01     ` Xiao Guangrong
2016-01-08  4:01       ` [Qemu-devel] " Xiao Guangrong
2016-01-08 16:08       ` Igor Mammedov
2016-01-08 16:08         ` [Qemu-devel] " Igor Mammedov
2016-01-04 18:52 ` [PATCH 6/6] nvdimm acpi: emulate dsm method Xiao Guangrong
2016-01-04 18:52   ` [Qemu-devel] " Xiao Guangrong
2016-01-07 14:13 ` [Qemu-devel] [PATCH 0/6] NVDIMM ACPI: introduce the framework of QEMU emulated DSM Igor Mammedov
2016-01-07 14:13   ` Igor Mammedov
2016-01-12 19:02   ` Xiao Guangrong
2016-01-12 19:02     ` Xiao Guangrong
