* [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods
@ 2022-05-30  3:40 Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes Robert Hoo
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

(v1 Subject was "acpi/nvdimm: support NVDIMM _LS{I,R,W} methods")

The NVDIMM Label methods were originally defined in the Intel PMEM _DSM
Interface Spec [1], as _DSM function indexes 4, 5 and 6.
The recent ACPI spec [2] has deprecated those _DSM functions in favor of the
ACPI NVDIMM Label Methods _LS{I,R,W}. The essence of these functions is
unchanged.

This patch set updates QEMU's emulation accordingly, updates the
bios-tables-test binaries, and substitutes trace events for nvdimm_debug().

Patches 1 and 5 are the opening and closing parenthesis patches for changes
affecting ACPI tables; for details, see tests/qtest/bios-tables-test.c.
Patch 2 is a trivial fix of aml_or()/aml_and() usage.
Patch 3 allows NVDIMM _DSM revision 2 in.
Patch 4, the main body, implements the virtual _LS{I,R,W} methods and also
generalizes the QEMU <--> ACPI NVDIMM method interface, which paves the way
for implementing other necessary methods in the future, not only _DSM. The
resulting SSDT table changes in ASL can be found in Patch 5's commit message.
Patch 6 defines trace events for acpi/nvdimm and replaces nvdimm_debug().

Tests:
Tested a Linux guest with a recent kernel, 5.18.0-rc4: creating/destroying
namespaces, initializing labels, etc. work as before.
Tested Windows 10 (1607) and Windows Server 2019 guests, but it seems
vNVDIMM in Windows guests has never been supported. Before and after this
patch set, there is no difference in guest boot-up or other functions.

[1] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
[2] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf

---
Change Log:
v2:
Almost rewritten
Separate Patch 2
Dance with tests/qtest/bios-tables-test
Add trace events

Robert Hoo (6):
  tests/acpi: allow SSDT changes
  acpi/ssdt: Fix aml_or() and aml_and() in if clause
  acpi/nvdimm: NVDIMM _DSM Spec supports revision 2
  nvdimm: Implement ACPI NVDIMM Label Methods
  test/acpi/bios-tables-test: SSDT: update standard AML binaries
  acpi/nvdimm: Define trace events for NVDIMM and substitute
    nvdimm_debug()

 hw/acpi/nvdimm.c                 | 434 +++++++++++++++++++++++--------
 hw/acpi/trace-events             |  14 +
 include/hw/mem/nvdimm.h          |  12 +-
 tests/data/acpi/pc/SSDT.dimmpxm  | Bin 734 -> 1829 bytes
 tests/data/acpi/q35/SSDT.dimmpxm | Bin 734 -> 1829 bytes
 5 files changed, 344 insertions(+), 116 deletions(-)


base-commit: 58b53669e87fed0d70903e05cd42079fbbdbc195
-- 
2.31.1




* [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-06-16 11:24   ` Igor Mammedov
  2022-05-30  3:40 ` [QEMU PATCH v2 2/6] acpi/ssdt: Fix aml_or() and aml_and() in if clause Robert Hoo
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 tests/qtest/bios-tables-test-allowed-diff.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index dfb8523c8b..eb8bae1407 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1 +1,3 @@
 /* List of comma-separated changed AML files to ignore */
+"tests/data/acpi/pc/SSDT.dimmpxm",
+"tests/data/acpi/q35/SSDT.dimmpxm",
-- 
2.31.1




* [QEMU PATCH v2 2/6] acpi/ssdt: Fix aml_or() and aml_and() in if clause
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2 Robert Hoo
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

In If conditions, use logical and/or (aml_land()/aml_lor()) rather than
bitwise and/or (aml_and()/aml_or()).

The resulting change in the AML code:

If (((Local6 == Zero) | (Arg0 != Local0)))
==>
If (((Local6 == Zero) || (Arg0 != Local0)))

If (((ObjectType (Arg3) == 0x04) & (SizeOf (Arg3) == One)))
==>
If (((ObjectType (Arg3) == 0x04) && (SizeOf (Arg3) == One)))

Fixes: 90623ebf603 ("nvdimm acpi: check UUID")
Fixes: 4568c948066 ("nvdimm acpi: save arg3 of _DSM method")
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/acpi/nvdimm.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 0d43da19ea..0ab247a870 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -1040,7 +1040,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
 
     uuid_invalid = aml_lnot(aml_equal(uuid, expected_uuid));
 
-    unsupport = aml_if(aml_or(unpatched, uuid_invalid, NULL));
+    unsupport = aml_if(aml_lor(unpatched, uuid_invalid));
 
     /*
      * function 0 is called to inquire what functions are supported by
@@ -1072,10 +1072,9 @@ static void nvdimm_build_common_dsm(Aml *dev,
      * in the DSM Spec.
      */
     pckg = aml_arg(3);
-    ifctx = aml_if(aml_and(aml_equal(aml_object_type(pckg),
+    ifctx = aml_if(aml_land(aml_equal(aml_object_type(pckg),
                    aml_int(4 /* Package */)) /* It is a Package? */,
-                   aml_equal(aml_sizeof(pckg), aml_int(1)) /* 1 element? */,
-                   NULL));
+                   aml_equal(aml_sizeof(pckg), aml_int(1)) /* 1 element? */));
 
     pckg_index = aml_local(2);
     pckg_buf = aml_local(3);
-- 
2.31.1




* [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 2/6] acpi/ssdt: Fix aml_or() and aml_and() in if clause Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-06-16 11:38   ` Igor Mammedov
  2022-05-30  3:40 ` [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods Robert Hoo
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

The Intel Optane PMem DSM Interface, Version 2.0 [1], is the up-to-date
spec for the NVDIMM _DSM definition, and it supports revision_id == 2.

Nevertheless, Rev. 2 of the NVDIMM _DSM makes no functional change to the
Label Data _DSM functions, which are the only ones implemented for vNVDIMM.
So a simple change is enough to support the revision_id == 2 case.

[1] https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 hw/acpi/nvdimm.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 0ab247a870..59b42afcf1 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -849,9 +849,13 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
     nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
                  in->handle, in->function);
 
-    if (in->revision != 0x1 /* Currently we only support DSM Spec Rev1. */) {
-        nvdimm_debug("Revision 0x%x is not supported, expect 0x%x.\n",
-                     in->revision, 0x1);
+    /*
+     * Current NVDIMM _DSM Spec supports Rev1 and Rev2
+     * Intel® Optane Persistent Memory Module DSM Interface, Revision 2.0
+     */
+    if (in->revision != 0x1 && in->revision != 0x2) {
+        nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
+                     in->revision);
         nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
         goto exit;
     }
-- 
2.31.1




* [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
                   ` (2 preceding siblings ...)
  2022-05-30  3:40 ` [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2 Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-06-16 12:32   ` Igor Mammedov
  2022-05-30  3:40 ` [QEMU PATCH v2 5/6] test/acpi/bios-tables-test: SSDT: update golden master binaries Robert Hoo
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

The recent ACPI spec [1] defines the NVDIMM Label Methods _LS{I,R,W},
which deprecate the corresponding _DSM functions defined by the PMEM _DSM
Interface spec [2].

This implementation does 2 things:
1. Generalize the QEMU<->ACPI BIOS NVDIMM interface and wrap it with ACPI
method dispatch; _DSM is one of the branches. This also paves the way for
adding other ACPI methods for NVDIMM.
2. Add the _LS{I,R,W} methods to each NVDIMM device in the SSDT.
The ASL form of the SSDT changes can be found in the next
test/qtest/bios-tables-test commit message.

[1] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
[2] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 hw/acpi/nvdimm.c        | 424 +++++++++++++++++++++++++++++++---------
 include/hw/mem/nvdimm.h |   6 +
 2 files changed, 338 insertions(+), 92 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 59b42afcf1..50ee85866b 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -416,17 +416,22 @@ static void nvdimm_build_nfit(NVDIMMState *state, GArray *table_offsets,
 
 #define NVDIMM_DSM_MEMORY_SIZE      4096
 
-struct NvdimmDsmIn {
+struct NvdimmMthdIn {
     uint32_t handle;
+    uint32_t method;
+    uint8_t  args[4088];
+} QEMU_PACKED;
+typedef struct NvdimmMthdIn NvdimmMthdIn;
+struct NvdimmDsmIn {
     uint32_t revision;
     uint32_t function;
     /* the remaining size in the page is used by arg3. */
     union {
-        uint8_t arg3[4084];
+        uint8_t arg3[4080];
     };
 } QEMU_PACKED;
 typedef struct NvdimmDsmIn NvdimmDsmIn;
-QEMU_BUILD_BUG_ON(sizeof(NvdimmDsmIn) != NVDIMM_DSM_MEMORY_SIZE);
+QEMU_BUILD_BUG_ON(sizeof(NvdimmMthdIn) != NVDIMM_DSM_MEMORY_SIZE);
 
 struct NvdimmDsmOut {
     /* the size of buffer filled by QEMU. */
@@ -470,7 +475,8 @@ struct NvdimmFuncGetLabelDataIn {
 } QEMU_PACKED;
 typedef struct NvdimmFuncGetLabelDataIn NvdimmFuncGetLabelDataIn;
 QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncGetLabelDataIn) +
-                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
+                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
+                  NVDIMM_DSM_MEMORY_SIZE);
 
 struct NvdimmFuncGetLabelDataOut {
     /* the size of buffer filled by QEMU. */
@@ -488,14 +494,16 @@ struct NvdimmFuncSetLabelDataIn {
 } QEMU_PACKED;
 typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
 QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
-                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
+                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
+                  NVDIMM_DSM_MEMORY_SIZE);
 
 struct NvdimmFuncReadFITIn {
     uint32_t offset; /* the offset into FIT buffer. */
 } QEMU_PACKED;
 typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
 QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
-                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
+                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
+                  NVDIMM_DSM_MEMORY_SIZE);
 
 struct NvdimmFuncReadFITOut {
     /* the size of buffer filled by QEMU. */
@@ -636,7 +644,8 @@ static uint32_t nvdimm_get_max_xfer_label_size(void)
      * the max data ACPI can write one time which is transferred by
      * 'Set Namespace Label Data' function.
      */
-    max_set_size = dsm_memory_size - offsetof(NvdimmDsmIn, arg3) -
+    max_set_size = dsm_memory_size - offsetof(NvdimmMthdIn, args) -
+                   offsetof(NvdimmDsmIn, arg3) -
                    sizeof(NvdimmFuncSetLabelDataIn);
 
     return MIN(max_get_size, max_set_size);
@@ -697,16 +706,15 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
 /*
  * DSM Spec Rev1 4.5 Get Namespace Label Data (Function Index 5).
  */
-static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
-                                      hwaddr dsm_mem_addr)
+static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
+                                    NvdimmFuncGetLabelDataIn *get_label_data,
+                                    hwaddr dsm_mem_addr)
 {
     NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
-    NvdimmFuncGetLabelDataIn *get_label_data;
     NvdimmFuncGetLabelDataOut *get_label_data_out;
     uint32_t status;
     int size;
 
-    get_label_data = (NvdimmFuncGetLabelDataIn *)in->arg3;
     get_label_data->offset = le32_to_cpu(get_label_data->offset);
     get_label_data->length = le32_to_cpu(get_label_data->length);
 
@@ -737,15 +745,13 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
 /*
  * DSM Spec Rev1 4.6 Set Namespace Label Data (Function Index 6).
  */
-static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
+static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
+                                      NvdimmFuncSetLabelDataIn *set_label_data,
                                       hwaddr dsm_mem_addr)
 {
     NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
-    NvdimmFuncSetLabelDataIn *set_label_data;
     uint32_t status;
 
-    set_label_data = (NvdimmFuncSetLabelDataIn *)in->arg3;
-
     set_label_data->offset = le32_to_cpu(set_label_data->offset);
     set_label_data->length = le32_to_cpu(set_label_data->length);
 
@@ -760,19 +766,21 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
     }
 
     assert(offsetof(NvdimmDsmIn, arg3) + sizeof(*set_label_data) +
-                    set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE);
+           set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE -
+           offsetof(NvdimmMthdIn, args));
 
     nvc->write_label_data(nvdimm, set_label_data->in_buf,
                           set_label_data->length, set_label_data->offset);
     nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
 }
 
-static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
+static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
+                                    hwaddr dsm_mem_addr)
 {
-    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(in->handle);
+    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
 
     /* See the comments in nvdimm_dsm_root(). */
-    if (!in->function) {
+    if (!dsm_in->function) {
         uint32_t supported_func = 0;
 
         if (nvdimm && nvdimm->label_size) {
@@ -794,7 +802,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
     }
 
     /* Encode DSM function according to DSM Spec Rev1. */
-    switch (in->function) {
+    switch (dsm_in->function) {
     case 4 /* Get Namespace Label Size */:
         if (nvdimm->label_size) {
             nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
@@ -803,13 +811,17 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
         break;
     case 5 /* Get Namespace Label Data */:
         if (nvdimm->label_size) {
-            nvdimm_dsm_get_label_data(nvdimm, in, dsm_mem_addr);
+            nvdimm_dsm_get_label_data(nvdimm,
+                                      (NvdimmFuncGetLabelDataIn *)dsm_in->arg3,
+                                      dsm_mem_addr);
             return;
         }
         break;
     case 0x6 /* Set Namespace Label Data */:
         if (nvdimm->label_size) {
-            nvdimm_dsm_set_label_data(nvdimm, in, dsm_mem_addr);
+            nvdimm_dsm_set_label_data(nvdimm,
+                        (NvdimmFuncSetLabelDataIn *)dsm_in->arg3,
+                        dsm_mem_addr);
             return;
         }
         break;
@@ -819,67 +831,128 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
 }
 
 static uint64_t
-nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
+nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
 {
-    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
+    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
     return 0;
 }
 
 static void
-nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
+nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
 {
     NVDIMMState *state = opaque;
-    NvdimmDsmIn *in;
-    hwaddr dsm_mem_addr = val;
+    NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
 
     nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
 
-    /*
-     * The DSM memory is mapped to guest address space so an evil guest
-     * can change its content while we are doing DSM emulation. Avoid
-     * this by copying DSM memory to QEMU local memory.
-     */
-    in = g_new(NvdimmDsmIn, 1);
-    cpu_physical_memory_read(dsm_mem_addr, in, sizeof(*in));
-
-    in->revision = le32_to_cpu(in->revision);
-    in->function = le32_to_cpu(in->function);
-    in->handle = le32_to_cpu(in->handle);
-
-    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
-                 in->handle, in->function);
+    dsm_in->revision = le32_to_cpu(dsm_in->revision);
+    dsm_in->function = le32_to_cpu(dsm_in->function);
 
+    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
+                 dsm_in->revision, method_in->handle, dsm_in->function);
     /*
      * Current NVDIMM _DSM Spec supports Rev1 and Rev2
     * Intel® Optane Persistent Memory Module DSM Interface, Revision 2.0
      */
-    if (in->revision != 0x1 && in->revision != 0x2) {
+    if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
         nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
-                     in->revision);
+                     dsm_in->revision);
         nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
-        goto exit;
+        return;
     }
 
-    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
-        nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
-        goto exit;
+    if (method_in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
+        nvdimm_dsm_handle_reserved_root_method(state, dsm_in, dsm_mem_addr);
+        return;
     }
 
      /* Handle 0 is reserved for NVDIMM Root Device. */
-    if (!in->handle) {
-        nvdimm_dsm_root(in, dsm_mem_addr);
-        goto exit;
+    if (!method_in->handle) {
+        nvdimm_dsm_root(dsm_in, dsm_mem_addr);
+        return;
     }
 
-    nvdimm_dsm_device(in, dsm_mem_addr);
+    nvdimm_dsm_device(method_in->handle, dsm_in, dsm_mem_addr);
+}
 
-exit:
-    g_free(in);
+static void nvdimm_lsi_handle(uint32_t nv_handle, hwaddr dsm_mem_addr)
+{
+    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
+
+    if (nvdimm->label_size) {
+        nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
+    }
+
+    return;
+}
+
+static void nvdimm_lsr_handle(uint32_t nv_handle,
+                                    void *data,
+                                    hwaddr dsm_mem_addr)
+{
+    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
+    NvdimmFuncGetLabelDataIn *get_label_data = data;
+
+    if (nvdimm->label_size) {
+        nvdimm_dsm_get_label_data(nvdimm, get_label_data, dsm_mem_addr);
+    }
+    return;
+}
+
+static void nvdimm_lsw_handle(uint32_t nv_handle,
+                                    void *data,
+                                    hwaddr dsm_mem_addr)
+{
+    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
+    NvdimmFuncSetLabelDataIn *set_label_data = data;
+
+    if (nvdimm->label_size) {
+        nvdimm_dsm_set_label_data(nvdimm, set_label_data, dsm_mem_addr);
+    }
+    return;
+}
+
+static void
+nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
+{
+    NvdimmMthdIn *method_in;
+    hwaddr dsm_mem_addr = val;
+
+    /*
+     * The DSM memory is mapped to guest address space so an evil guest
+     * can change its content while we are doing DSM emulation. Avoid
+     * this by copying DSM memory to QEMU local memory.
+     */
+    method_in = g_new(NvdimmMthdIn, 1);
+    cpu_physical_memory_read(dsm_mem_addr, method_in, sizeof(*method_in));
+
+    method_in->handle = le32_to_cpu(method_in->handle);
+    method_in->method = le32_to_cpu(method_in->method);
+
+    switch (method_in->method) {
+    case NVDIMM_METHOD_DSM:
+        nvdimm_dsm_handle(opaque, method_in, dsm_mem_addr);
+        break;
+    case NVDIMM_METHOD_LSI:
+        nvdimm_lsi_handle(method_in->handle, dsm_mem_addr);
+        break;
+    case NVDIMM_METHOD_LSR:
+        nvdimm_lsr_handle(method_in->handle, method_in->args, dsm_mem_addr);
+        break;
+    case NVDIMM_METHOD_LSW:
+        nvdimm_lsw_handle(method_in->handle, method_in->args, dsm_mem_addr);
+        break;
+    default:
+        nvdimm_debug("%s: Unknown method 0x%x\n", __func__, method_in->method);
+        break;
+    }
+
+    g_free(method_in);
 }
 
-static const MemoryRegionOps nvdimm_dsm_ops = {
-    .read = nvdimm_dsm_read,
-    .write = nvdimm_dsm_write,
+static const MemoryRegionOps nvdimm_method_ops = {
+    .read = nvdimm_method_read,
+    .write = nvdimm_method_write,
     .endianness = DEVICE_LITTLE_ENDIAN,
     .valid = {
         .min_access_size = 4,
@@ -899,12 +972,12 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
                             FWCfgState *fw_cfg, Object *owner)
 {
     state->dsm_io = dsm_io;
-    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
+    memory_region_init_io(&state->io_mr, owner, &nvdimm_method_ops, state,
                           "nvdimm-acpi-io", dsm_io.bit_width >> 3);
     memory_region_add_subregion(io, dsm_io.address, &state->io_mr);
 
     state->dsm_mem = g_array_new(false, true /* clear */, 1);
-    acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
+    acpi_data_push(state->dsm_mem, sizeof(NvdimmMthdIn));
     fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
                     state->dsm_mem->len);
 
@@ -918,13 +991,22 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
 #define NVDIMM_DSM_IOPORT       "NPIO"
 
 #define NVDIMM_DSM_NOTIFY       "NTFI"
+#define NVDIMM_DSM_METHOD       "MTHD"
 #define NVDIMM_DSM_HANDLE       "HDLE"
 #define NVDIMM_DSM_REVISION     "REVS"
 #define NVDIMM_DSM_FUNCTION     "FUNC"
 #define NVDIMM_DSM_ARG3         "FARG"
 
-#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
-#define NVDIMM_DSM_OUT_BUF      "ODAT"
+#define NVDIMM_DSM_OFFSET       "OFST"
+#define NVDIMM_DSM_TRANS_LEN    "TRSL"
+#define NVDIMM_DSM_IN_BUFF      "IDAT"
+
+#define NVDIMM_DSM_OUT_BUF_SIZE     "RLEN"
+#define NVDIMM_DSM_OUT_BUF          "ODAT"
+#define NVDIMM_DSM_OUT_STATUS       "STUS"
+#define NVDIMM_DSM_OUT_LSA_SIZE     "SIZE"
+#define NVDIMM_DSM_OUT_MAX_TRANS    "MAXT"
+
 
 #define NVDIMM_DSM_RFIT_STATUS  "RSTA"
 
@@ -938,7 +1020,6 @@ static void nvdimm_build_common_dsm(Aml *dev,
     Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
     Aml *whilectx, *offset;
     uint8_t byte_list[1];
-    AmlRegionSpace rs;
 
     method = aml_method(NVDIMM_COMMON_DSM, 5, AML_SERIALIZED);
     uuid = aml_arg(0);
@@ -949,37 +1030,15 @@ static void nvdimm_build_common_dsm(Aml *dev,
 
     aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR), dsm_mem));
 
-    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
-        rs = AML_SYSTEM_IO;
-    } else {
-        rs = AML_SYSTEM_MEMORY;
-    }
-
-    /* map DSM memory and IO into ACPI namespace. */
-    aml_append(method, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
-               aml_int(nvdimm_state->dsm_io.address),
-               nvdimm_state->dsm_io.bit_width >> 3));
     aml_append(method, aml_operation_region(NVDIMM_DSM_MEMORY,
-               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmDsmIn)));
-
-    /*
-     * DSM notifier:
-     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
-     *                    emulate the access.
-     *
-     * It is the IO port so that accessing them will cause VM-exit, the
-     * control will be transferred to QEMU.
-     */
-    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
-                      AML_PRESERVE);
-    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
-               nvdimm_state->dsm_io.bit_width));
-    aml_append(method, field);
+               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmMthdIn)));
 
     /*
      * DSM input:
      * NVDIMM_DSM_HANDLE: store device's handle, it's zero if the _DSM call
      *                    happens on NVDIMM Root Device.
+     * NVDIMM_DSM_METHOD: ACPI method indicator, to distinguish _DSM and
+     *                    other ACPI methods.
      * NVDIMM_DSM_REVISION: store the Arg1 of _DSM call.
      * NVDIMM_DSM_FUNCTION: store the Arg2 of _DSM call.
      * NVDIMM_DSM_ARG3: store the Arg3 of _DSM call which is a Package
@@ -991,13 +1050,16 @@ static void nvdimm_build_common_dsm(Aml *dev,
     field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
                       AML_PRESERVE);
     aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
-               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
+               sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
+               sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
     aml_append(field, aml_named_field(NVDIMM_DSM_REVISION,
                sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
     aml_append(field, aml_named_field(NVDIMM_DSM_FUNCTION,
                sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
     aml_append(field, aml_named_field(NVDIMM_DSM_ARG3,
-         (sizeof(NvdimmDsmIn) - offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
+         (sizeof(NvdimmMthdIn) - offsetof(NvdimmMthdIn, args) -
+          offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
     aml_append(method, field);
 
     /*
@@ -1065,6 +1127,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
      * it reserves 0 for root device and is the handle for NVDIMM devices.
      * See the comments in nvdimm_slot_to_handle().
      */
+    aml_append(method, aml_store(aml_int(0), aml_name(NVDIMM_DSM_METHOD)));
     aml_append(method, aml_store(handle, aml_name(NVDIMM_DSM_HANDLE)));
     aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_REVISION)));
     aml_append(method, aml_store(function, aml_name(NVDIMM_DSM_FUNCTION)));
@@ -1250,6 +1313,7 @@ static void nvdimm_build_fit(Aml *dev)
 static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
 {
     uint32_t slot;
+    Aml *method, *pkg, *field;
 
     for (slot = 0; slot < ram_slots; slot++) {
         uint32_t handle = nvdimm_slot_to_handle(slot);
@@ -1266,6 +1330,155 @@ static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
          * table NFIT or _FIT.
          */
         aml_append(nvdimm_dev, aml_name_decl("_ADR", aml_int(handle)));
+        aml_append(nvdimm_dev, aml_operation_region(NVDIMM_DSM_MEMORY,
+                   AML_SYSTEM_MEMORY, aml_name(NVDIMM_ACPI_MEM_ADDR),
+                   sizeof(NvdimmMthdIn)));
+
+        /* ACPI 6.4: 6.5.10 NVDIMM Label Methods, _LS{I,R,W} */
+
+        /* Begin of _LSI Block */
+        method = aml_method("_LSI", 0, AML_SERIALIZED);
+        /* _LSI Input field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
+                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
+                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
+        aml_append(method, field);
+
+        /* _LSI Output field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
+                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, len)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
+                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
+                   func_ret_status)) * BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_LSA_SIZE,
+                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, label_size)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_MAX_TRANS,
+                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, max_xfer)) *
+                   BITS_PER_BYTE));
+        aml_append(method, field);
+
+        aml_append(method, aml_store(aml_int(handle),
+                                      aml_name(NVDIMM_DSM_HANDLE)));
+        aml_append(method, aml_store(aml_int(0x100),
+                                      aml_name(NVDIMM_DSM_METHOD)));
+        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
+                                      aml_name(NVDIMM_DSM_NOTIFY)));
+
+        pkg = aml_package(3);
+        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
+        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_LSA_SIZE));
+        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_MAX_TRANS));
+
+        aml_append(method, aml_name_decl("RPKG", pkg));
+
+        aml_append(method, aml_return(aml_name("RPKG")));
+        aml_append(nvdimm_dev, method); /* End of _LSI Block */
+
+
+        /* Begin of _LSR Block */
+        method = aml_method("_LSR", 2, AML_SERIALIZED);
+
+        /* _LSR Input field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
+                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
+                   sizeof(typeof_field(NvdimmMthdIn, method)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
+                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, offset)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
+                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, length)) *
+                   BITS_PER_BYTE));
+        aml_append(method, field);
+
+        /* _LSR Output field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
+                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut, len)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
+                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
+                   func_ret_status)) * BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF,
+                   (NVDIMM_DSM_MEMORY_SIZE -
+                    offsetof(NvdimmFuncGetLabelDataOut, out_buf)) *
+                    BITS_PER_BYTE));
+        aml_append(method, field);
+
+        aml_append(method, aml_store(aml_int(handle),
+                                      aml_name(NVDIMM_DSM_HANDLE)));
+        aml_append(method, aml_store(aml_int(0x101),
+                                      aml_name(NVDIMM_DSM_METHOD)));
+        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
+        aml_append(method, aml_store(aml_arg(1),
+                                      aml_name(NVDIMM_DSM_TRANS_LEN)));
+        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
+                                      aml_name(NVDIMM_DSM_NOTIFY)));
+
+        aml_append(method, aml_store(aml_shiftleft(aml_arg(1), aml_int(3)),
+                                         aml_local(1)));
+        aml_append(method, aml_create_field(aml_name(NVDIMM_DSM_OUT_BUF),
+                   aml_int(0), aml_local(1), "OBUF"));
+
+        pkg = aml_package(2);
+        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
+        aml_append(pkg, aml_name("OBUF"));
+        aml_append(method, aml_name_decl("RPKG", pkg));
+
+        aml_append(method, aml_return(aml_name("RPKG")));
+        aml_append(nvdimm_dev, method); /* End of _LSR Block */
+
+        /* Begin of _LSW Block */
+        method = aml_method("_LSW", 3, AML_SERIALIZED);
+        /* _LSW Input field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
+                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
+                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
+                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, offset)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
+                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, length)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_IN_BUFF, 32640));
+        aml_append(method, field);
+
+        /* _LSW Output field */
+        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
+                          AML_PRESERVE);
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
+                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut, len)) *
+                   BITS_PER_BYTE));
+        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
+                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
+                   func_ret_status)) * BITS_PER_BYTE));
+        aml_append(method, field);
+
+        aml_append(method, aml_store(aml_int(handle), aml_name(NVDIMM_DSM_HANDLE)));
+        aml_append(method, aml_store(aml_int(0x102), aml_name(NVDIMM_DSM_METHOD)));
+        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
+        aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_TRANS_LEN)));
+        aml_append(method, aml_store(aml_arg(2), aml_name(NVDIMM_DSM_IN_BUFF)));
+        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
+                                      aml_name(NVDIMM_DSM_NOTIFY)));
+
+        aml_append(method, aml_return(aml_name(NVDIMM_DSM_OUT_STATUS)));
+        aml_append(nvdimm_dev, method); /* End of _LSW Block */
 
         nvdimm_build_device_dsm(nvdimm_dev, handle);
         aml_append(root_dev, nvdimm_dev);
@@ -1278,7 +1491,8 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
                               uint32_t ram_slots, const char *oem_id)
 {
     int mem_addr_offset;
-    Aml *ssdt, *sb_scope, *dev;
+    Aml *ssdt, *sb_scope, *dev, *field;
+    AmlRegionSpace rs;
     AcpiTable table = { .sig = "SSDT", .rev = 1,
                         .oem_id = oem_id, .oem_table_id = "NVDIMM" };
 
@@ -1286,6 +1500,9 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
 
     acpi_table_begin(&table, table_data);
     ssdt = init_aml_allocator();
+
+    mem_addr_offset = build_append_named_dword(table_data,
+                                               NVDIMM_ACPI_MEM_ADDR);
     sb_scope = aml_scope("\\_SB");
 
     dev = aml_device("NVDR");
@@ -1303,6 +1520,31 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
      */
     aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
 
+    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
+        rs = AML_SYSTEM_IO;
+    } else {
+        rs = AML_SYSTEM_MEMORY;
+    }
+
+    /* map DSM memory and IO into ACPI namespace. */
+    aml_append(dev, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
+               aml_int(nvdimm_state->dsm_io.address),
+               nvdimm_state->dsm_io.bit_width >> 3));
+
+    /*
+     * DSM notifier:
+     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
+     *                    emulate the access.
+     *
+     * It is an IO port, so accessing it causes a VM-exit and control is
+     * transferred to QEMU.
+     */
+    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
+                      AML_PRESERVE);
+    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
+               nvdimm_state->dsm_io.bit_width));
+    aml_append(dev, field);
+
     nvdimm_build_common_dsm(dev, nvdimm_state);
 
     /* 0 is reserved for root device. */
@@ -1316,12 +1558,10 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
 
     /* copy AML table into ACPI tables blob and patch header there */
     g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
-    mem_addr_offset = build_append_named_dword(table_data,
-                                               NVDIMM_ACPI_MEM_ADDR);
 
     bios_linker_loader_alloc(linker,
                              NVDIMM_DSM_MEM_FILE, nvdimm_state->dsm_mem,
-                             sizeof(NvdimmDsmIn), false /* high memory */);
+                             sizeof(NvdimmMthdIn), false /* high memory */);
     bios_linker_loader_add_pointer(linker,
         ACPI_BUILD_TABLE_FILE, mem_addr_offset, sizeof(uint32_t),
         NVDIMM_DSM_MEM_FILE, 0);
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index cf8f59be44..0206b6125b 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -37,6 +37,12 @@
         }                                                     \
     } while (0)
 
+/* NVDIMM ACPI Methods */
+#define NVDIMM_METHOD_DSM   0
+#define NVDIMM_METHOD_LSI   0x100
+#define NVDIMM_METHOD_LSR   0x101
+#define NVDIMM_METHOD_LSW   0x102
+
 /*
  * The minimum label data size is required by NVDIMM Namespace
  * specification, see the chapter 2 Namespaces:
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [QEMU PATCH v2 5/6] test/acpi/bios-tables-test: SSDT: update golden master binaries
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
                   ` (3 preceding siblings ...)
  2022-05-30  3:40 ` [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-05-30  3:40 ` [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug() Robert Hoo
  2022-06-06  6:26 ` [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Hu, Robert
  6 siblings, 0 replies; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

Diff of the disassembled ASL files (before vs. after this patch set):
@@ -1,100 +1,103 @@
 /*
  * Intel ACPI Component Architecture
  * AML/ASL+ Disassembler version 20190509 (64-bit version)
  * Copyright (c) 2000 - 2019 Intel Corporation
  *
  * Disassembling to symbolic ASL+ operators
  *
- * Disassembly of tests/data/acpi/q35/SSDT.dimmpxm, Wed May 25 11:02:18 2022
+ * Disassembly of /tmp/aml-U0ONM1, Wed May 25 11:02:18 2022
  *
  * Original Table Header:
  *     Signature        "SSDT"
- *     Length           0x000002DE (734)
+ *     Length           0x00000725 (1829)
  *     Revision         0x01
- *     Checksum         0x46
+ *     Checksum         0xEA
  *     OEM ID           "BOCHS "
  *     OEM Table ID     "NVDIMM"
  *     OEM Revision     0x00000001 (1)
  *     Compiler ID      "BXPC"
  *     Compiler Version 0x00000001 (1)
  */
 DefinitionBlock ("", "SSDT", 1, "BOCHS ", "NVDIMM", 0x00000001)
 {
+    Name (MEMA, 0x07FFF000)
     Scope (\_SB)
     {
         Device (NVDR)
         {
             Name (_HID, "ACPI0012" /* NVDIMM Root Device */)  // _HID: Hardware ID
+            OperationRegion (NPIO, SystemIO, 0x0A18, 0x04)
+            Field (NPIO, DWordAcc, NoLock, Preserve)
+            {
+                NTFI,   32
+            }
+
             Method (NCAL, 5, Serialized)
             {
                 Local6 = MEMA /* \MEMA */
-                OperationRegion (NPIO, SystemIO, 0x0A18, 0x04)
                 OperationRegion (NRAM, SystemMemory, Local6, 0x1000)
-                Field (NPIO, DWordAcc, NoLock, Preserve)
-                {
-                    NTFI,   32
-                }
-
                 Field (NRAM, DWordAcc, NoLock, Preserve)
                 {
                     HDLE,   32,
+                    MTHD,   32,
                     REVS,   32,
                     FUNC,   32,
-                    FARG,   32672
+                    FARG,   32640
                 }

                 Field (NRAM, DWordAcc, NoLock, Preserve)
                 {
                     RLEN,   32,
                     ODAT,   32736
                 }

                 If ((Arg4 == Zero))
                 {
                     Local0 = ToUUID ("2f10e7a4-9e91-11e4-89d3-123b93f75cba")
                 }
                 ElseIf ((Arg4 == 0x00010000))
                 {
                     Local0 = ToUUID ("648b9cf2-cda1-4312-8ad9-49c4af32bd62")
                 }
                 Else
                 {
                     Local0 = ToUUID ("4309ac30-0d11-11e4-9191-0800200c9a66")
                 }

-                If (((Local6 == Zero) | (Arg0 != Local0)))
+                If (((Local6 == Zero) || (Arg0 != Local0)))
                 {
                     If ((Arg2 == Zero))
                     {
                         Return (Buffer (One)
                         {
                              0x00                                             // .
                         })
                     }

                     Return (Buffer (One)
                     {
                          0x01                                             // .
                     })
                 }

+                MTHD = Zero
                 HDLE = Arg4
                 REVS = Arg1
                 FUNC = Arg2
-                If (((ObjectType (Arg3) == 0x04) & (SizeOf (Arg3) == One)))
+                If (((ObjectType (Arg3) == 0x04) && (SizeOf (Arg3) == One)))
                 {
                     Local2 = Arg3 [Zero]
                     Local3 = DerefOf (Local2)
                     FARG = Local3
                 }

                 NTFI = Local6
                 Local1 = (RLEN - 0x04)
                 If ((Local1 < 0x08))
                 {
                     Local2 = Zero
                     Name (TBUF, Buffer (One)
                     {
                          0x00                                             // .
                     })
                     Local7 = Buffer (Zero){}
@@ -161,45 +164,304 @@
                     Else
                     {
                         If ((Local1 == Zero))
                         {
                             Return (Local2)
                         }

                         Local3 += Local1
                         Concatenate (Local2, Local0, Local2)
                     }
                 }
             }

             Device (NV00)
             {
                 Name (_ADR, One)  // _ADR: Address
+                OperationRegion (NRAM, SystemMemory, MEMA, 0x1000)
+                Method (_LSI, 0, Serialized)  // _LSI: Label Storage Information
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        SIZE,   32,
+                        MAXT,   32
+                    }
+
+                    HDLE = One
+                    MTHD = 0x0100
+                    NTFI = MEMA /* \MEMA */
+                    Name (RPKG, Package (0x03)
+                    {
+                        STUS,
+                        SIZE,
+                        MAXT
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV00._LSI.RPKG */
+                }
+
+                Method (_LSR, 2, Serialized)  // _LSR: Label Storage Read
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        ODAT,   32704
+                    }
+
+                    HDLE = One
+                    MTHD = 0x0101
+                    OFST = Arg0
+                    TRSL = Arg1
+                    NTFI = MEMA /* \MEMA */
+                    Local1 = (Arg1 << 0x03)
+                    CreateField (ODAT, Zero, Local1, OBUF)
+                    Name (RPKG, Package (0x02)
+                    {
+                        STUS,
+                        OBUF
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV00._LSR.RPKG */
+                }
+
+                Method (_LSW, 3, Serialized)  // _LSW: Label Storage Write
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32,
+                        IDAT,   32640
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32
+                    }
+
+                    HDLE = One
+                    MTHD = 0x0102
+                    OFST = Arg0
+                    TRSL = Arg1
+                    IDAT = Arg2
+                    NTFI = MEMA /* \MEMA */
+                    Return (STUS) /* \_SB_.NVDR.NV00._LSW.STUS */
+                }
+
                 Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                 {
                     Return (NCAL (Arg0, Arg1, Arg2, Arg3, One))
                 }
             }

             Device (NV01)
             {
                 Name (_ADR, 0x02)  // _ADR: Address
+                OperationRegion (NRAM, SystemMemory, MEMA, 0x1000)
+                Method (_LSI, 0, Serialized)  // _LSI: Label Storage Information
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        SIZE,   32,
+                        MAXT,   32
+                    }
+
+                    HDLE = 0x02
+                    MTHD = 0x0100
+                    NTFI = MEMA /* \MEMA */
+                    Name (RPKG, Package (0x03)
+                    {
+                        STUS,
+                        SIZE,
+                        MAXT
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV01._LSI.RPKG */
+                }
+
+                Method (_LSR, 2, Serialized)  // _LSR: Label Storage Read
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        ODAT,   32704
+                    }
+
+                    HDLE = 0x02
+                    MTHD = 0x0101
+                    OFST = Arg0
+                    TRSL = Arg1
+                    NTFI = MEMA /* \MEMA */
+                    Local1 = (Arg1 << 0x03)
+                    CreateField (ODAT, Zero, Local1, OBUF)
+                    Name (RPKG, Package (0x02)
+                    {
+                        STUS,
+                        OBUF
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV01._LSR.RPKG */
+                }
+
+                Method (_LSW, 3, Serialized)  // _LSW: Label Storage Write
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32,
+                        IDAT,   32640
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32
+                    }
+
+                    HDLE = 0x02
+                    MTHD = 0x0102
+                    OFST = Arg0
+                    TRSL = Arg1
+                    IDAT = Arg2
+                    NTFI = MEMA /* \MEMA */
+                    Return (STUS) /* \_SB_.NVDR.NV01._LSW.STUS */
+                }
+
                 Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                 {
                     Return (NCAL (Arg0, Arg1, Arg2, Arg3, 0x02))
                 }
             }

             Device (NV02)
             {
                 Name (_ADR, 0x03)  // _ADR: Address
+                OperationRegion (NRAM, SystemMemory, MEMA, 0x1000)
+                Method (_LSI, 0, Serialized)  // _LSI: Label Storage Information
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        SIZE,   32,
+                        MAXT,   32
+                    }
+
+                    HDLE = 0x03
+                    MTHD = 0x0100
+                    NTFI = MEMA /* \MEMA */
+                    Name (RPKG, Package (0x03)
+                    {
+                        STUS,
+                        SIZE,
+                        MAXT
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV02._LSI.RPKG */
+                }
+
+                Method (_LSR, 2, Serialized)  // _LSR: Label Storage Read
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32,
+                        ODAT,   32704
+                    }
+
+                    HDLE = 0x03
+                    MTHD = 0x0101
+                    OFST = Arg0
+                    TRSL = Arg1
+                    NTFI = MEMA /* \MEMA */
+                    Local1 = (Arg1 << 0x03)
+                    CreateField (ODAT, Zero, Local1, OBUF)
+                    Name (RPKG, Package (0x02)
+                    {
+                        STUS,
+                        OBUF
+                    })
+                    Return (RPKG) /* \_SB_.NVDR.NV02._LSR.RPKG */
+                }
+
+                Method (_LSW, 3, Serialized)  // _LSW: Label Storage Write
+                {
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        HDLE,   32,
+                        MTHD,   32,
+                        OFST,   32,
+                        TRSL,   32,
+                        IDAT,   32640
+                    }
+
+                    Field (NRAM, DWordAcc, NoLock, Preserve)
+                    {
+                        RLEN,   32,
+                        STUS,   32
+                    }
+
+                    HDLE = 0x03
+                    MTHD = 0x0102
+                    OFST = Arg0
+                    TRSL = Arg1
+                    IDAT = Arg2
+                    NTFI = MEMA /* \MEMA */
+                    Return (STUS) /* \_SB_.NVDR.NV02._LSW.STUS */
+                }
+
                 Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                 {
                     Return (NCAL (Arg0, Arg1, Arg2, Arg3, 0x03))
                 }
             }
         }
     }
-
-    Name (MEMA, 0x07FFF000)
 }

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 tests/data/acpi/pc/SSDT.dimmpxm             | Bin 734 -> 1829 bytes
 tests/data/acpi/q35/SSDT.dimmpxm            | Bin 734 -> 1829 bytes
 tests/qtest/bios-tables-test-allowed-diff.h |   2 --
 3 files changed, 2 deletions(-)

diff --git a/tests/data/acpi/pc/SSDT.dimmpxm b/tests/data/acpi/pc/SSDT.dimmpxm
index ac55387d57e48adb99eb738a102308688a262fb8..672d51bc348366f667b1d3aeeec45dd06727fd5b 100644
GIT binary patch
literal 1829
zcmc&!L1@!p6n<&Tnx&mIU14LQ#DizM9YpXTY16E6O<R(54l`(yO1rj0f80Fmwr+>b
z31Sy^5s`G{D0mmVdG+MUn}>mSFN%no_jh&JUFJ6M^8d-7?|<)m@B3bKT{5ml0hsTZ
zQZ}y(#d%3l)!-cfG7IG_?yQ<q#W;NW6-~$w7OQ%uYHq0a1Ej`Q^NKVkX3I)CJv{^F
zda<mOnAjx8Ma)hNU&2L0R)mzADrUvP7{N&O0H%p5)MJn^J6G^IoR<nYK{fJ{pylRu
zL9P_Df-GvY>))bgCrKe%Ay*Vil4z{|jCxb<G7x^8OZcA?*Saqc_{SfTH{Gv`Z>-)8
z@3Olb#|kLm%Zn%Xdhe6josY`9*E4S&t2aT<)2~Le{MZ5C?Xn=mpVuvKvg_7i*Ilx_
zQMUy?A7<#n5I|yN899<B@*^!I=v{o~K5cUmcdN~i?KXfzHk}%&A#YO0x>u1i7qPwT
zdp5@sa9AT#kufmgL(tg2wCC7la~q3tU>m;ytTb?MJaYU7S+lt?*ycC_z%B*nJ}#+5
zRnpank1btlw%WjIx*<YJjcT%DjIt$JH-IeRMi7I28g2u_Bu5xT&`w!976c(G1Q!hE
z#dlqL;s->@mSwNnSO@FcK~F+pj$52S*#&fDAD7gvYLJ}8!W;s%{WL?6P0hmF9`9n*
z7%+qHy@WS{!JLORySCz3j=3RC7U$Dxwkk>*b7&E?OW=}JZlqe!71rz|hTLLyrjE>^
z8x>v9mAiqH#05hj3{@;hO7+87C<?2U=Vp@^!iYvNVtqVI&9XrjjT^)~@+9_2Ff_d&
zn4O8CeAJOYqJv~iKUu%|O}s-rkBP}zb6Czk7cPWcsJxWNZLIcA?D%XP@lbFMa5nl4
zp|e5DAMgkr=h4DTf7tj4A9fOBZYZe2G*y8M4ap#%N(L_Uk2>VfqQfBDc?dCg79j>i
pPN)R`_e=-9?@KY$mm+*VQqUhQIr&JOO^U;8|6htjBBzuh{2L~m{ty5F

delta 297
zcmZ3=caN1TIM^lR9uortW7tG4sXPIHt(f>=r}*e5H!Z&~mmrRK4^J0fN9O=f0|P@N
z1`%ITKW9fD-U44&U&plQ2EPDLe@1QzE-n@zJIK+OA&r|sAi9woB+l#?;^wIk-6#W+
zVD@nFaa9O%4GUIq3-xnWaB~cDZ}>HFVU~qt?c_9uNs}`Y7#46&&SF?1$jk^P7z=Vh
zdI~Z@nhLT&x)#V(Pwva+Vwv2Y&B(CXozaCcO~x<Gz<?v((ItpcL?GTJ*q3Dq$blJ|
oS=o%yO>#h4L$E9tlZYUyG#3*@-UuSkj3Lj=0rDgd!-N0q0JVcq&Hw-a

diff --git a/tests/data/acpi/q35/SSDT.dimmpxm b/tests/data/acpi/q35/SSDT.dimmpxm
index 98e6f0e3f3bb02dd419e36bdd1db9b94c728c406..ef8ecc186ddd3c0a2016e12bb92c026b8682fe5e 100644
GIT binary patch
literal 1829
zcmc&!L1@!p6n<&Tnx&mIU14LQ#DizM9YpXTY16E6O<R(54l`(yO1rj0f80Fmwr+>b
z31Sy^5s`G{D0mmVdG+MU`#|s{Ui2Vp-rv<>cbVJ3%l{{TzW=@Nz3+R`b;-CI1z<jB
zO4+<F6z3&HRfBf`$SjnzxU*_b731*bS2Q6%Sgh)qs=1-w50DyL%`4KFm@O;m^z;mv
z>&3ECVq%jV7co0Ad<ho~SrJlds+biEV+14B1DGbNP>(^v>|DX;a9%3p1l7n(f|i@t
z1-Vkp3bLrpu78i3og{^5g<Mf6Nusg-GwMyb%Ru;XF5!PlT<g9(;2(cT-gLh$zOi~Y
zyvy!#94nxJuP>h5>%C8AcRnibUeCOBtlk7YPrn+?@?!^lx66VMe_pq|$*x<6U3bay
zM%@<pe3+pNLjZ}9W#mYj$d9y;pm*_^`?SqP+^sf4w%Y&>*>q~8hP+7~>t039T*UUS
z@7Wwnz+sI{MaIB{4?$!9(w<|J&222ofo=Q-u+qT2@yPKTWX<NnV4LFz0J{{N`?#bM
zRY_MPKDKnJ*=hr;>4pd?HLA&qG0K)?-2k$L8$k^AX}ApxlN@0XK|5vbSP+D$5L_^r
z7vFX1i60OpTb8|QV;!t(2R#XaIc{+RWf#zqeOywvszGuV3v&q6_R|bOHZ==xd%TNL
zW55i)_Y&R+26G<P?AnTBIp&5mS)5O&*{UdM%%MflE`du@xshr?S6H)?7;=l*nL09u
zZd7zZSMCN*5*G}CFjTQ1Db){Sp(w0Yo|{cx2qPN7i1qO-HOmHBG;R>{$dlM7!O--A
zV0I>k@=-_rhz^o5{bc>-H}MYjJSHY<&0#tFT(}T&q4H85x3StgvE#FG#6!V3z}e_G
zgw6&nf50PjoJR{U{bA!HeAr2hxuKv6(^Lg6G$ez3DjB%YKkAS_iw=Wq=OM)0S%er6
pIiV8#-!mQbzAwd0UyAVgNI`$J<m4l9HYpCX{eLMAiJVf3@NXo1{ty5F

delta 297
zcmZ3=caN1TIM^lR9uortquWF-sXPIHt(f>=r}*e5H!Z&~mmrRK4^J0fN9O=f0|P@N
z1`%ITKW9fD-U44&U&plQ2EPDLe@1QzE-n@zJIK+OA&r|sAi9woB+l#?;^wIk-6#W+
zVD@nFaa9O%4GUIq3-xnWaB~cDZ}>HFVU~qt?c_9uNs}`Y7#46&&SF?1$jk^P7z=Vh
zdI~Z@nhLT&x)#V(Pwva+Vwv2Y&B(CXozaCcO~x<Gz<?v((ItpcL?GTJ*q3Dq$blJ|
oS=o%yO>#h4L$E9tlZYUyG#3*@-UuSkj3Lj=0rDgd!-xOu0Hgy@&Hw-a

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index eb8bae1407..dfb8523c8b 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1,3 +1 @@
 /* List of comma-separated changed AML files to ignore */
-"tests/data/acpi/pc/SSDT.dimmpxm",
-"tests/data/acpi/q35/SSDT.dimmpxm",
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug()
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
                   ` (4 preceding siblings ...)
  2022-05-30  3:40 ` [QEMU PATCH v2 5/6] test/acpi/bios-tables-test: SSDT: update golden master binaries Robert Hoo
@ 2022-05-30  3:40 ` Robert Hoo
  2022-06-16 12:35   ` Igor Mammedov
  2022-06-06  6:26 ` [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Hu, Robert
  6 siblings, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-05-30  3:40 UTC (permalink / raw)
  To: imammedo, mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu
  Cc: qemu-devel, robert.hu

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
---
 hw/acpi/nvdimm.c        | 38 ++++++++++++++++++--------------------
 hw/acpi/trace-events    | 14 ++++++++++++++
 include/hw/mem/nvdimm.h |  8 --------
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 50ee85866b..fc777990e6 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -35,6 +35,7 @@
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 #include "qemu/nvdimm-utils.h"
+#include "trace.h"
 
 /*
  * define Byte Addressable Persistent Memory (PM) Region according to
@@ -558,8 +559,8 @@ static void nvdimm_dsm_func_read_fit(NVDIMMState *state, NvdimmDsmIn *in,
 
     fit = fit_buf->fit;
 
-    nvdimm_debug("Read FIT: offset 0x%x FIT size 0x%x Dirty %s.\n",
-                 read_fit->offset, fit->len, fit_buf->dirty ? "Yes" : "No");
+    trace_acpi_nvdimm_read_fit(read_fit->offset, fit->len,
+                               fit_buf->dirty ? "Yes" : "No");
 
     if (read_fit->offset > fit->len) {
         func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
@@ -667,7 +668,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
     label_size = nvdimm->label_size;
     mxfer = nvdimm_get_max_xfer_label_size();
 
-    nvdimm_debug("label_size 0x%x, max_xfer 0x%x.\n", label_size, mxfer);
+    trace_acpi_nvdimm_label_info(label_size, mxfer);
 
     label_size_out.func_ret_status = cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
     label_size_out.label_size = cpu_to_le32(label_size);
@@ -683,20 +684,18 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
     uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
 
     if (offset + length < offset) {
-        nvdimm_debug("offset 0x%x + length 0x%x is overflow.\n", offset,
-                     length);
+        trace_acpi_nvdimm_label_overflow(offset, length);
         return ret;
     }
 
     if (nvdimm->label_size < offset + length) {
-        nvdimm_debug("position 0x%x is beyond label data (len = %" PRIx64 ").\n",
-                     offset + length, nvdimm->label_size);
+        trace_acpi_nvdimm_label_oversize(offset + length, nvdimm->label_size);
         return ret;
     }
 
     if (length > nvdimm_get_max_xfer_label_size()) {
-        nvdimm_debug("length (0x%x) is larger than max_xfer (0x%x).\n",
-                     length, nvdimm_get_max_xfer_label_size());
+        trace_acpi_nvdimm_label_xfer_exceed(length,
+                                            nvdimm_get_max_xfer_label_size());
         return ret;
     }
 
@@ -718,8 +717,8 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
     get_label_data->offset = le32_to_cpu(get_label_data->offset);
     get_label_data->length = le32_to_cpu(get_label_data->length);
 
-    nvdimm_debug("Read Label Data: offset 0x%x length 0x%x.\n",
-                 get_label_data->offset, get_label_data->length);
+    trace_acpi_nvdimm_read_label(get_label_data->offset,
+                                 get_label_data->length);
 
     status = nvdimm_rw_label_data_check(nvdimm, get_label_data->offset,
                                         get_label_data->length);
@@ -755,8 +754,8 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
     set_label_data->offset = le32_to_cpu(set_label_data->offset);
     set_label_data->length = le32_to_cpu(set_label_data->length);
 
-    nvdimm_debug("Write Label Data: offset 0x%x length 0x%x.\n",
-                 set_label_data->offset, set_label_data->length);
+    trace_acpi_nvdimm_write_label(set_label_data->offset,
+                                  set_label_data->length);
 
     status = nvdimm_rw_label_data_check(nvdimm, set_label_data->offset,
                                         set_label_data->length);
@@ -833,7 +832,7 @@ static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
 static uint64_t
 nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
 {
-    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
+    trace_acpi_nvdimm_read_io_port();
     return 0;
 }
 
@@ -843,20 +842,19 @@ nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
     NVDIMMState *state = opaque;
     NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
 
-    nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
+    trace_acpi_nvdimm_dsm_mem_addr(dsm_mem_addr);
 
     dsm_in->revision = le32_to_cpu(dsm_in->revision);
     dsm_in->function = le32_to_cpu(dsm_in->function);
 
-    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
-                 dsm_in->revision, method_in->handle, dsm_in->function);
+    trace_acpi_nvdimm_dsm_info(dsm_in->revision,
+                 method_in->handle, dsm_in->function);
     /*
      * Current NVDIMM _DSM Spec supports Rev1 and Rev2
      * Intel® OptanePersistent Memory Module DSM Interface, Revision 2.0
      */
     if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
-        nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
-                     dsm_in->revision);
+        trace_acpi_nvdimm_invalid_revision(dsm_in->revision);
         nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
         return;
     }
@@ -943,7 +941,7 @@ nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
         nvdimm_lsw_handle(method_in->handle, method_in->args, dsm_mem_addr);
         break;
     default:
-        nvdimm_debug("%s: Unkown method 0x%x\n", __func__, method_in->method);
+        trace_acpi_nvdimm_invalid_method(method_in->method);
         break;
     }
 
diff --git a/hw/acpi/trace-events b/hw/acpi/trace-events
index 2250126a22..db4c69009f 100644
--- a/hw/acpi/trace-events
+++ b/hw/acpi/trace-events
@@ -70,3 +70,17 @@ acpi_erst_reset_out(unsigned record_count) "record_count %u"
 acpi_erst_post_load(void *header, unsigned slot_size) "header: 0x%p slot_size %u"
 acpi_erst_class_init_in(void)
 acpi_erst_class_init_out(void)
+
+# nvdimm.c
+acpi_nvdimm_read_fit(uint32_t offset, uint32_t len, const char *dirty) "Read FIT: offset 0x%" PRIx32 " FIT size 0x%" PRIx32 " Dirty %s"
+acpi_nvdimm_label_info(uint32_t label_size, uint32_t mxfer) "label_size 0x%" PRIx32 ", max_xfer 0x%" PRIx32
+acpi_nvdimm_label_overflow(uint32_t offset, uint32_t length) "offset 0x%" PRIx32 " + length 0x%" PRIx32 " is overflow"
+acpi_nvdimm_label_oversize(uint32_t pos, uint64_t size) "position 0x%" PRIx32 " is beyond label data (len = %" PRIu64 ")"
+acpi_nvdimm_label_xfer_exceed(uint32_t length, uint32_t max_xfer) "length (0x%" PRIx32 ") is larger than max_xfer (0x%" PRIx32 ")"
+acpi_nvdimm_read_label(uint32_t offset, uint32_t length) "Read Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
+acpi_nvdimm_write_label(uint32_t offset, uint32_t length) "Write Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
+acpi_nvdimm_read_io_port(void) "Alert: we never read NVDIMM Method IO Port"
+acpi_nvdimm_dsm_mem_addr(uint64_t dsm_mem_addr) "dsm memory address 0x%" PRIx64
+acpi_nvdimm_dsm_info(uint32_t revision, uint32_t handle, uint32_t function) "Revision 0x%" PRIx32 " Handle 0x%" PRIx32 " Function 0x%" PRIx32
+acpi_nvdimm_invalid_revision(uint32_t revision) "Revision 0x%" PRIx32 " is not supported, expect 0x1 or 0x2"
+acpi_nvdimm_invalid_method(uint32_t method) "Unknown method %" PRId32
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 0206b6125b..c83e273829 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -29,14 +29,6 @@
 #include "hw/acpi/aml-build.h"
 #include "qom/object.h"
 
-#define NVDIMM_DEBUG 0
-#define nvdimm_debug(fmt, ...)                                \
-    do {                                                      \
-        if (NVDIMM_DEBUG) {                                   \
-            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
-        }                                                     \
-    } while (0)
-
 /* NVDIMM ACPI Methods */
 #define NVDIMM_METHOD_DSM   0
 #define NVDIMM_METHOD_LSI   0x100
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* RE: [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods
  2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
                   ` (5 preceding siblings ...)
  2022-05-30  3:40 ` [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug() Robert Hoo
@ 2022-06-06  6:26 ` Hu, Robert
  6 siblings, 0 replies; 20+ messages in thread
From: Hu, Robert @ 2022-06-06  6:26 UTC (permalink / raw)
  To: Robert Hoo, imammedo, mst, xiaoguangrong.eric, ani, Williams,
	Dan J, Liu, Jingqi
  Cc: qemu-devel

Ping...

Best Regards,
Robert Hoo

> -----Original Message-----
> From: Robert Hoo <robert.hu@linux.intel.com>
> Sent: Monday, May 30, 2022 11:41
> To: imammedo@redhat.com; mst@redhat.com;
> xiaoguangrong.eric@gmail.com; ani@anisinha.ca; Williams, Dan J
> <dan.j.williams@intel.com>; Liu, Jingqi <jingqi.liu@intel.com>
> Cc: qemu-devel@nongnu.org; Hu, Robert <robert.hu@intel.com>
> Subject: [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods
> 
> (v1 Subject was "acpi/nvdimm: support NVDIMM _LS{I,R,W} methods")
> 
> Originally, the NVDIMM Label methods were defined in the Intel PMEM _DSM
> Interface Spec [1], as function indexes 4, 5 and 6.
> The recent ACPI spec [2] has deprecated those _DSM methods in favor of the
> ACPI NVDIMM Label Methods _LS{I,R,W}. The essence of these functions is
> unchanged.
> 
> This patch set updates the QEMU emulation accordingly, updates the
> bios-tables-test binaries, and substitutes trace events for nvdimm_debug().
> 
> Patches 1 and 5 are the opening and closing parenthesis patches for changes
> affecting ACPI tables; for details, see tests/qtest/bios-tables-test.c.
> Patch 2 is a trivial fix of aml_or()/aml_and() usage.
> Patch 3 allows NVDIMM _DSM revision 2 in.
> Patch 4 is the main body: it implements the virtual _LS{I,R,W} methods and
> also generalizes the QEMU <--> ACPI NVDIMM method interface, paving the way
> for implementing further necessary methods beyond _DSM. The resulting SSDT
> table changes in ASL can be found in Patch 5's commit message.
> Patch 6 defines trace events for acpi/nvdimm, replacing nvdimm_debug().
> 
> Test
> Tested a Linux guest with recent kernel 5.18.0-rc4; create/destroy
> namespace, init labels, etc. work as before.
> Tested Windows 10 (1607) and Windows Server 2019 guests, but it seems
> vNVDIMM has never been supported in Windows guests. Before and after this
> patch set, there is no difference in guest boot-up and other functions.
> 
> [1] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
> https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> [2] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
> https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
> 
> ---
> Change Log:
> v2:
> Almost rewritten
> Separate Patch 2
> Dance with tests/qtest/bios-table-tests
> Add trace events
> 
> Robert Hoo (6):
>   tests/acpi: allow SSDT changes
>   acpi/ssdt: Fix aml_or() and aml_and() in if clause
>   acpi/nvdimm: NVDIMM _DSM Spec supports revision 2
>   nvdimm: Implement ACPI NVDIMM Label Methods
>   test/acpi/bios-tables-test: SSDT: update standard AML binaries
>   acpi/nvdimm: Define trace events for NVDIMM and substitute
>     nvdimm_debug()
> 
>  hw/acpi/nvdimm.c                 | 434 +++++++++++++++++++++++--------
>  hw/acpi/trace-events             |  14 +
>  include/hw/mem/nvdimm.h          |  12 +-
>  tests/data/acpi/pc/SSDT.dimmpxm  | Bin 734 -> 1829 bytes
> tests/data/acpi/q35/SSDT.dimmpxm | Bin 734 -> 1829 bytes
>  5 files changed, 344 insertions(+), 116 deletions(-)
> 
> 
> base-commit: 58b53669e87fed0d70903e05cd42079fbbdbc195
> --
> 2.31.1




* Re: [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes
  2022-05-30  3:40 ` [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes Robert Hoo
@ 2022-06-16 11:24   ` Igor Mammedov
  0 siblings, 0 replies; 20+ messages in thread
From: Igor Mammedov @ 2022-06-16 11:24 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Mon, 30 May 2022 11:40:42 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>

Reviewed-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  tests/qtest/bios-tables-test-allowed-diff.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
> index dfb8523c8b..eb8bae1407 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1 +1,3 @@
>  /* List of comma-separated changed AML files to ignore */
> +"tests/data/acpi/pc/SSDT.dimmpxm",
> +"tests/data/acpi/q35/SSDT.dimmpxm",




* Re: [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2
  2022-05-30  3:40 ` [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2 Robert Hoo
@ 2022-06-16 11:38   ` Igor Mammedov
  2022-07-01  8:31     ` Robert Hoo
  0 siblings, 1 reply; 20+ messages in thread
From: Igor Mammedov @ 2022-06-16 11:38 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Mon, 30 May 2022 11:40:44 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

> The Intel Optane PMem DSM Interface, Version 2.0 [1], is the up-to-date
> spec for the NVDIMM _DSM definition, and it supports revision_id == 2.
> 
> Nevertheless, Rev. 2 of the NVDIMM _DSM makes no functional change to the
> Label Data _DSM functions, which are the only ones implemented for vNVDIMM.
> So, a simple change suffices to support the revision_id == 2 case.
> 
> [1] https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf

Please enumerate the functions that QEMU implements and that are supported
by rev 2; do we really need rev 2?

Also, don't we need to make sure that rev-1-only functions are excluded?
The spec says functions 3-6 are deprecated and limited to rev 1 only:
"
Warning: This function has been deprecated in preference to the ACPI 6.2 _LSW (Label Storage Write)
NVDIMM Device Interface and is only supported with Arg1 – Revision Id = 1. It is included here for
backwards compatibility with existing Arg1 - Revision Id = 1 implementations.
"

> 
> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> ---
>  hw/acpi/nvdimm.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 0ab247a870..59b42afcf1 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -849,9 +849,13 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
>      nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
>                   in->handle, in->function);
>  
> -    if (in->revision != 0x1 /* Currently we only support DSM Spec Rev1. */) {
> -        nvdimm_debug("Revision 0x%x is not supported, expect 0x%x.\n",
> -                     in->revision, 0x1);
> +    /*
> +     * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> +     * Intel® OptanePersistent Memory Module DSM Interface, Revision 2.0
> +     */
> +    if (in->revision != 0x1 && in->revision != 0x2) {
> +        nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
> +                     in->revision);
>          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
>          goto exit;
>      }




* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-05-30  3:40 ` [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods Robert Hoo
@ 2022-06-16 12:32   ` Igor Mammedov
  2022-07-01  9:23     ` Robert Hoo
  0 siblings, 1 reply; 20+ messages in thread
From: Igor Mammedov @ 2022-06-16 12:32 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Mon, 30 May 2022 11:40:45 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

> The recent ACPI spec [1] has defined the NVDIMM Label Methods _LS{I,R,W},
> which deprecate the corresponding _DSM functions defined by the PMEM _DSM
> Interface spec [2].
> 
> In this implementation, we do two things:
> 1. Generalize the QEMU<->ACPI BIOS NVDIMM interface, wrapping it with ACPI
> method dispatch; _DSM is one of the branches. This also paves the way for
> adding other ACPI methods for NVDIMM.
> 2. Add the _LS{I,R,W} methods to each NVDIMM device in the SSDT.
> The ASL form of the SSDT changes can be found in the next
> test/qtest/bios-table-test commit message.
> 
> [1] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
> https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
> [2] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
> https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> 
> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> ---
>  hw/acpi/nvdimm.c        | 424 +++++++++++++++++++++++++++++++---------

This patch is too large and does too many things to be reviewable.
It needs to be split into smaller, distinct chunks.
(However, hold your horses and read on.)

The patch is too intrusive, and my hunch is that it breaks the
ABI and needs a bunch of compat knobs to work properly,
which I'd like to avoid unless there is no other way around
the problem.

I was skeptical about this approach during the v1 review, and
now I'm pretty much sure it's over-engineered: we can
just repack the data we receive from the existing label _DSM functions
to provide _LS{I,R,W}, as was suggested in v1.
That would be much simpler, affect only the AML side without
complicating the ABI and without any compat cruft, and would work
with ping-pong migration without any issues.


>  include/hw/mem/nvdimm.h |   6 +
>  2 files changed, 338 insertions(+), 92 deletions(-)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 59b42afcf1..50ee85866b 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -416,17 +416,22 @@ static void nvdimm_build_nfit(NVDIMMState *state, GArray *table_offsets,
>  
>  #define NVDIMM_DSM_MEMORY_SIZE      4096
>  
> -struct NvdimmDsmIn {
> +struct NvdimmMthdIn {
>      uint32_t handle;
> +    uint32_t method;
> +    uint8_t  args[4088];
> +} QEMU_PACKED;
> +typedef struct NvdimmMthdIn NvdimmMthdIn;
> +struct NvdimmDsmIn {
>      uint32_t revision;
>      uint32_t function;
>      /* the remaining size in the page is used by arg3. */
>      union {
> -        uint8_t arg3[4084];
> +        uint8_t arg3[4080];
>      };
>  } QEMU_PACKED;
>  typedef struct NvdimmDsmIn NvdimmDsmIn;
> -QEMU_BUILD_BUG_ON(sizeof(NvdimmDsmIn) != NVDIMM_DSM_MEMORY_SIZE);
> +QEMU_BUILD_BUG_ON(sizeof(NvdimmMthdIn) != NVDIMM_DSM_MEMORY_SIZE);
>  
>  struct NvdimmDsmOut {
>      /* the size of buffer filled by QEMU. */
> @@ -470,7 +475,8 @@ struct NvdimmFuncGetLabelDataIn {
>  } QEMU_PACKED;
>  typedef struct NvdimmFuncGetLabelDataIn NvdimmFuncGetLabelDataIn;
>  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncGetLabelDataIn) +
> -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> +                  NVDIMM_DSM_MEMORY_SIZE);
>  
>  struct NvdimmFuncGetLabelDataOut {
>      /* the size of buffer filled by QEMU. */
> @@ -488,14 +494,16 @@ struct NvdimmFuncSetLabelDataIn {
>  } QEMU_PACKED;
>  typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
>  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
> -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> +                  NVDIMM_DSM_MEMORY_SIZE);
>  
>  struct NvdimmFuncReadFITIn {
>      uint32_t offset; /* the offset into FIT buffer. */
>  } QEMU_PACKED;
>  typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
>  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> +                  NVDIMM_DSM_MEMORY_SIZE);
>  
>  struct NvdimmFuncReadFITOut {
>      /* the size of buffer filled by QEMU. */
> @@ -636,7 +644,8 @@ static uint32_t nvdimm_get_max_xfer_label_size(void)
>       * the max data ACPI can write one time which is transferred by
>       * 'Set Namespace Label Data' function.
>       */
> -    max_set_size = dsm_memory_size - offsetof(NvdimmDsmIn, arg3) -
> +    max_set_size = dsm_memory_size - offsetof(NvdimmMthdIn, args) -
> +                   offsetof(NvdimmDsmIn, arg3) -
>                     sizeof(NvdimmFuncSetLabelDataIn);
>  
>      return MIN(max_get_size, max_set_size);
> @@ -697,16 +706,15 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
>  /*
>   * DSM Spec Rev1 4.5 Get Namespace Label Data (Function Index 5).
>   */
> -static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> -                                      hwaddr dsm_mem_addr)
> +static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> +                                    NvdimmFuncGetLabelDataIn *get_label_data,
> +                                    hwaddr dsm_mem_addr)
>  {
>      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> -    NvdimmFuncGetLabelDataIn *get_label_data;
>      NvdimmFuncGetLabelDataOut *get_label_data_out;
>      uint32_t status;
>      int size;
>  
> -    get_label_data = (NvdimmFuncGetLabelDataIn *)in->arg3;
>      get_label_data->offset = le32_to_cpu(get_label_data->offset);
>      get_label_data->length = le32_to_cpu(get_label_data->length);
>  
> @@ -737,15 +745,13 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
>  /*
>   * DSM Spec Rev1 4.6 Set Namespace Label Data (Function Index 6).
>   */
> -static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> +static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> +                                      NvdimmFuncSetLabelDataIn *set_label_data,
>                                        hwaddr dsm_mem_addr)
>  {
>      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> -    NvdimmFuncSetLabelDataIn *set_label_data;
>      uint32_t status;
>  
> -    set_label_data = (NvdimmFuncSetLabelDataIn *)in->arg3;
> -
>      set_label_data->offset = le32_to_cpu(set_label_data->offset);
>      set_label_data->length = le32_to_cpu(set_label_data->length);
>  
> @@ -760,19 +766,21 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
>      }
>  
>      assert(offsetof(NvdimmDsmIn, arg3) + sizeof(*set_label_data) +
> -                    set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE);
> +           set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE -
> +           offsetof(NvdimmMthdIn, args));
>  
>      nvc->write_label_data(nvdimm, set_label_data->in_buf,
>                            set_label_data->length, set_label_data->offset);
>      nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
>  }
>  
> -static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> +static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
> +                                    hwaddr dsm_mem_addr)
>  {
> -    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(in->handle);
> +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
>  
>      /* See the comments in nvdimm_dsm_root(). */
> -    if (!in->function) {
> +    if (!dsm_in->function) {
>          uint32_t supported_func = 0;
>  
>          if (nvdimm && nvdimm->label_size) {
> @@ -794,7 +802,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
>      }
>  
>      /* Encode DSM function according to DSM Spec Rev1. */
> -    switch (in->function) {
> +    switch (dsm_in->function) {
>      case 4 /* Get Namespace Label Size */:
>          if (nvdimm->label_size) {
>              nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> @@ -803,13 +811,17 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
>          break;
>      case 5 /* Get Namespace Label Data */:
>          if (nvdimm->label_size) {
> -            nvdimm_dsm_get_label_data(nvdimm, in, dsm_mem_addr);
> +            nvdimm_dsm_get_label_data(nvdimm,
> +                                      (NvdimmFuncGetLabelDataIn *)dsm_in->arg3,
> +                                      dsm_mem_addr);
>              return;
>          }
>          break;
>      case 0x6 /* Set Namespace Label Data */:
>          if (nvdimm->label_size) {
> -            nvdimm_dsm_set_label_data(nvdimm, in, dsm_mem_addr);
> +            nvdimm_dsm_set_label_data(nvdimm,
> +                        (NvdimmFuncSetLabelDataIn *)dsm_in->arg3,
> +                        dsm_mem_addr);
>              return;
>          }
>          break;
> @@ -819,67 +831,128 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
>  }
>  
>  static uint64_t
> -nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> +nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
>  {
> -    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
> +    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
>      return 0;
>  }
>  
>  static void
> -nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> +nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
>  {
>      NVDIMMState *state = opaque;
> -    NvdimmDsmIn *in;
> -    hwaddr dsm_mem_addr = val;
> +    NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
>  
>      nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
>  
> -    /*
> -     * The DSM memory is mapped to guest address space so an evil guest
> -     * can change its content while we are doing DSM emulation. Avoid
> -     * this by copying DSM memory to QEMU local memory.
> -     */
> -    in = g_new(NvdimmDsmIn, 1);
> -    cpu_physical_memory_read(dsm_mem_addr, in, sizeof(*in));
> -
> -    in->revision = le32_to_cpu(in->revision);
> -    in->function = le32_to_cpu(in->function);
> -    in->handle = le32_to_cpu(in->handle);
> -
> -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
> -                 in->handle, in->function);
> +    dsm_in->revision = le32_to_cpu(dsm_in->revision);
> +    dsm_in->function = le32_to_cpu(dsm_in->function);
>  
> +    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> +                 dsm_in->revision, method_in->handle, dsm_in->function);
>      /*
>       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
>       * Intel® OptanePersistent Memory Module DSM Interface, Revision 2.0
>       */
> -    if (in->revision != 0x1 && in->revision != 0x2) {
> +    if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
>          nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
> -                     in->revision);
> +                     dsm_in->revision);
>          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
> -        goto exit;
> +        return;
>      }
>  
> -    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> -        nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
> -        goto exit;
> +    if (method_in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> +        nvdimm_dsm_handle_reserved_root_method(state, dsm_in, dsm_mem_addr);
> +        return;
>      }
>  
>       /* Handle 0 is reserved for NVDIMM Root Device. */
> -    if (!in->handle) {
> -        nvdimm_dsm_root(in, dsm_mem_addr);
> -        goto exit;
> +    if (!method_in->handle) {
> +        nvdimm_dsm_root(dsm_in, dsm_mem_addr);
> +        return;
>      }
>  
> -    nvdimm_dsm_device(in, dsm_mem_addr);
> +    nvdimm_dsm_device(method_in->handle, dsm_in, dsm_mem_addr);
> +}
>  
> -exit:
> -    g_free(in);
> +static void nvdimm_lsi_handle(uint32_t nv_handle, hwaddr dsm_mem_addr)
> +{
> +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> +
> +    if (nvdimm->label_size) {
> +        nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> +    }
> +
> +    return;
> +}
> +
> +static void nvdimm_lsr_handle(uint32_t nv_handle,
> +                                    void *data,
> +                                    hwaddr dsm_mem_addr)
> +{
> +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> +    NvdimmFuncGetLabelDataIn *get_label_data = data;
> +
> +    if (nvdimm->label_size) {
> +        nvdimm_dsm_get_label_data(nvdimm, get_label_data, dsm_mem_addr);
> +    }
> +    return;
> +}
> +
> +static void nvdimm_lsw_handle(uint32_t nv_handle,
> +                                    void *data,
> +                                    hwaddr dsm_mem_addr)
> +{
> +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> +    NvdimmFuncSetLabelDataIn *set_label_data = data;
> +
> +    if (nvdimm->label_size) {
> +        nvdimm_dsm_set_label_data(nvdimm, set_label_data, dsm_mem_addr);
> +    }
> +    return;
> +}
> +
> +static void
> +nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> +{
> +    NvdimmMthdIn *method_in;
> +    hwaddr dsm_mem_addr = val;
> +
> +    /*
> +     * The DSM memory is mapped to guest address space so an evil guest
> +     * can change its content while we are doing DSM emulation. Avoid
> +     * this by copying DSM memory to QEMU local memory.
> +     */
> +    method_in = g_new(NvdimmMthdIn, 1);
> +    cpu_physical_memory_read(dsm_mem_addr, method_in, sizeof(*method_in));
> +
> +    method_in->handle = le32_to_cpu(method_in->handle);
> +    method_in->method = le32_to_cpu(method_in->method);
> +
> +    switch (method_in->method) {
> +    case NVDIMM_METHOD_DSM:
> +        nvdimm_dsm_handle(opaque, method_in, dsm_mem_addr);
> +        break;
> +    case NVDIMM_METHOD_LSI:
> +        nvdimm_lsi_handle(method_in->handle, dsm_mem_addr);
> +        break;
> +    case NVDIMM_METHOD_LSR:
> +        nvdimm_lsr_handle(method_in->handle, method_in->args, dsm_mem_addr);
> +        break;
> +    case NVDIMM_METHOD_LSW:
> +        nvdimm_lsw_handle(method_in->handle, method_in->args, dsm_mem_addr);
> +        break;
> +    default:
> +        nvdimm_debug("%s: Unkown method 0x%x\n", __func__, method_in->method);
> +        break;
> +    }
> +
> +    g_free(method_in);
>  }
>  
> -static const MemoryRegionOps nvdimm_dsm_ops = {
> -    .read = nvdimm_dsm_read,
> -    .write = nvdimm_dsm_write,
> +static const MemoryRegionOps nvdimm_method_ops = {
> +    .read = nvdimm_method_read,
> +    .write = nvdimm_method_write,
>      .endianness = DEVICE_LITTLE_ENDIAN,
>      .valid = {
>          .min_access_size = 4,
> @@ -899,12 +972,12 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
>                              FWCfgState *fw_cfg, Object *owner)
>  {
>      state->dsm_io = dsm_io;
> -    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
> +    memory_region_init_io(&state->io_mr, owner, &nvdimm_method_ops, state,
>                            "nvdimm-acpi-io", dsm_io.bit_width >> 3);
>      memory_region_add_subregion(io, dsm_io.address, &state->io_mr);
>  
>      state->dsm_mem = g_array_new(false, true /* clear */, 1);
> -    acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
> +    acpi_data_push(state->dsm_mem, sizeof(NvdimmMthdIn));
>      fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
>                      state->dsm_mem->len);
>  
> @@ -918,13 +991,22 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
>  #define NVDIMM_DSM_IOPORT       "NPIO"
>  
>  #define NVDIMM_DSM_NOTIFY       "NTFI"
> +#define NVDIMM_DSM_METHOD       "MTHD"
>  #define NVDIMM_DSM_HANDLE       "HDLE"
>  #define NVDIMM_DSM_REVISION     "REVS"
>  #define NVDIMM_DSM_FUNCTION     "FUNC"
>  #define NVDIMM_DSM_ARG3         "FARG"
>  
> -#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
> -#define NVDIMM_DSM_OUT_BUF      "ODAT"
> +#define NVDIMM_DSM_OFFSET       "OFST"
> +#define NVDIMM_DSM_TRANS_LEN    "TRSL"
> +#define NVDIMM_DSM_IN_BUFF      "IDAT"
> +
> +#define NVDIMM_DSM_OUT_BUF_SIZE     "RLEN"
> +#define NVDIMM_DSM_OUT_BUF          "ODAT"
> +#define NVDIMM_DSM_OUT_STATUS       "STUS"
> +#define NVDIMM_DSM_OUT_LSA_SIZE     "SIZE"
> +#define NVDIMM_DSM_OUT_MAX_TRANS    "MAXT"
> +
>  
>  #define NVDIMM_DSM_RFIT_STATUS  "RSTA"
>  
> @@ -938,7 +1020,6 @@ static void nvdimm_build_common_dsm(Aml *dev,
>      Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
>      Aml *whilectx, *offset;
>      uint8_t byte_list[1];
> -    AmlRegionSpace rs;
>  
>      method = aml_method(NVDIMM_COMMON_DSM, 5, AML_SERIALIZED);
>      uuid = aml_arg(0);
> @@ -949,37 +1030,15 @@ static void nvdimm_build_common_dsm(Aml *dev,
>  
>      aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR), dsm_mem));
>  
> -    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> -        rs = AML_SYSTEM_IO;
> -    } else {
> -        rs = AML_SYSTEM_MEMORY;
> -    }
> -
> -    /* map DSM memory and IO into ACPI namespace. */
> -    aml_append(method, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> -               aml_int(nvdimm_state->dsm_io.address),
> -               nvdimm_state->dsm_io.bit_width >> 3));
>      aml_append(method, aml_operation_region(NVDIMM_DSM_MEMORY,
> -               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmDsmIn)));
> -
> -    /*
> -     * DSM notifier:
> -     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
> -     *                    emulate the access.
> -     *
> -     * It is the IO port so that accessing them will cause VM-exit, the
> -     * control will be transferred to QEMU.
> -     */
> -    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
> -                      AML_PRESERVE);
> -    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> -               nvdimm_state->dsm_io.bit_width));
> -    aml_append(method, field);
> +               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmMthdIn)));
>  
>      /*
>       * DSM input:
>       * NVDIMM_DSM_HANDLE: store device's handle, it's zero if the _DSM call
>       *                    happens on NVDIMM Root Device.
> +     * NVDIMM_DSM_METHOD: ACPI method indicator, to distinguish _DSM and
> +     *                    other ACPI methods.
>       * NVDIMM_DSM_REVISION: store the Arg1 of _DSM call.
>       * NVDIMM_DSM_FUNCTION: store the Arg2 of _DSM call.
>       * NVDIMM_DSM_ARG3: store the Arg3 of _DSM call which is a Package
> @@ -991,13 +1050,16 @@ static void nvdimm_build_common_dsm(Aml *dev,
>      field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
>                        AML_PRESERVE);
>      aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> -               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
> +               sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> +               sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
>      aml_append(field, aml_named_field(NVDIMM_DSM_REVISION,
>                 sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
>      aml_append(field, aml_named_field(NVDIMM_DSM_FUNCTION,
>                 sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
>      aml_append(field, aml_named_field(NVDIMM_DSM_ARG3,
> -         (sizeof(NvdimmDsmIn) - offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
> +         (sizeof(NvdimmMthdIn) - offsetof(NvdimmMthdIn, args) -
> +          offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
>      aml_append(method, field);
>  
>      /*
> @@ -1065,6 +1127,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
>       * it reserves 0 for root device and is the handle for NVDIMM devices.
>       * See the comments in nvdimm_slot_to_handle().
>       */
> +    aml_append(method, aml_store(aml_int(0), aml_name(NVDIMM_DSM_METHOD)));
>      aml_append(method, aml_store(handle, aml_name(NVDIMM_DSM_HANDLE)));
>      aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_REVISION)));
>      aml_append(method, aml_store(function, aml_name(NVDIMM_DSM_FUNCTION)));
> @@ -1250,6 +1313,7 @@ static void nvdimm_build_fit(Aml *dev)
>  static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
>  {
>      uint32_t slot;
> +    Aml *method, *pkg, *field;
>  
>      for (slot = 0; slot < ram_slots; slot++) {
>          uint32_t handle = nvdimm_slot_to_handle(slot);
> @@ -1266,6 +1330,155 @@ static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
>           * table NFIT or _FIT.
>           */
>          aml_append(nvdimm_dev, aml_name_decl("_ADR", aml_int(handle)));
> +        aml_append(nvdimm_dev, aml_operation_region(NVDIMM_DSM_MEMORY,
> +                   AML_SYSTEM_MEMORY, aml_name(NVDIMM_ACPI_MEM_ADDR),
> +                   sizeof(NvdimmMthdIn)));
> +
> +        /* ACPI 6.4: 6.5.10 NVDIMM Label Methods, _LS{I,R,W} */
> +
> +        /* Begin of _LSI Block */
> +        method = aml_method("_LSI", 0, AML_SERIALIZED);
> +        /* _LSI Input field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> +                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> +                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> +        aml_append(method, field);
> +
> +        /* _LSI Output field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, len)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> +                   func_ret_status)) * BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_LSA_SIZE,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, label_size)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_MAX_TRANS,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, max_xfer)) *
> +                   BITS_PER_BYTE));
> +        aml_append(method, field);
> +
> +        aml_append(method, aml_store(aml_int(handle),
> +                                      aml_name(NVDIMM_DSM_HANDLE)));
> +        aml_append(method, aml_store(aml_int(0x100),
> +                                      aml_name(NVDIMM_DSM_METHOD)));
> +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> +
> +        pkg = aml_package(3);
> +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_LSA_SIZE));
> +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_MAX_TRANS));
> +
> +        aml_append(method, aml_name_decl("RPKG", pkg));
> +
> +        aml_append(method, aml_return(aml_name("RPKG")));
> +        aml_append(nvdimm_dev, method); /* End of _LSI Block */
> +
> +        /* Begin of _LSR Block */
> +        method = aml_method("_LSR", 2, AML_SERIALIZED);
> +
> +        /* _LSR Input field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, offset)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, length)) *
> +                   BITS_PER_BYTE));
> +        aml_append(method, field);
> +
> +        /* _LSR Output field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut, len)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
> +                   func_ret_status)) * BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF,
> +                   (NVDIMM_DSM_MEMORY_SIZE -
> +                    offsetof(NvdimmFuncGetLabelDataOut, out_buf)) *
> +                    BITS_PER_BYTE));
> +        aml_append(method, field);
> +
> +        aml_append(method, aml_store(aml_int(handle),
> +                                      aml_name(NVDIMM_DSM_HANDLE)));
> +        aml_append(method, aml_store(aml_int(0x101),
> +                                      aml_name(NVDIMM_DSM_METHOD)));
> +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> +        aml_append(method, aml_store(aml_arg(1),
> +                                      aml_name(NVDIMM_DSM_TRANS_LEN)));
> +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> +
> +        aml_append(method, aml_store(aml_shiftleft(aml_arg(1), aml_int(3)),
> +                                         aml_local(1)));
> +        aml_append(method, aml_create_field(aml_name(NVDIMM_DSM_OUT_BUF),
> +                   aml_int(0), aml_local(1), "OBUF"));
> +
> +        pkg = aml_package(2);
> +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> +        aml_append(pkg, aml_name("OBUF"));
> +        aml_append(method, aml_name_decl("RPKG", pkg));
> +
> +        aml_append(method, aml_return(aml_name("RPKG")));
> +        aml_append(nvdimm_dev, method); /* End of _LSR Block */
> +
> +        /* Begin of _LSW Block */
> +        method = aml_method("_LSW", 3, AML_SERIALIZED);
> +        /* _LSW Input field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> +                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> +                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, offset)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, length)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_IN_BUFF, 32640));
> +        aml_append(method, field);
> +
> +        /* _LSW Output field */
> +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> +                          AML_PRESERVE);
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut, len)) *
> +                   BITS_PER_BYTE));
> +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
> +                   func_ret_status)) * BITS_PER_BYTE));
> +        aml_append(method, field);
> +
> +        aml_append(method, aml_store(aml_int(handle), aml_name(NVDIMM_DSM_HANDLE)));
> +        aml_append(method, aml_store(aml_int(0x102), aml_name(NVDIMM_DSM_METHOD)));
> +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> +        aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_TRANS_LEN)));
> +        aml_append(method, aml_store(aml_arg(2), aml_name(NVDIMM_DSM_IN_BUFF)));
> +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> +
> +        aml_append(method, aml_return(aml_name(NVDIMM_DSM_OUT_STATUS)));
> +        aml_append(nvdimm_dev, method); /* End of _LSW Block */
>  
>          nvdimm_build_device_dsm(nvdimm_dev, handle);
>          aml_append(root_dev, nvdimm_dev);
> @@ -1278,7 +1491,8 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>                                uint32_t ram_slots, const char *oem_id)
>  {
>      int mem_addr_offset;
> -    Aml *ssdt, *sb_scope, *dev;
> +    Aml *ssdt, *sb_scope, *dev, *field;
> +    AmlRegionSpace rs;
>      AcpiTable table = { .sig = "SSDT", .rev = 1,
>                          .oem_id = oem_id, .oem_table_id = "NVDIMM" };
>  
> @@ -1286,6 +1500,9 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>  
>      acpi_table_begin(&table, table_data);
>      ssdt = init_aml_allocator();
> +
> +    mem_addr_offset = build_append_named_dword(table_data,
> +                                               NVDIMM_ACPI_MEM_ADDR);
>      sb_scope = aml_scope("\\_SB");
>  
>      dev = aml_device("NVDR");
> @@ -1303,6 +1520,31 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>       */
>      aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
>  
> +    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> +        rs = AML_SYSTEM_IO;
> +    } else {
> +        rs = AML_SYSTEM_MEMORY;
> +    }
> +
> +    /* map DSM memory and IO into ACPI namespace. */
> +    aml_append(dev, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> +               aml_int(nvdimm_state->dsm_io.address),
> +               nvdimm_state->dsm_io.bit_width >> 3));
> +
> +    /*
> +     * DSM notifier:
> +     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
> +     *                    emulate the access.
> +     *
> +     * This is an IO port, so accessing it causes a VM-exit and
> +     * control is transferred to QEMU.
> +     */
> +    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
> +                      AML_PRESERVE);
> +    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> +               nvdimm_state->dsm_io.bit_width));
> +    aml_append(dev, field);
> +
>      nvdimm_build_common_dsm(dev, nvdimm_state);
>  
>      /* 0 is reserved for root device. */
> @@ -1316,12 +1558,10 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>  
>      /* copy AML table into ACPI tables blob and patch header there */
>      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> -    mem_addr_offset = build_append_named_dword(table_data,
> -                                               NVDIMM_ACPI_MEM_ADDR);
>  
>      bios_linker_loader_alloc(linker,
>                               NVDIMM_DSM_MEM_FILE, nvdimm_state->dsm_mem,
> -                             sizeof(NvdimmDsmIn), false /* high memory */);
> +                             sizeof(NvdimmMthdIn), false /* high memory */);
>      bios_linker_loader_add_pointer(linker,
>          ACPI_BUILD_TABLE_FILE, mem_addr_offset, sizeof(uint32_t),
>          NVDIMM_DSM_MEM_FILE, 0);
> diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> index cf8f59be44..0206b6125b 100644
> --- a/include/hw/mem/nvdimm.h
> +++ b/include/hw/mem/nvdimm.h
> @@ -37,6 +37,12 @@
>          }                                                     \
>      } while (0)
>  
> +/* NVDIMM ACPI Methods */
> +#define NVDIMM_METHOD_DSM   0
> +#define NVDIMM_METHOD_LSI   0x100
> +#define NVDIMM_METHOD_LSR   0x101
> +#define NVDIMM_METHOD_LSW   0x102
> +
>  /*
>   * The minimum label data size is required by NVDIMM Namespace
>   * specification, see the chapter 2 Namespaces:




* Re: [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug()
  2022-05-30  3:40 ` [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug() Robert Hoo
@ 2022-06-16 12:35   ` Igor Mammedov
  2022-07-01  8:35     ` Robert Hoo
  0 siblings, 1 reply; 20+ messages in thread
From: Igor Mammedov @ 2022-06-16 12:35 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Mon, 30 May 2022 11:40:47 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

I suggest putting this patch first in the series.
(Well, you could rebase it on current master and
post it right away for merging, since it doesn't
really depend on the other patches; new patches posted
on top, whenever they are ready, can then use tracing.)

> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> ---
>  hw/acpi/nvdimm.c        | 38 ++++++++++++++++++--------------------
>  hw/acpi/trace-events    | 14 ++++++++++++++
>  include/hw/mem/nvdimm.h |  8 --------
>  3 files changed, 32 insertions(+), 28 deletions(-)
> 
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index 50ee85866b..fc777990e6 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -35,6 +35,7 @@
>  #include "hw/nvram/fw_cfg.h"
>  #include "hw/mem/nvdimm.h"
>  #include "qemu/nvdimm-utils.h"
> +#include "trace.h"
>  
>  /*
>   * define Byte Addressable Persistent Memory (PM) Region according to
> @@ -558,8 +559,8 @@ static void nvdimm_dsm_func_read_fit(NVDIMMState *state, NvdimmDsmIn *in,
>  
>      fit = fit_buf->fit;
>  
> -    nvdimm_debug("Read FIT: offset 0x%x FIT size 0x%x Dirty %s.\n",
> -                 read_fit->offset, fit->len, fit_buf->dirty ? "Yes" : "No");
> +    trace_acpi_nvdimm_read_fit(read_fit->offset, fit->len,
> +                               fit_buf->dirty ? "Yes" : "No");
>  
>      if (read_fit->offset > fit->len) {
>          func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
> @@ -667,7 +668,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
>      label_size = nvdimm->label_size;
>      mxfer = nvdimm_get_max_xfer_label_size();
>  
> -    nvdimm_debug("label_size 0x%x, max_xfer 0x%x.\n", label_size, mxfer);
> +    trace_acpi_nvdimm_label_info(label_size, mxfer);
>  
>      label_size_out.func_ret_status = cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
>      label_size_out.label_size = cpu_to_le32(label_size);
> @@ -683,20 +684,18 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
>      uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
>  
>      if (offset + length < offset) {
> -        nvdimm_debug("offset 0x%x + length 0x%x is overflow.\n", offset,
> -                     length);
> +        trace_acpi_nvdimm_label_overflow(offset, length);
>          return ret;
>      }
>  
>      if (nvdimm->label_size < offset + length) {
> -        nvdimm_debug("position 0x%x is beyond label data (len = %" PRIx64 ").\n",
> -                     offset + length, nvdimm->label_size);
> +        trace_acpi_nvdimm_label_oversize(offset + length, nvdimm->label_size);
>          return ret;
>      }
>  
>      if (length > nvdimm_get_max_xfer_label_size()) {
> -        nvdimm_debug("length (0x%x) is larger than max_xfer (0x%x).\n",
> -                     length, nvdimm_get_max_xfer_label_size());
> +        trace_acpi_nvdimm_label_xfer_exceed(length,
> +                                            nvdimm_get_max_xfer_label_size());
>          return ret;
>      }
>  
> @@ -718,8 +717,8 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
>      get_label_data->offset = le32_to_cpu(get_label_data->offset);
>      get_label_data->length = le32_to_cpu(get_label_data->length);
>  
> -    nvdimm_debug("Read Label Data: offset 0x%x length 0x%x.\n",
> -                 get_label_data->offset, get_label_data->length);
> +    trace_acpi_nvdimm_read_label(get_label_data->offset,
> +                                 get_label_data->length);
>  
>      status = nvdimm_rw_label_data_check(nvdimm, get_label_data->offset,
>                                          get_label_data->length);
> @@ -755,8 +754,8 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
>      set_label_data->offset = le32_to_cpu(set_label_data->offset);
>      set_label_data->length = le32_to_cpu(set_label_data->length);
>  
> -    nvdimm_debug("Write Label Data: offset 0x%x length 0x%x.\n",
> -                 set_label_data->offset, set_label_data->length);
> +    trace_acpi_nvdimm_write_label(set_label_data->offset,
> +                                  set_label_data->length);
>  
>      status = nvdimm_rw_label_data_check(nvdimm, set_label_data->offset,
>                                          set_label_data->length);
> @@ -833,7 +832,7 @@ static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
>  static uint64_t
>  nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
>  {
> -    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
> +    trace_acpi_nvdimm_read_io_port();
>      return 0;
>  }
>  
> @@ -843,20 +842,19 @@ nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
>      NVDIMMState *state = opaque;
>      NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
>  
> -    nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
> +    trace_acpi_nvdimm_dsm_mem_addr(dsm_mem_addr);
>  
>      dsm_in->revision = le32_to_cpu(dsm_in->revision);
>      dsm_in->function = le32_to_cpu(dsm_in->function);
>  
> -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> -                 dsm_in->revision, method_in->handle, dsm_in->function);
> +    trace_acpi_nvdimm_dsm_info(dsm_in->revision,
> +                 method_in->handle, dsm_in->function);
>      /*
>       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
>       * Intel® Optane Persistent Memory Module DSM Interface, Revision 2.0
>       */
>      if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
> -        nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
> -                     dsm_in->revision);
> +        trace_acpi_nvdimm_invalid_revision(dsm_in->revision);
>          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
>          return;
>      }
> @@ -943,7 +941,7 @@ nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
>          nvdimm_lsw_handle(method_in->handle, method_in->args, dsm_mem_addr);
>          break;
>      default:
> -        nvdimm_debug("%s: Unkown method 0x%x\n", __func__, method_in->method);
> +        trace_acpi_nvdimm_invalid_method(method_in->method);
>          break;
>      }
>  
> diff --git a/hw/acpi/trace-events b/hw/acpi/trace-events
> index 2250126a22..db4c69009f 100644
> --- a/hw/acpi/trace-events
> +++ b/hw/acpi/trace-events
> @@ -70,3 +70,17 @@ acpi_erst_reset_out(unsigned record_count) "record_count %u"
>  acpi_erst_post_load(void *header, unsigned slot_size) "header: 0x%p slot_size %u"
>  acpi_erst_class_init_in(void)
>  acpi_erst_class_init_out(void)
> +
> +# nvdimm.c
> +acpi_nvdimm_read_fit(uint32_t offset, uint32_t len, const char *dirty) "Read FIT: offset 0x%" PRIx32 " FIT size 0x%" PRIx32 " Dirty %s"
> +acpi_nvdimm_label_info(uint32_t label_size, uint32_t mxfer) "label_size 0x%" PRIx32 ", max_xfer 0x%" PRIx32
> +acpi_nvdimm_label_overflow(uint32_t offset, uint32_t length) "offset 0x%" PRIx32 " + length 0x%" PRIx32 " is overflow"
> +acpi_nvdimm_label_oversize(uint32_t pos, uint64_t size) "position 0x%" PRIx32 " is beyond label data (len = %" PRIu64 ")"
> +acpi_nvdimm_label_xfer_exceed(uint32_t length, uint32_t max_xfer) "length (0x%" PRIx32 ") is larger than max_xfer (0x%" PRIx32 ")"
> +acpi_nvdimm_read_label(uint32_t offset, uint32_t length) "Read Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
> +acpi_nvdimm_write_label(uint32_t offset, uint32_t length) "Write Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
> +acpi_nvdimm_read_io_port(void) "Alert: we never read NVDIMM Method IO Port"
> +acpi_nvdimm_dsm_mem_addr(uint64_t dsm_mem_addr) "dsm memory address 0x%" PRIx64
> +acpi_nvdimm_dsm_info(uint32_t revision, uint32_t handle, uint32_t function) "Revision 0x%" PRIx32 " Handle 0x%" PRIx32 " Function 0x%" PRIx32
> +acpi_nvdimm_invalid_revision(uint32_t revision) "Revision 0x%" PRIx32 " is not supported, expect 0x1 or 0x2"
> +acpi_nvdimm_invalid_method(uint32_t method) "Unknown method %" PRId32
> diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> index 0206b6125b..c83e273829 100644
> --- a/include/hw/mem/nvdimm.h
> +++ b/include/hw/mem/nvdimm.h
> @@ -29,14 +29,6 @@
>  #include "hw/acpi/aml-build.h"
>  #include "qom/object.h"
>  
> -#define NVDIMM_DEBUG 0
> -#define nvdimm_debug(fmt, ...)                                \
> -    do {                                                      \
> -        if (NVDIMM_DEBUG) {                                   \
> -            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
> -        }                                                     \
> -    } while (0)
> -
>  /* NVDIMM ACPI Methods */
>  #define NVDIMM_METHOD_DSM   0
>  #define NVDIMM_METHOD_LSI   0x100




* Re: [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2
  2022-06-16 11:38   ` Igor Mammedov
@ 2022-07-01  8:31     ` Robert Hoo
  0 siblings, 0 replies; 20+ messages in thread
From: Robert Hoo @ 2022-07-01  8:31 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Thu, 2022-06-16 at 13:38 +0200, Igor Mammedov wrote:
> On Mon, 30 May 2022 11:40:44 +0800
> Robert Hoo <robert.hu@linux.intel.com> wrote:
> 
> > The Intel Optane PMem DSM Interface, Version 2.0 [1], is the up-to-
> > date
> > spec for NVDIMM _DSM definition, which supports revision_id == 2.
> > 
> > Nevertheless, Rev.2 of NVDIMM _DSM has no functional change on
> > those Label
> > Data _DSM Functions, which are the only ones implemented for
> > vNVDIMM.
> > So, simple change to support this revision_id == 2 case.
> > 
> > [1] 
> > https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> 
> pls enumerate functions that QEMU implement and that are supported by
> rev=2,
> do we really need rev2 ?

Whether rev.1 or rev.2, current QEMU implements only the three label
functions: Get Namespace Label Data Size (function index 4), Get
Namespace Label Data (function index 5), and Set Namespace Label Data
(function index 6). In both rev.1 and rev.2, these three _DSM label
functions are deprecated in favor of the ACPI Label Methods. So, okay,
we don't really need rev.2 at present.
> 
> also don't we need make sure that rev1 only function are excluded?
> /spec above says, functions 3-6 are deprecated and limited to rev1
> only/
> "
> Warning: This function has been deprecated in preference to the ACPI
> 6.2 _LSW (Label Storage Write)
> NVDIMM Device Interface and is only supported with Arg1 – Revision Id
> = 1. It is included here for
> backwards compatibility with existing Arg1 - Revision Id = 1
> implementations.
> "
Well, they're deprecated, not obsolete, so they're still included, I think.
Anyway, as said above, we don't need this patch at the moment; let's
keep the code unchanged.
> 
> > 
> > Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> > Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> > ---
> >  hw/acpi/nvdimm.c | 10 +++++++---
> >  1 file changed, 7 insertions(+), 3 deletions(-)
> > 
> > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > index 0ab247a870..59b42afcf1 100644
> > --- a/hw/acpi/nvdimm.c
> > +++ b/hw/acpi/nvdimm.c
> > @@ -849,9 +849,13 @@ nvdimm_dsm_write(void *opaque, hwaddr addr,
> > uint64_t val, unsigned size)
> >      nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > in->revision,
> >                   in->handle, in->function);
> >  
> > -    if (in->revision != 0x1 /* Currently we only support DSM Spec
> > Rev1. */) {
> > -        nvdimm_debug("Revision 0x%x is not supported, expect
> > 0x%x.\n",
> > -                     in->revision, 0x1);
> > +    /*
> > +     * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> > +     * Intel® Optane Persistent Memory Module DSM Interface,
> > Revision 2.0
> > +     */
> > +    if (in->revision != 0x1 && in->revision != 0x2) {
> > +        nvdimm_debug("Revision 0x%x is not supported, expect 0x1
> > or 0x2.\n",
> > +                     in->revision);
> >          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT,
> > dsm_mem_addr);
> >          goto exit;
> >      }
> 
> 




* Re: [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug()
  2022-06-16 12:35   ` Igor Mammedov
@ 2022-07-01  8:35     ` Robert Hoo
  0 siblings, 0 replies; 20+ messages in thread
From: Robert Hoo @ 2022-07-01  8:35 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Thu, 2022-06-16 at 14:35 +0200, Igor Mammedov wrote:
> On Mon, 30 May 2022 11:40:47 +0800
> Robert Hoo <robert.hu@linux.intel.com> wrote:
> 
> I suggest putting this patch first in the series.
> (Well, you could rebase it on current master and
> post it right away for merging, since it doesn't
> really depend on the other patches; new patches posted
> on top, whenever they are ready, can then use tracing.)

OK
> 
> > Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> > Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> > ---
> >  hw/acpi/nvdimm.c        | 38 ++++++++++++++++++-------------------
> > -
> >  hw/acpi/trace-events    | 14 ++++++++++++++
> >  include/hw/mem/nvdimm.h |  8 --------
> >  3 files changed, 32 insertions(+), 28 deletions(-)
> > 
> > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > index 50ee85866b..fc777990e6 100644
> > --- a/hw/acpi/nvdimm.c
> > +++ b/hw/acpi/nvdimm.c
> > @@ -35,6 +35,7 @@
> >  #include "hw/nvram/fw_cfg.h"
> >  #include "hw/mem/nvdimm.h"
> >  #include "qemu/nvdimm-utils.h"
> > +#include "trace.h"
> >  
> >  /*
> >   * define Byte Addressable Persistent Memory (PM) Region according
> > to
> > @@ -558,8 +559,8 @@ static void
> > nvdimm_dsm_func_read_fit(NVDIMMState *state, NvdimmDsmIn *in,
> >  
> >      fit = fit_buf->fit;
> >  
> > -    nvdimm_debug("Read FIT: offset 0x%x FIT size 0x%x Dirty
> > %s.\n",
> > -                 read_fit->offset, fit->len, fit_buf->dirty ?
> > "Yes" : "No");
> > +    trace_acpi_nvdimm_read_fit(read_fit->offset, fit->len,
> > +                               fit_buf->dirty ? "Yes" : "No");
> >  
> >      if (read_fit->offset > fit->len) {
> >          func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
> > @@ -667,7 +668,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice
> > *nvdimm, hwaddr dsm_mem_addr)
> >      label_size = nvdimm->label_size;
> >      mxfer = nvdimm_get_max_xfer_label_size();
> >  
> > -    nvdimm_debug("label_size 0x%x, max_xfer 0x%x.\n", label_size,
> > mxfer);
> > +    trace_acpi_nvdimm_label_info(label_size, mxfer);
> >  
> >      label_size_out.func_ret_status =
> > cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
> >      label_size_out.label_size = cpu_to_le32(label_size);
> > @@ -683,20 +684,18 @@ static uint32_t
> > nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
> >      uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
> >  
> >      if (offset + length < offset) {
> > -        nvdimm_debug("offset 0x%x + length 0x%x is overflow.\n",
> > offset,
> > -                     length);
> > +        trace_acpi_nvdimm_label_overflow(offset, length);
> >          return ret;
> >      }
> >  
> >      if (nvdimm->label_size < offset + length) {
> > -        nvdimm_debug("position 0x%x is beyond label data (len = %"
> > PRIx64 ").\n",
> > -                     offset + length, nvdimm->label_size);
> > +        trace_acpi_nvdimm_label_oversize(offset + length, nvdimm-
> > >label_size);
> >          return ret;
> >      }
> >  
> >      if (length > nvdimm_get_max_xfer_label_size()) {
> > -        nvdimm_debug("length (0x%x) is larger than max_xfer
> > (0x%x).\n",
> > -                     length, nvdimm_get_max_xfer_label_size());
> > +        trace_acpi_nvdimm_label_xfer_exceed(length,
> > +                                            nvdimm_get_max_xfer_la
> > bel_size());
> >          return ret;
> >      }
> >  
> > @@ -718,8 +717,8 @@ static void
> > nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> >      get_label_data->offset = le32_to_cpu(get_label_data->offset);
> >      get_label_data->length = le32_to_cpu(get_label_data->length);
> >  
> > -    nvdimm_debug("Read Label Data: offset 0x%x length 0x%x.\n",
> > -                 get_label_data->offset, get_label_data->length);
> > +    trace_acpi_nvdimm_read_label(get_label_data->offset,
> > +                                 get_label_data->length);
> >  
> >      status = nvdimm_rw_label_data_check(nvdimm, get_label_data-
> > >offset,
> >                                          get_label_data->length);
> > @@ -755,8 +754,8 @@ static void
> > nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> >      set_label_data->offset = le32_to_cpu(set_label_data->offset);
> >      set_label_data->length = le32_to_cpu(set_label_data->length);
> >  
> > -    nvdimm_debug("Write Label Data: offset 0x%x length 0x%x.\n",
> > -                 set_label_data->offset, set_label_data->length);
> > +    trace_acpi_nvdimm_write_label(set_label_data->offset,
> > +                                  set_label_data->length);
> >  
> >      status = nvdimm_rw_label_data_check(nvdimm, set_label_data-
> > >offset,
> >                                          set_label_data->length);
> > @@ -833,7 +832,7 @@ static void nvdimm_dsm_device(uint32_t
> > nv_handle, NvdimmDsmIn *dsm_in,
> >  static uint64_t
> >  nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
> >  {
> > -    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
> > +    trace_acpi_nvdimm_read_io_port();
> >      return 0;
> >  }
> >  
> > @@ -843,20 +842,19 @@ nvdimm_dsm_handle(void *opaque, NvdimmMthdIn
> > *method_in, hwaddr dsm_mem_addr)
> >      NVDIMMState *state = opaque;
> >      NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
> >  
> > -    nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n",
> > dsm_mem_addr);
> > +    trace_acpi_nvdimm_dsm_mem_addr(dsm_mem_addr);
> >  
> >      dsm_in->revision = le32_to_cpu(dsm_in->revision);
> >      dsm_in->function = le32_to_cpu(dsm_in->function);
> >  
> > -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > -                 dsm_in->revision, method_in->handle, dsm_in-
> > >function);
> > +    trace_acpi_nvdimm_dsm_info(dsm_in->revision,
> > +                 method_in->handle, dsm_in->function);
> >      /*
> >       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> >       * Intel® OptanePersistent Memory Module DSM Interface,
> > Revision 2.0
> >       */
> >      if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
> > -        nvdimm_debug("Revision 0x%x is not supported, expect 0x1
> > or 0x2.\n",
> > -                     dsm_in->revision);
> > +        trace_acpi_nvdimm_invalid_revision(dsm_in->revision);
> >          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT,
> > dsm_mem_addr);
> >          return;
> >      }
> > @@ -943,7 +941,7 @@ nvdimm_method_write(void *opaque, hwaddr addr,
> > uint64_t val, unsigned size)
> >          nvdimm_lsw_handle(method_in->handle, method_in->args,
> > dsm_mem_addr);
> >          break;
> >      default:
> > -        nvdimm_debug("%s: Unkown method 0x%x\n", __func__,
> > method_in->method);
> > +        trace_acpi_nvdimm_invalid_method(method_in->method);
> >          break;
> >      }
> >  
> > diff --git a/hw/acpi/trace-events b/hw/acpi/trace-events
> > index 2250126a22..db4c69009f 100644
> > --- a/hw/acpi/trace-events
> > +++ b/hw/acpi/trace-events
> > @@ -70,3 +70,17 @@ acpi_erst_reset_out(unsigned record_count)
> > "record_count %u"
> >  acpi_erst_post_load(void *header, unsigned slot_size) "header:
> > 0x%p slot_size %u"
> >  acpi_erst_class_init_in(void)
> >  acpi_erst_class_init_out(void)
> > +
> > +# nvdimm.c
> > +acpi_nvdimm_read_fit(uint32_t offset, uint32_t len, const char
> > *dirty) "Read FIT: offset 0x%" PRIx32 " FIT size 0x%" PRIx32 "
> > Dirty %s"
> > +acpi_nvdimm_label_info(uint32_t label_size, uint32_t mxfer)
> > "label_size 0x%" PRIx32 ", max_xfer 0x%" PRIx32
> > +acpi_nvdimm_label_overflow(uint32_t offset, uint32_t length)
> > "offset 0x%" PRIx32 " + length 0x%" PRIx32 " is overflow"
> > +acpi_nvdimm_label_oversize(uint32_t pos, uint64_t size) "position
> > 0x%" PRIx32 " is beyond label data (len = %" PRIu64 ")"
> > +acpi_nvdimm_label_xfer_exceed(uint32_t length, uint32_t max_xfer)
> > "length (0x%" PRIx32 ") is larger than max_xfer (0x%" PRIx32 ")"
> > +acpi_nvdimm_read_label(uint32_t offset, uint32_t length) "Read
> > Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
> > +acpi_nvdimm_write_label(uint32_t offset, uint32_t length) "Write
> > Label Data: offset 0x%" PRIx32 " length 0x%" PRIx32
> > +acpi_nvdimm_read_io_port(void) "Alert: we never read NVDIMM Method
> > IO Port"
> > +acpi_nvdimm_dsm_mem_addr(uint64_t dsm_mem_addr) "dsm memory
> > address 0x%" PRIx64
> > +acpi_nvdimm_dsm_info(uint32_t revision, uint32_t handle, uint32_t
> > function) "Revision 0x%" PRIx32 " Handle 0x%" PRIx32 " Function
> > 0x%" PRIx32
> > +acpi_nvdimm_invalid_revision(uint32_t revision) "Revision 0x%"
> > PRIx32 " is not supported, expect 0x1 or 0x2"
> > +acpi_nvdimm_invalid_method(uint32_t method) "Unkown method %"
> > PRId32
> > diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> > index 0206b6125b..c83e273829 100644
> > --- a/include/hw/mem/nvdimm.h
> > +++ b/include/hw/mem/nvdimm.h
> > @@ -29,14 +29,6 @@
> >  #include "hw/acpi/aml-build.h"
> >  #include "qom/object.h"
> >  
> > -#define NVDIMM_DEBUG 0
> > -#define nvdimm_debug(fmt, ...)                                \
> > -    do {                                                      \
> > -        if (NVDIMM_DEBUG) {                                   \
> > -            fprintf(stderr, "nvdimm: " fmt, ## __VA_ARGS__);  \
> > -        }                                                     \
> > -    } while (0)
> > -
> >  /* NVDIMM ACPI Methods */
> >  #define NVDIMM_METHOD_DSM   0
> >  #define NVDIMM_METHOD_LSI   0x100
> 
> 




* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-06-16 12:32   ` Igor Mammedov
@ 2022-07-01  9:23     ` Robert Hoo
  2022-07-19  2:46       ` Robert Hoo
  2022-07-21  8:58       ` Igor Mammedov
  0 siblings, 2 replies; 20+ messages in thread
From: Robert Hoo @ 2022-07-01  9:23 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Thu, 2022-06-16 at 14:32 +0200, Igor Mammedov wrote:
> On Mon, 30 May 2022 11:40:45 +0800
> Robert Hoo <robert.hu@linux.intel.com> wrote:
> 
> > Recent ACPI spec [1] has defined NVDIMM Label Methods _LS{I,R,W},
> > which
> > deprecates corresponding _DSM Functions defined by PMEM _DSM
> > Interface spec
> > [2].
> > 
> > In this implementation, we do 2 things
> > 1. Generalize the QEMU<->ACPI BIOS NVDIMM interface, wrap it with
> > ACPI
> > method dispatch, _DSM is one of the branches. This also paves the
> > way for
> > adding other ACPI methods for NVDIMM.
> > 2. Add _LS{I,R,W} method in each NVDIMM device in SSDT.
> > ASL form of SSDT changes can be found in next test/qtest/bios-
> > table-test
> > commit message.
> > 
> > [1] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
> > https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
> > [2] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
> > https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> > 
> > Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> > Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> > ---
> >  hw/acpi/nvdimm.c        | 424 +++++++++++++++++++++++++++++++-----
> > ----
> 
> This patch is too large and doing to many things to be reviewable.
> It needs to be split into smaller distinct chunks.
> (however hold your horses and read on)
> 
> The patch it is too intrusive and my hunch is that it breaks
> ABI and needs a bunch of compat knobs to work properly and
> that I'd like to avoid unless there is not other way around
> the problem.

Is the ABI you mentioned here the "struct NvdimmMthdIn {}" stuff?
And do the compat knobs refer to the related functions' input/output
parameters?

My thought is that, sooner or later, more ACPI methods will have to be
implemented on request, even though for now we could play the trick of
wrapping the new methods over the pipe of the old _DSM implementation.
Although this changes the existing struct NvdimmDsmIn {} a little, it
paves the way for the future; and the change is really an extension or
generalization rather than a fundamental change to the framework.

In short, my point is that this generalization/extension will be
inevitable, if not now then later.
> 
> I was skeptical about this approach during v1 review and
> now I'm pretty much sure it's over-engineered and we can
> just repack data we receive from existing label _DSM functions
> to provide _LS{I,R,W} like it was suggested in v1.
> It will be much simpler and affect only AML side without
> complicating ABI and without any compat cruft and will work
> with ping-pong migration without any issues.

Ostensibly it may look simpler, but actually it isn't, I think. The AML
"common pipe" NCAL() is already complex: it packs all the _DSM and
NFIT() function logic together, and packing more stuff in/through it
will be bug-prone.
This time we can avoid touching it, since the new ACPI methods that
deprecate the old _DSM functions are functionally almost the same.
But what about next time? Are we always going to pack new method logic
into NCAL()?
My point is that we should implement each new method as itself. Of
course, as a general programming rule, we can and should abstract
common routines, but not pack everything into one large function.
> 
> 
> >  include/hw/mem/nvdimm.h |   6 +
> >  2 files changed, 338 insertions(+), 92 deletions(-)
> > 
> > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > index 59b42afcf1..50ee85866b 100644
> > --- a/hw/acpi/nvdimm.c
> > +++ b/hw/acpi/nvdimm.c
> > @@ -416,17 +416,22 @@ static void nvdimm_build_nfit(NVDIMMState
> > *state, GArray *table_offsets,
> >  
> >  #define NVDIMM_DSM_MEMORY_SIZE      4096
> >  
> > -struct NvdimmDsmIn {
> > +struct NvdimmMthdIn {
> >      uint32_t handle;
> > +    uint32_t method;
> > +    uint8_t  args[4088];
> > +} QEMU_PACKED;
> > +typedef struct NvdimmMthdIn NvdimmMthdIn;
> > +struct NvdimmDsmIn {
> >      uint32_t revision;
> >      uint32_t function;
> >      /* the remaining size in the page is used by arg3. */
> >      union {
> > -        uint8_t arg3[4084];
> > +        uint8_t arg3[4080];
> >      };
> >  } QEMU_PACKED;
> >  typedef struct NvdimmDsmIn NvdimmDsmIn;
> > -QEMU_BUILD_BUG_ON(sizeof(NvdimmDsmIn) != NVDIMM_DSM_MEMORY_SIZE);
> > +QEMU_BUILD_BUG_ON(sizeof(NvdimmMthdIn) != NVDIMM_DSM_MEMORY_SIZE);
> >  
> >  struct NvdimmDsmOut {
> >      /* the size of buffer filled by QEMU. */
> > @@ -470,7 +475,8 @@ struct NvdimmFuncGetLabelDataIn {
> >  } QEMU_PACKED;
> >  typedef struct NvdimmFuncGetLabelDataIn NvdimmFuncGetLabelDataIn;
> >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncGetLabelDataIn) +
> > -                  offsetof(NvdimmDsmIn, arg3) >
> > NVDIMM_DSM_MEMORY_SIZE);
> > +                  offsetof(NvdimmDsmIn, arg3) +
> > offsetof(NvdimmMthdIn, args) >
> > +                  NVDIMM_DSM_MEMORY_SIZE);
> >  
> >  struct NvdimmFuncGetLabelDataOut {
> >      /* the size of buffer filled by QEMU. */
> > @@ -488,14 +494,16 @@ struct NvdimmFuncSetLabelDataIn {
> >  } QEMU_PACKED;
> >  typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
> >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
> > -                  offsetof(NvdimmDsmIn, arg3) >
> > NVDIMM_DSM_MEMORY_SIZE);
> > +                  offsetof(NvdimmDsmIn, arg3) +
> > offsetof(NvdimmMthdIn, args) >
> > +                  NVDIMM_DSM_MEMORY_SIZE);
> >  
> >  struct NvdimmFuncReadFITIn {
> >      uint32_t offset; /* the offset into FIT buffer. */
> >  } QEMU_PACKED;
> >  typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
> >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> > -                  offsetof(NvdimmDsmIn, arg3) >
> > NVDIMM_DSM_MEMORY_SIZE);
> > +                  offsetof(NvdimmDsmIn, arg3) +
> > offsetof(NvdimmMthdIn, args) >
> > +                  NVDIMM_DSM_MEMORY_SIZE);
> >  
> >  struct NvdimmFuncReadFITOut {
> >      /* the size of buffer filled by QEMU. */
> > @@ -636,7 +644,8 @@ static uint32_t
> > nvdimm_get_max_xfer_label_size(void)
> >       * the max data ACPI can write one time which is transferred
> > by
> >       * 'Set Namespace Label Data' function.
> >       */
> > -    max_set_size = dsm_memory_size - offsetof(NvdimmDsmIn, arg3) -
> > +    max_set_size = dsm_memory_size - offsetof(NvdimmMthdIn, args)
> > -
> > +                   offsetof(NvdimmDsmIn, arg3) -
> >                     sizeof(NvdimmFuncSetLabelDataIn);
> >  
> >      return MIN(max_get_size, max_set_size);
> > @@ -697,16 +706,15 @@ static uint32_t
> > nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
> >  /*
> >   * DSM Spec Rev1 4.5 Get Namespace Label Data (Function Index 5).
> >   */
> > -static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> > NvdimmDsmIn *in,
> > -                                      hwaddr dsm_mem_addr)
> > +static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> > +                                    NvdimmFuncGetLabelDataIn
> > *get_label_data,
> > +                                    hwaddr dsm_mem_addr)
> >  {
> >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > -    NvdimmFuncGetLabelDataIn *get_label_data;
> >      NvdimmFuncGetLabelDataOut *get_label_data_out;
> >      uint32_t status;
> >      int size;
> >  
> > -    get_label_data = (NvdimmFuncGetLabelDataIn *)in->arg3;
> >      get_label_data->offset = le32_to_cpu(get_label_data->offset);
> >      get_label_data->length = le32_to_cpu(get_label_data->length);
> >  
> > @@ -737,15 +745,13 @@ static void
> > nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> >  /*
> >   * DSM Spec Rev1 4.6 Set Namespace Label Data (Function Index 6).
> >   */
> > -static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> > NvdimmDsmIn *in,
> > +static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> > +                                      NvdimmFuncSetLabelDataIn
> > *set_label_data,
> >                                        hwaddr dsm_mem_addr)
> >  {
> >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > -    NvdimmFuncSetLabelDataIn *set_label_data;
> >      uint32_t status;
> >  
> > -    set_label_data = (NvdimmFuncSetLabelDataIn *)in->arg3;
> > -
> >      set_label_data->offset = le32_to_cpu(set_label_data->offset);
> >      set_label_data->length = le32_to_cpu(set_label_data->length);
> >  
> > @@ -760,19 +766,21 @@ static void
> > nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> >      }
> >  
> >      assert(offsetof(NvdimmDsmIn, arg3) + sizeof(*set_label_data) +
> > -                    set_label_data->length <=
> > NVDIMM_DSM_MEMORY_SIZE);
> > +           set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE -
> > +           offsetof(NvdimmMthdIn, args));
> >  
> >      nvc->write_label_data(nvdimm, set_label_data->in_buf,
> >                            set_label_data->length, set_label_data-
> > >offset);
> >      nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS,
> > dsm_mem_addr);
> >  }
> >  
> > -static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr
> > dsm_mem_addr)
> > +static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn
> > *dsm_in,
> > +                                    hwaddr dsm_mem_addr)
> >  {
> > -    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(in-
> > >handle);
> > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> >  
> >      /* See the comments in nvdimm_dsm_root(). */
> > -    if (!in->function) {
> > +    if (!dsm_in->function) {
> >          uint32_t supported_func = 0;
> >  
> >          if (nvdimm && nvdimm->label_size) {
> > @@ -794,7 +802,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in,
> > hwaddr dsm_mem_addr)
> >      }
> >  
> >      /* Encode DSM function according to DSM Spec Rev1. */
> > -    switch (in->function) {
> > +    switch (dsm_in->function) {
> >      case 4 /* Get Namespace Label Size */:
> >          if (nvdimm->label_size) {
> >              nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > @@ -803,13 +811,17 @@ static void nvdimm_dsm_device(NvdimmDsmIn
> > *in, hwaddr dsm_mem_addr)
> >          break;
> >      case 5 /* Get Namespace Label Data */:
> >          if (nvdimm->label_size) {
> > -            nvdimm_dsm_get_label_data(nvdimm, in, dsm_mem_addr);
> > +            nvdimm_dsm_get_label_data(nvdimm,
> > +                                      (NvdimmFuncGetLabelDataIn
> > *)dsm_in->arg3,
> > +                                      dsm_mem_addr);
> >              return;
> >          }
> >          break;
> >      case 0x6 /* Set Namespace Label Data */:
> >          if (nvdimm->label_size) {
> > -            nvdimm_dsm_set_label_data(nvdimm, in, dsm_mem_addr);
> > +            nvdimm_dsm_set_label_data(nvdimm,
> > +                        (NvdimmFuncSetLabelDataIn *)dsm_in->arg3,
> > +                        dsm_mem_addr);
> >              return;
> >          }
> >          break;
> > @@ -819,67 +831,128 @@ static void nvdimm_dsm_device(NvdimmDsmIn
> > *in, hwaddr dsm_mem_addr)
> >  }
> >  
> >  static uint64_t
> > -nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> > +nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
> >  {
> > -    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
> > +    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
> >      return 0;
> >  }
> >  
> >  static void
> > -nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned
> > size)
> > +nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr
> > dsm_mem_addr)
> >  {
> >      NVDIMMState *state = opaque;
> > -    NvdimmDsmIn *in;
> > -    hwaddr dsm_mem_addr = val;
> > +    NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
> >  
> >      nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n",
> > dsm_mem_addr);
> >  
> > -    /*
> > -     * The DSM memory is mapped to guest address space so an evil
> > guest
> > -     * can change its content while we are doing DSM emulation.
> > Avoid
> > -     * this by copying DSM memory to QEMU local memory.
> > -     */
> > -    in = g_new(NvdimmDsmIn, 1);
> > -    cpu_physical_memory_read(dsm_mem_addr, in, sizeof(*in));
> > -
> > -    in->revision = le32_to_cpu(in->revision);
> > -    in->function = le32_to_cpu(in->function);
> > -    in->handle = le32_to_cpu(in->handle);
> > -
> > -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > in->revision,
> > -                 in->handle, in->function);
> > +    dsm_in->revision = le32_to_cpu(dsm_in->revision);
> > +    dsm_in->function = le32_to_cpu(dsm_in->function);
> >  
> > +    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > +                 dsm_in->revision, method_in->handle, dsm_in-
> > >function);
> >      /*
> >       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> >       * Intel® OptanePersistent Memory Module DSM Interface,
> > Revision 2.0
> >       */
> > -    if (in->revision != 0x1 && in->revision != 0x2) {
> > +    if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
> >          nvdimm_debug("Revision 0x%x is not supported, expect 0x1
> > or 0x2.\n",
> > -                     in->revision);
> > +                     dsm_in->revision);
> >          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT,
> > dsm_mem_addr);
> > -        goto exit;
> > +        return;
> >      }
> >  
> > -    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > -        nvdimm_dsm_handle_reserved_root_method(state, in,
> > dsm_mem_addr);
> > -        goto exit;
> > +    if (method_in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > +        nvdimm_dsm_handle_reserved_root_method(state, dsm_in,
> > dsm_mem_addr);
> > +        return;
> >      }
> >  
> >       /* Handle 0 is reserved for NVDIMM Root Device. */
> > -    if (!in->handle) {
> > -        nvdimm_dsm_root(in, dsm_mem_addr);
> > -        goto exit;
> > +    if (!method_in->handle) {
> > +        nvdimm_dsm_root(dsm_in, dsm_mem_addr);
> > +        return;
> >      }
> >  
> > -    nvdimm_dsm_device(in, dsm_mem_addr);
> > +    nvdimm_dsm_device(method_in->handle, dsm_in, dsm_mem_addr);
> > +}
> >  
> > -exit:
> > -    g_free(in);
> > +static void nvdimm_lsi_handle(uint32_t nv_handle, hwaddr
> > dsm_mem_addr)
> > +{
> > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > +
> > +    if (nvdimm->label_size) {
> > +        nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > +    }
> > +
> > +    return;
> > +}
> > +
> > +static void nvdimm_lsr_handle(uint32_t nv_handle,
> > +                                    void *data,
> > +                                    hwaddr dsm_mem_addr)
> > +{
> > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > +    NvdimmFuncGetLabelDataIn *get_label_data = data;
> > +
> > +    if (nvdimm->label_size) {
> > +        nvdimm_dsm_get_label_data(nvdimm, get_label_data,
> > dsm_mem_addr);
> > +    }
> > +    return;
> > +}
> > +
> > +static void nvdimm_lsw_handle(uint32_t nv_handle,
> > +                                    void *data,
> > +                                    hwaddr dsm_mem_addr)
> > +{
> > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > +    NvdimmFuncSetLabelDataIn *set_label_data = data;
> > +
> > +    if (nvdimm->label_size) {
> > +        nvdimm_dsm_set_label_data(nvdimm, set_label_data,
> > dsm_mem_addr);
> > +    }
> > +    return;
> > +}
> > +
> > +static void
> > +nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val,
> > unsigned size)
> > +{
> > +    NvdimmMthdIn *method_in;
> > +    hwaddr dsm_mem_addr = val;
> > +
> > +    /*
> > +     * The DSM memory is mapped to guest address space so an evil
> > guest
> > +     * can change its content while we are doing DSM emulation.
> > Avoid
> > +     * this by copying DSM memory to QEMU local memory.
> > +     */
> > +    method_in = g_new(NvdimmMthdIn, 1);
> > +    cpu_physical_memory_read(dsm_mem_addr, method_in,
> > sizeof(*method_in));
> > +
> > +    method_in->handle = le32_to_cpu(method_in->handle);
> > +    method_in->method = le32_to_cpu(method_in->method);
> > +
> > +    switch (method_in->method) {
> > +    case NVDIMM_METHOD_DSM:
> > +        nvdimm_dsm_handle(opaque, method_in, dsm_mem_addr);
> > +        break;
> > +    case NVDIMM_METHOD_LSI:
> > +        nvdimm_lsi_handle(method_in->handle, dsm_mem_addr);
> > +        break;
> > +    case NVDIMM_METHOD_LSR:
> > +        nvdimm_lsr_handle(method_in->handle, method_in->args,
> > dsm_mem_addr);
> > +        break;
> > +    case NVDIMM_METHOD_LSW:
> > +        nvdimm_lsw_handle(method_in->handle, method_in->args,
> > dsm_mem_addr);
> > +        break;
> > +    default:
> > +        nvdimm_debug("%s: Unkown method 0x%x\n", __func__,
> > method_in->method);
> > +        break;
> > +    }
> > +
> > +    g_free(method_in);
> >  }
> >  
> > -static const MemoryRegionOps nvdimm_dsm_ops = {
> > -    .read = nvdimm_dsm_read,
> > -    .write = nvdimm_dsm_write,
> > +static const MemoryRegionOps nvdimm_method_ops = {
> > +    .read = nvdimm_method_read,
> > +    .write = nvdimm_method_write,
> >      .endianness = DEVICE_LITTLE_ENDIAN,
> >      .valid = {
> >          .min_access_size = 4,
> > @@ -899,12 +972,12 @@ void nvdimm_init_acpi_state(NVDIMMState
> > *state, MemoryRegion *io,
> >                              FWCfgState *fw_cfg, Object *owner)
> >  {
> >      state->dsm_io = dsm_io;
> > -    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops,
> > state,
> > +    memory_region_init_io(&state->io_mr, owner,
> > &nvdimm_method_ops, state,
> >                            "nvdimm-acpi-io", dsm_io.bit_width >>
> > 3);
> >      memory_region_add_subregion(io, dsm_io.address, &state-
> > >io_mr);
> >  
> >      state->dsm_mem = g_array_new(false, true /* clear */, 1);
> > -    acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
> > +    acpi_data_push(state->dsm_mem, sizeof(NvdimmMthdIn));
> >      fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem-
> > >data,
> >                      state->dsm_mem->len);
> >  
> > @@ -918,13 +991,22 @@ void nvdimm_init_acpi_state(NVDIMMState
> > *state, MemoryRegion *io,
> >  #define NVDIMM_DSM_IOPORT       "NPIO"
> >  
> >  #define NVDIMM_DSM_NOTIFY       "NTFI"
> > +#define NVDIMM_DSM_METHOD       "MTHD"
> >  #define NVDIMM_DSM_HANDLE       "HDLE"
> >  #define NVDIMM_DSM_REVISION     "REVS"
> >  #define NVDIMM_DSM_FUNCTION     "FUNC"
> >  #define NVDIMM_DSM_ARG3         "FARG"
> >  
> > -#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
> > -#define NVDIMM_DSM_OUT_BUF      "ODAT"
> > +#define NVDIMM_DSM_OFFSET       "OFST"
> > +#define NVDIMM_DSM_TRANS_LEN    "TRSL"
> > +#define NVDIMM_DSM_IN_BUFF      "IDAT"
> > +
> > +#define NVDIMM_DSM_OUT_BUF_SIZE     "RLEN"
> > +#define NVDIMM_DSM_OUT_BUF          "ODAT"
> > +#define NVDIMM_DSM_OUT_STATUS       "STUS"
> > +#define NVDIMM_DSM_OUT_LSA_SIZE     "SIZE"
> > +#define NVDIMM_DSM_OUT_MAX_TRANS    "MAXT"
> > +
> >  
> >  #define NVDIMM_DSM_RFIT_STATUS  "RSTA"
> >  
> > @@ -938,7 +1020,6 @@ static void nvdimm_build_common_dsm(Aml *dev,
> >      Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf,
> > *dsm_out_buf_size;
> >      Aml *whilectx, *offset;
> >      uint8_t byte_list[1];
> > -    AmlRegionSpace rs;
> >  
> >      method = aml_method(NVDIMM_COMMON_DSM, 5, AML_SERIALIZED);
> >      uuid = aml_arg(0);
> > @@ -949,37 +1030,15 @@ static void nvdimm_build_common_dsm(Aml
> > *dev,
> >  
> >      aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > dsm_mem));
> >  
> > -    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > -        rs = AML_SYSTEM_IO;
> > -    } else {
> > -        rs = AML_SYSTEM_MEMORY;
> > -    }
> > -
> > -    /* map DSM memory and IO into ACPI namespace. */
> > -    aml_append(method, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > -               aml_int(nvdimm_state->dsm_io.address),
> > -               nvdimm_state->dsm_io.bit_width >> 3));
> >      aml_append(method, aml_operation_region(NVDIMM_DSM_MEMORY,
> > -               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmDsmIn)));
> > -
> > -    /*
> > -     * DSM notifier:
> > -     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and
> > notify QEMU to
> > -     *                    emulate the access.
> > -     *
> > -     * It is the IO port so that accessing them will cause VM-
> > exit, the
> > -     * control will be transferred to QEMU.
> > -     */
> > -    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC,
> > AML_NOLOCK,
> > -                      AML_PRESERVE);
> > -    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > -               nvdimm_state->dsm_io.bit_width));
> > -    aml_append(method, field);
> > +               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmMthdIn)));
> >  
> >      /*
> >       * DSM input:
> >       * NVDIMM_DSM_HANDLE: store device's handle, it's zero if the
> > _DSM call
> >       *                    happens on NVDIMM Root Device.
> > +     * NVDIMM_DSM_METHOD: ACPI method indicator, to distinguish
> > _DSM and
> > +     *                    other ACPI methods.
> >       * NVDIMM_DSM_REVISION: store the Arg1 of _DSM call.
> >       * NVDIMM_DSM_FUNCTION: store the Arg2 of _DSM call.
> >       * NVDIMM_DSM_ARG3: store the Arg3 of _DSM call which is a
> > Package
> > @@ -991,13 +1050,16 @@ static void nvdimm_build_common_dsm(Aml
> > *dev,
> >      field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > AML_NOLOCK,
> >                        AML_PRESERVE);
> >      aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > -               sizeof(typeof_field(NvdimmDsmIn, handle)) *
> > BITS_PER_BYTE));
> > +               sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > +               sizeof(typeof_field(NvdimmMthdIn, method)) *
> > BITS_PER_BYTE));
> >      aml_append(field, aml_named_field(NVDIMM_DSM_REVISION,
> >                 sizeof(typeof_field(NvdimmDsmIn, revision)) *
> > BITS_PER_BYTE));
> >      aml_append(field, aml_named_field(NVDIMM_DSM_FUNCTION,
> >                 sizeof(typeof_field(NvdimmDsmIn, function)) *
> > BITS_PER_BYTE));
> >      aml_append(field, aml_named_field(NVDIMM_DSM_ARG3,
> > -         (sizeof(NvdimmDsmIn) - offsetof(NvdimmDsmIn, arg3)) *
> > BITS_PER_BYTE));
> > +         (sizeof(NvdimmMthdIn) - offsetof(NvdimmMthdIn, args) -
> > +          offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
> >      aml_append(method, field);
> >  
> >      /*
> > @@ -1065,6 +1127,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
> >       * it reserves 0 for root device and is the handle for NVDIMM
> > devices.
> >       * See the comments in nvdimm_slot_to_handle().
> >       */
> > +    aml_append(method, aml_store(aml_int(0),
> > aml_name(NVDIMM_DSM_METHOD)));
> >      aml_append(method, aml_store(handle,
> > aml_name(NVDIMM_DSM_HANDLE)));
> >      aml_append(method, aml_store(aml_arg(1),
> > aml_name(NVDIMM_DSM_REVISION)));
> >      aml_append(method, aml_store(function,
> > aml_name(NVDIMM_DSM_FUNCTION)));
> > @@ -1250,6 +1313,7 @@ static void nvdimm_build_fit(Aml *dev)
> >  static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t
> > ram_slots)
> >  {
> >      uint32_t slot;
> > +    Aml *method, *pkg, *field;
> >  
> >      for (slot = 0; slot < ram_slots; slot++) {
> >          uint32_t handle = nvdimm_slot_to_handle(slot);
> > @@ -1266,6 +1330,155 @@ static void nvdimm_build_nvdimm_devices(Aml
> > *root_dev, uint32_t ram_slots)
> >           * table NFIT or _FIT.
> >           */
> >          aml_append(nvdimm_dev, aml_name_decl("_ADR",
> > aml_int(handle)));
> > +        aml_append(nvdimm_dev,
> > aml_operation_region(NVDIMM_DSM_MEMORY,
> > +                   AML_SYSTEM_MEMORY,
> > aml_name(NVDIMM_ACPI_MEM_ADDR),
> > +                   sizeof(NvdimmMthdIn)));
> > +
> > +        /* ACPI 6.4: 6.5.10 NVDIMM Label Methods, _LS{I,R,W} */
> > +
> > +        /* Begin of _LSI Block */
> > +        method = aml_method("_LSI", 0, AML_SERIALIZED);
> > +        /* _LSI Input field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > BITS_PER_BYTE));
> > +        aml_append(method, field);
> > +
> > +        /* _LSI Output field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > len)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > +                   func_ret_status)) * BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_LSA_SIZE,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > label_size)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field,
> > aml_named_field(NVDIMM_DSM_OUT_MAX_TRANS,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > max_xfer)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(method, field);
> > +
> > +        aml_append(method, aml_store(aml_int(handle),
> > +                                      aml_name(NVDIMM_DSM_HANDLE))
> > );
> > +        aml_append(method, aml_store(aml_int(0x100),
> > +                                      aml_name(NVDIMM_DSM_METHOD))
> > );
> > +        aml_append(method,
> > aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > +                                      aml_name(NVDIMM_DSM_NOTIFY))
> > );
> > +
> > +        pkg = aml_package(3);
> > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_LSA_SIZE));
> > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_MAX_TRANS));
> > +
> > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > +
> > +        aml_append(method, aml_return(aml_name("RPKG")));
> > +        aml_append(nvdimm_dev, method); /* End of _LSI Block */
> > +
> > +
> > +        /* Begin of _LSR Block */
> > +        method = aml_method("_LSR", 2, AML_SERIALIZED);
> > +
> > +        /* _LSR Input field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, offset)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, length)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(method, field);
> > +
> > +        /* _LSR Output field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut, len)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
> > +                   func_ret_status)) * BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF,
> > +                   (NVDIMM_DSM_MEMORY_SIZE -
> > +                    offsetof(NvdimmFuncGetLabelDataOut, out_buf)) *
> > +                    BITS_PER_BYTE));
> > +        aml_append(method, field);
> > +
> > +        aml_append(method, aml_store(aml_int(handle),
> > +                                      aml_name(NVDIMM_DSM_HANDLE)));
> > +        aml_append(method, aml_store(aml_int(0x101),
> > +                                      aml_name(NVDIMM_DSM_METHOD)));
> > +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> > +        aml_append(method, aml_store(aml_arg(1),
> > +                                      aml_name(NVDIMM_DSM_TRANS_LEN)));
> > +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> > +
> > +        aml_append(method, aml_store(aml_shiftleft(aml_arg(1), aml_int(3)),
> > +                                         aml_local(1)));
> > +        aml_append(method, aml_create_field(aml_name(NVDIMM_DSM_OUT_BUF),
> > +                   aml_int(0), aml_local(1), "OBUF"));
> > +
> > +        pkg = aml_package(2);
> > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > +        aml_append(pkg, aml_name("OBUF"));
> > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > +
> > +        aml_append(method, aml_return(aml_name("RPKG")));
> > +        aml_append(nvdimm_dev, method); /* End of _LSR Block */
> > +
> > +        /* Begin of _LSW Block */
> > +        method = aml_method("_LSW", 3, AML_SERIALIZED);
> > +        /* _LSW Input field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > +                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, offset)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, length)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_IN_BUFF, 32640));
> > +        aml_append(method, field);
> > +
> > +        /* _LSW Output field */
> > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > +                          AML_PRESERVE);
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut, len)) *
> > +                   BITS_PER_BYTE));
> > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
> > +                   func_ret_status)) * BITS_PER_BYTE));
> > +        aml_append(method, field);
> > +
> > +        aml_append(method, aml_store(aml_int(handle), aml_name(NVDIMM_DSM_HANDLE)));
> > +        aml_append(method, aml_store(aml_int(0x102), aml_name(NVDIMM_DSM_METHOD)));
> > +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> > +        aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_TRANS_LEN)));
> > +        aml_append(method, aml_store(aml_arg(2), aml_name(NVDIMM_DSM_IN_BUFF)));
> > +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> > +
> > +        aml_append(method, aml_return(aml_name(NVDIMM_DSM_OUT_STATUS)));
> > +        aml_append(nvdimm_dev, method); /* End of _LSW Block */
> >  
> >          nvdimm_build_device_dsm(nvdimm_dev, handle);
> >          aml_append(root_dev, nvdimm_dev);
> > @@ -1278,7 +1491,8 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> >                                uint32_t ram_slots, const char *oem_id)
> >  {
> >      int mem_addr_offset;
> > -    Aml *ssdt, *sb_scope, *dev;
> > +    Aml *ssdt, *sb_scope, *dev, *field;
> > +    AmlRegionSpace rs;
> >      AcpiTable table = { .sig = "SSDT", .rev = 1,
> >                          .oem_id = oem_id, .oem_table_id = "NVDIMM" };
> >  
> > @@ -1286,6 +1500,9 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> >  
> >      acpi_table_begin(&table, table_data);
> >      ssdt = init_aml_allocator();
> > +
> > +    mem_addr_offset = build_append_named_dword(table_data,
> > +                                               NVDIMM_ACPI_MEM_ADDR);
> >      sb_scope = aml_scope("\\_SB");
> >  
> >      dev = aml_device("NVDR");
> > @@ -1303,6 +1520,31 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> >       */
> >      aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
> >  
> > +    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > +        rs = AML_SYSTEM_IO;
> > +    } else {
> > +        rs = AML_SYSTEM_MEMORY;
> > +    }
> > +
> > +    /* map DSM memory and IO into ACPI namespace. */
> > +    aml_append(dev, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > +               aml_int(nvdimm_state->dsm_io.address),
> > +               nvdimm_state->dsm_io.bit_width >> 3));
> > +
> > +    /*
> > +     * DSM notifier:
> > +     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
> > +     *                    emulate the access.
> > +     *
> > +     * It is the IO port so that accessing them will cause VM-exit, the
> > +     * control will be transferred to QEMU.
> > +     */
> > +    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
> > +                      AML_PRESERVE);
> > +    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > +               nvdimm_state->dsm_io.bit_width));
> > +    aml_append(dev, field);
> > +
> >      nvdimm_build_common_dsm(dev, nvdimm_state);
> >  
> >      /* 0 is reserved for root device. */
> > @@ -1316,12 +1558,10 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> >  
> >      /* copy AML table into ACPI tables blob and patch header there */
> >      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> > -    mem_addr_offset = build_append_named_dword(table_data,
> > -                                               NVDIMM_ACPI_MEM_ADDR);
> >  
> >      bios_linker_loader_alloc(linker,
> >                               NVDIMM_DSM_MEM_FILE, nvdimm_state->dsm_mem,
> > -                             sizeof(NvdimmDsmIn), false /* high memory */);
> > +                             sizeof(NvdimmMthdIn), false /* high memory */);
> >      bios_linker_loader_add_pointer(linker,
> >          ACPI_BUILD_TABLE_FILE, mem_addr_offset, sizeof(uint32_t),
> >          NVDIMM_DSM_MEM_FILE, 0);
> > diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> > index cf8f59be44..0206b6125b 100644
> > --- a/include/hw/mem/nvdimm.h
> > +++ b/include/hw/mem/nvdimm.h
> > @@ -37,6 +37,12 @@
> >          }                                                     \
> >      } while (0)
> >  
> > +/* NVDIMM ACPI Methods */
> > +#define NVDIMM_METHOD_DSM   0
> > +#define NVDIMM_METHOD_LSI   0x100
> > +#define NVDIMM_METHOD_LSR   0x101
> > +#define NVDIMM_METHOD_LSW   0x102
> > +
> >  /*
> >   * The minimum label data size is required by NVDIMM Namespace
> >   * specification, see the chapter 2 Namespaces:
> 
> 




* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-07-01  9:23     ` Robert Hoo
@ 2022-07-19  2:46       ` Robert Hoo
  2022-07-19 11:32         ` Michael S. Tsirkin
  2022-07-21  8:58       ` Igor Mammedov
  1 sibling, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-07-19  2:46 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

Ping...
On Fri, 2022-07-01 at 17:23 +0800, Robert Hoo wrote:
> On Thu, 2022-06-16 at 14:32 +0200, Igor Mammedov wrote:
> > On Mon, 30 May 2022 11:40:45 +0800
> > Robert Hoo <robert.hu@linux.intel.com> wrote:
> > 
> > > The recent ACPI spec [1] has defined the NVDIMM Label Methods _LS{I,R,W},
> > > which deprecate the corresponding _DSM functions defined by the PMEM
> > > _DSM Interface spec [2].
> > > 
> > > In this implementation, we do 2 things:
> > > 1. Generalize the QEMU <-> ACPI BIOS NVDIMM interface and wrap it with
> > > an ACPI method dispatch, of which _DSM is one branch. This also paves
> > > the way for adding other ACPI methods for NVDIMM.
> > > 2. Add the _LS{I,R,W} methods to each NVDIMM device in the SSDT.
> > > The ASL form of the SSDT changes can be found in the next
> > > tests/qtest/bios-tables-test commit message.
> > > 
> > > [1] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
> > > https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
> > > [2] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated
> > > Functions
> > > https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> > > 
> > > Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> > > Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> > > ---
> > >  hw/acpi/nvdimm.c        | 424 +++++++++++++++++++++++++++++++---------
> > 
> > This patch is too large and doing too many things to be reviewable.
> > It needs to be split into smaller distinct chunks.
> > (however hold your horses and read on)
> > 
> > The patch is too intrusive, and my hunch is that it breaks the
> > ABI and needs a bunch of compat knobs to work properly, which
> > I'd like to avoid unless there is no other way around
> > the problem.
> 
> Is the ABI you mentioned here the "struct NvdimmMthdIn{}" stuff?
> And do the compat knobs refer to the related functions' input/output
> params?
> 
> My thought is that eventually, sooner or later, more ACPI methods will
> be implemented as requested, although for now we can play the trick of
> wrapping new methods over the pipe of the old _DSM implementation.
> Though this changes the existing struct NvdimmDsmIn {} a little, it
> paves the way for the future; and actually the change is more an
> extension or generalization than a fundamental change to the framework.
> 
> In short, my point is that the change/generalization/extension is
> inevitable, even if not done now.
> > 
> > I was skeptical about this approach during the v1 review, and
> > now I'm pretty much sure it's over-engineered: we can
> > just repack the data we receive from the existing label _DSM functions
> > to provide _LS{I,R,W}, as was suggested in v1.
> > That would be much simpler, affect only the AML side without
> > complicating the ABI and without any compat cruft, and would work
> > with ping-pong migration without any issues.
> 
> Ostensibly it may look simpler, but actually it is not, I think. The
> AML "common pipe" NCAL() is already complex: it packs all the _DSM and
> NFIT() function logic there, and packing new stuff in/through it will
> be bug-prone.
> Though this time we could avoid touching it, since the new ACPI methods
> deprecating the old _DSM functions are functionally almost the same,
> what about next time? Are we always going to pack new method logic
> into NCAL()?
> My point is that we should implement each new method as itself; of
> course, as a general programming rule, we can/should abstract common
> routines, but not pack them all into one large function.
> > 
> > 
> > >  include/hw/mem/nvdimm.h |   6 +
> > >  2 files changed, 338 insertions(+), 92 deletions(-)
> > > 
> > > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > > index 59b42afcf1..50ee85866b 100644
> > > --- a/hw/acpi/nvdimm.c
> > > +++ b/hw/acpi/nvdimm.c
> > > @@ -416,17 +416,22 @@ static void nvdimm_build_nfit(NVDIMMState *state, GArray *table_offsets,
> > >  
> > >  #define NVDIMM_DSM_MEMORY_SIZE      4096
> > >  
> > > -struct NvdimmDsmIn {
> > > +struct NvdimmMthdIn {
> > >      uint32_t handle;
> > > +    uint32_t method;
> > > +    uint8_t  args[4088];
> > > +} QEMU_PACKED;
> > > +typedef struct NvdimmMthdIn NvdimmMthdIn;
> > > +struct NvdimmDsmIn {
> > >      uint32_t revision;
> > >      uint32_t function;
> > >      /* the remaining size in the page is used by arg3. */
> > >      union {
> > > -        uint8_t arg3[4084];
> > > +        uint8_t arg3[4080];
> > >      };
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmDsmIn NvdimmDsmIn;
> > > -QEMU_BUILD_BUG_ON(sizeof(NvdimmDsmIn) != NVDIMM_DSM_MEMORY_SIZE);
> > > +QEMU_BUILD_BUG_ON(sizeof(NvdimmMthdIn) != NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmDsmOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -470,7 +475,8 @@ struct NvdimmFuncGetLabelDataIn {
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncGetLabelDataIn NvdimmFuncGetLabelDataIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncGetLabelDataIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncGetLabelDataOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -488,14 +494,16 @@ struct NvdimmFuncSetLabelDataIn {
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncReadFITIn {
> > >      uint32_t offset; /* the offset into FIT buffer. */
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncReadFITOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -636,7 +644,8 @@ static uint32_t nvdimm_get_max_xfer_label_size(void)
> > >       * the max data ACPI can write one time which is transferred by
> > >       * 'Set Namespace Label Data' function.
> > >       */
> > > -    max_set_size = dsm_memory_size - offsetof(NvdimmDsmIn, arg3) -
> > > +    max_set_size = dsm_memory_size - offsetof(NvdimmMthdIn, args) -
> > > +                   offsetof(NvdimmDsmIn, arg3) -
> > >                     sizeof(NvdimmFuncSetLabelDataIn);
> > >  
> > >      return MIN(max_get_size, max_set_size);
> > > @@ -697,16 +706,15 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
> > >  /*
> > >   * DSM Spec Rev1 4.5 Get Namespace Label Data (Function Index 5).
> > >   */
> > > -static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > > -                                      hwaddr dsm_mem_addr)
> > > +static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> > > +                                    NvdimmFuncGetLabelDataIn *get_label_data,
> > > +                                    hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > > -    NvdimmFuncGetLabelDataIn *get_label_data;
> > >      NvdimmFuncGetLabelDataOut *get_label_data_out;
> > >      uint32_t status;
> > >      int size;
> > >  
> > > -    get_label_data = (NvdimmFuncGetLabelDataIn *)in->arg3;
> > >      get_label_data->offset = le32_to_cpu(get_label_data->offset);
> > >      get_label_data->length = le32_to_cpu(get_label_data->length);
> > >  
> > > @@ -737,15 +745,13 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > >  /*
> > >   * DSM Spec Rev1 4.6 Set Namespace Label Data (Function Index 6).
> > >   */
> > > -static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > > +static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> > > +                                      NvdimmFuncSetLabelDataIn *set_label_data,
> > >                                        hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > > -    NvdimmFuncSetLabelDataIn *set_label_data;
> > >      uint32_t status;
> > >  
> > > -    set_label_data = (NvdimmFuncSetLabelDataIn *)in->arg3;
> > > -
> > >      set_label_data->offset = le32_to_cpu(set_label_data->offset);
> > >      set_label_data->length = le32_to_cpu(set_label_data->length);
> > >  
> > > @@ -760,19 +766,21 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > >      }
> > >  
> > >      assert(offsetof(NvdimmDsmIn, arg3) + sizeof(*set_label_data) +
> > > -                    set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE);
> > > +           set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE -
> > > +           offsetof(NvdimmMthdIn, args));
> > >  
> > >      nvc->write_label_data(nvdimm, set_label_data->in_buf,
> > >                            set_label_data->length, set_label_data->offset);
> > >  
> > >      nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
> > >  }
> > >  
> > > -static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > > +static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
> > > +                                    hwaddr dsm_mem_addr)
> > >  {
> > > -    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(in->handle);
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > >  
> > >      /* See the comments in nvdimm_dsm_root(). */
> > > -    if (!in->function) {
> > > +    if (!dsm_in->function) {
> > >          uint32_t supported_func = 0;
> > >  
> > >          if (nvdimm && nvdimm->label_size) {
> > > @@ -794,7 +802,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >      }
> > >  
> > >      /* Encode DSM function according to DSM Spec Rev1. */
> > > -    switch (in->function) {
> > > +    switch (dsm_in->function) {
> > >      case 4 /* Get Namespace Label Size */:
> > >          if (nvdimm->label_size) {
> > >              nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > > @@ -803,13 +811,17 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >          break;
> > >      case 5 /* Get Namespace Label Data */:
> > >          if (nvdimm->label_size) {
> > > -            nvdimm_dsm_get_label_data(nvdimm, in, dsm_mem_addr);
> > > +            nvdimm_dsm_get_label_data(nvdimm,
> > > +                                      (NvdimmFuncGetLabelDataIn *)dsm_in->arg3,
> > > +                                      dsm_mem_addr);
> > >              return;
> > >          }
> > >          break;
> > >      case 0x6 /* Set Namespace Label Data */:
> > >          if (nvdimm->label_size) {
> > > -            nvdimm_dsm_set_label_data(nvdimm, in, dsm_mem_addr);
> > > +            nvdimm_dsm_set_label_data(nvdimm,
> > > +                        (NvdimmFuncSetLabelDataIn *)dsm_in->arg3,
> > > +                        dsm_mem_addr);
> > >              return;
> > >          }
> > >          break;
> > > @@ -819,67 +831,128 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >  }
> > >  
> > >  static uint64_t
> > > -nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> > > +nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
> > >  {
> > > -    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
> > > +    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
> > >      return 0;
> > >  }
> > >  
> > >  static void
> > > -nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> > > +nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMState *state = opaque;
> > > -    NvdimmDsmIn *in;
> > > -    hwaddr dsm_mem_addr = val;
> > > +    NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
> > >  
> > >      nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
> > >  
> > > -    /*
> > > -     * The DSM memory is mapped to guest address space so an evil guest
> > > -     * can change its content while we are doing DSM emulation. Avoid
> > > -     * this by copying DSM memory to QEMU local memory.
> > > -     */
> > > -    in = g_new(NvdimmDsmIn, 1);
> > > -    cpu_physical_memory_read(dsm_mem_addr, in, sizeof(*in));
> > > -
> > > -    in->revision = le32_to_cpu(in->revision);
> > > -    in->function = le32_to_cpu(in->function);
> > > -    in->handle = le32_to_cpu(in->handle);
> > > -
> > > -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
> > > -                 in->handle, in->function);
> > > +    dsm_in->revision = le32_to_cpu(dsm_in->revision);
> > > +    dsm_in->function = le32_to_cpu(dsm_in->function);
> > >  
> > > +    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > > +                 dsm_in->revision, method_in->handle, dsm_in->function);
> > > 
> > >      /*
> > >       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> > >       * Intel® OptanePersistent Memory Module DSM Interface, Revision 2.0
> > >       */
> > > -    if (in->revision != 0x1 && in->revision != 0x2) {
> > > +    if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
> > >          nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
> > > -                     in->revision);
> > > +                     dsm_in->revision);
> > >          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
> > > -        goto exit;
> > > +        return;
> > >      }
> > >  
> > > -    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > > -        nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
> > > -        goto exit;
> > > +    if (method_in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > > +        nvdimm_dsm_handle_reserved_root_method(state, dsm_in, dsm_mem_addr);
> > > +        return;
> > >      }
> > >  
> > >       /* Handle 0 is reserved for NVDIMM Root Device. */
> > > -    if (!in->handle) {
> > > -        nvdimm_dsm_root(in, dsm_mem_addr);
> > > -        goto exit;
> > > +    if (!method_in->handle) {
> > > +        nvdimm_dsm_root(dsm_in, dsm_mem_addr);
> > > +        return;
> > >      }
> > >  
> > > -    nvdimm_dsm_device(in, dsm_mem_addr);
> > > +    nvdimm_dsm_device(method_in->handle, dsm_in, dsm_mem_addr);
> > > +}
> > >  
> > > -exit:
> > > -    g_free(in);
> > > +static void nvdimm_lsi_handle(uint32_t nv_handle, hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > > +    }
> > > +
> > > +    return;
> > > +}
> > > +
> > > +static void nvdimm_lsr_handle(uint32_t nv_handle,
> > > +                                    void *data,
> > > +                                    hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +    NvdimmFuncGetLabelDataIn *get_label_data = data;
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_get_label_data(nvdimm, get_label_data, dsm_mem_addr);
> > > +    }
> > > +    return;
> > > +}
> > > +
> > > +static void nvdimm_lsw_handle(uint32_t nv_handle,
> > > +                                    void *data,
> > > +                                    hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +    NvdimmFuncSetLabelDataIn *set_label_data = data;
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_set_label_data(nvdimm, set_label_data, dsm_mem_addr);
> > > +    }
> > > +    return;
> > > +}
> > > +
> > > +static void
> > > +nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> > > +{
> > > +    NvdimmMthdIn *method_in;
> > > +    hwaddr dsm_mem_addr = val;
> > > +
> > > +    /*
> > > +     * The DSM memory is mapped to guest address space so an evil guest
> > > +     * can change its content while we are doing DSM emulation. Avoid
> > > +     * this by copying DSM memory to QEMU local memory.
> > > +     */
> > > +    method_in = g_new(NvdimmMthdIn, 1);
> > > +    cpu_physical_memory_read(dsm_mem_addr, method_in, sizeof(*method_in));
> > > +
> > > +    method_in->handle = le32_to_cpu(method_in->handle);
> > > +    method_in->method = le32_to_cpu(method_in->method);
> > > +
> > > +    switch (method_in->method) {
> > > +    case NVDIMM_METHOD_DSM:
> > > +        nvdimm_dsm_handle(opaque, method_in, dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSI:
> > > +        nvdimm_lsi_handle(method_in->handle, dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSR:
> > > +        nvdimm_lsr_handle(method_in->handle, method_in->args, dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSW:
> > > +        nvdimm_lsw_handle(method_in->handle, method_in->args, dsm_mem_addr);
> > > +        break;
> > > +    default:
> > > +        nvdimm_debug("%s: Unkown method 0x%x\n", __func__, method_in->method);
> > > +        break;
> > > +    }
> > > +
> > > +    g_free(method_in);
> > >  }
> > >  
> > > -static const MemoryRegionOps nvdimm_dsm_ops = {
> > > -    .read = nvdimm_dsm_read,
> > > -    .write = nvdimm_dsm_write,
> > > +static const MemoryRegionOps nvdimm_method_ops = {
> > > +    .read = nvdimm_method_read,
> > > +    .write = nvdimm_method_write,
> > >      .endianness = DEVICE_LITTLE_ENDIAN,
> > >      .valid = {
> > >          .min_access_size = 4,
> > > @@ -899,12 +972,12 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
> > >                              FWCfgState *fw_cfg, Object *owner)
> > >  {
> > >      state->dsm_io = dsm_io;
> > > -    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops, state,
> > > +    memory_region_init_io(&state->io_mr, owner, &nvdimm_method_ops, state,
> > >                            "nvdimm-acpi-io", dsm_io.bit_width >> 3);
> > >      memory_region_add_subregion(io, dsm_io.address, &state->io_mr);
> > >  
> > >      state->dsm_mem = g_array_new(false, true /* clear */, 1);
> > > -    acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
> > > +    acpi_data_push(state->dsm_mem, sizeof(NvdimmMthdIn));
> > >      fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
> > >                      state->dsm_mem->len);
> > >  
> > > @@ -918,13 +991,22 @@ void nvdimm_init_acpi_state(NVDIMMState *state, MemoryRegion *io,
> > >  #define NVDIMM_DSM_IOPORT       "NPIO"
> > >  
> > >  #define NVDIMM_DSM_NOTIFY       "NTFI"
> > > +#define NVDIMM_DSM_METHOD       "MTHD"
> > >  #define NVDIMM_DSM_HANDLE       "HDLE"
> > >  #define NVDIMM_DSM_REVISION     "REVS"
> > >  #define NVDIMM_DSM_FUNCTION     "FUNC"
> > >  #define NVDIMM_DSM_ARG3         "FARG"
> > >  
> > > -#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
> > > -#define NVDIMM_DSM_OUT_BUF      "ODAT"
> > > +#define NVDIMM_DSM_OFFSET       "OFST"
> > > +#define NVDIMM_DSM_TRANS_LEN    "TRSL"
> > > +#define NVDIMM_DSM_IN_BUFF      "IDAT"
> > > +
> > > +#define NVDIMM_DSM_OUT_BUF_SIZE     "RLEN"
> > > +#define NVDIMM_DSM_OUT_BUF          "ODAT"
> > > +#define NVDIMM_DSM_OUT_STATUS       "STUS"
> > > +#define NVDIMM_DSM_OUT_LSA_SIZE     "SIZE"
> > > +#define NVDIMM_DSM_OUT_MAX_TRANS    "MAXT"
> > > +
> > >  
> > >  #define NVDIMM_DSM_RFIT_STATUS  "RSTA"
> > >  
> > > @@ -938,7 +1020,6 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >      Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
> > >      Aml *whilectx, *offset;
> > >      uint8_t byte_list[1];
> > > -    AmlRegionSpace rs;
> > >  
> > >      method = aml_method(NVDIMM_COMMON_DSM, 5, AML_SERIALIZED);
> > >      uuid = aml_arg(0);
> > > @@ -949,37 +1030,15 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >  
> > >      aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR), dsm_mem));
> > >  
> > > -    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > > -        rs = AML_SYSTEM_IO;
> > > -    } else {
> > > -        rs = AML_SYSTEM_MEMORY;
> > > -    }
> > > -
> > > -    /* map DSM memory and IO into ACPI namespace. */
> > > -    aml_append(method, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > > -               aml_int(nvdimm_state->dsm_io.address),
> > > -               nvdimm_state->dsm_io.bit_width >> 3));
> > >      aml_append(method, aml_operation_region(NVDIMM_DSM_MEMORY,
> > > -               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmDsmIn)));
> > > -
> > > -    /*
> > > -     * DSM notifier:
> > > -     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
> > > -     *                    emulate the access.
> > > -     *
> > > -     * It is the IO port so that accessing them will cause VM-exit, the
> > > -     * control will be transferred to QEMU.
> > > -     */
> > > -    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
> > > -                      AML_PRESERVE);
> > > -    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > > -               nvdimm_state->dsm_io.bit_width));
> > > -    aml_append(method, field);
> > > +               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmMthdIn)));
> > >  
> > >      /*
> > >       * DSM input:
> > >       * NVDIMM_DSM_HANDLE: store device's handle, it's zero if the _DSM call
> > >       *                    happens on NVDIMM Root Device.
> > > +     * NVDIMM_DSM_METHOD: ACPI method indicator, to distinguish _DSM and
> > > +     *                    other ACPI methods.
> > >       * NVDIMM_DSM_REVISION: store the Arg1 of _DSM call.
> > >       * NVDIMM_DSM_FUNCTION: store the Arg2 of _DSM call.
> > >       * NVDIMM_DSM_ARG3: store the Arg3 of _DSM call which is a Package
> > > @@ -991,13 +1050,16 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >      field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > >                        AML_PRESERVE);
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > -               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
> > > +               sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> > > +    aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +               sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_REVISION,
> > >                 sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_FUNCTION,
> > >                 sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_ARG3,
> > > -         (sizeof(NvdimmDsmIn) - offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
> > > +         (sizeof(NvdimmMthdIn) - offsetof(NvdimmMthdIn, args) -
> > > +          offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
> > >      aml_append(method, field);
> > >  
> > >      /*
> > > @@ -1065,6 +1127,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >       * it reserves 0 for root device and is the handle for NVDIMM devices.
> > >       * See the comments in nvdimm_slot_to_handle().
> > >       */
> > > +    aml_append(method, aml_store(aml_int(0), aml_name(NVDIMM_DSM_METHOD)));
> > >      aml_append(method, aml_store(handle, aml_name(NVDIMM_DSM_HANDLE)));
> > >      aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_REVISION)));
> > >      aml_append(method, aml_store(function, aml_name(NVDIMM_DSM_FUNCTION)));
> > > @@ -1250,6 +1313,7 @@ static void nvdimm_build_fit(Aml *dev)
> > >  static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
> > >  {
> > >      uint32_t slot;
> > > +    Aml *method, *pkg, *field;
> > >  
> > >      for (slot = 0; slot < ram_slots; slot++) {
> > >          uint32_t handle = nvdimm_slot_to_handle(slot);
> > > @@ -1266,6 +1330,155 @@ static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
> > >           * table NFIT or _FIT.
> > >           */
> > >          aml_append(nvdimm_dev, aml_name_decl("_ADR", aml_int(handle)));
> > > +        aml_append(nvdimm_dev, aml_operation_region(NVDIMM_DSM_MEMORY,
> > > +                   AML_SYSTEM_MEMORY, aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                   sizeof(NvdimmMthdIn)));
> > > +
> > > +        /* ACPI 6.4: 6.5.10 NVDIMM Label Methods, _LS{I,R,W} */
> > > +
> > > +        /* Begin of _LSI Block */
> > > +        method = aml_method("_LSI", 0, AML_SERIALIZED);
> > > +        /* _LSI Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSI Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_LSA_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, label_size)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_MAX_TRANS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut, max_xfer)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle),
> > > +                                      aml_name(NVDIMM_DSM_HANDLE)));
> > > +        aml_append(method, aml_store(aml_int(0x100),
> > > +                                      aml_name(NVDIMM_DSM_METHOD)));
> > > +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> > > +
> > > +        pkg = aml_package(3);
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_LSA_SIZE));
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_MAX_TRANS));
> > > +
> > > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > > +
> > > +        aml_append(method, aml_return(aml_name("RPKG")));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSI Block */
> > > +
> > > +
> > > +        /* Begin of _LSR Block */
> > > +        method = aml_method("_LSR", 2, AML_SERIALIZED);
> > > +
> > > +        /* _LSR Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, offset)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn, length)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSR Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut, len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF,
> > > +                   (NVDIMM_DSM_MEMORY_SIZE -
> > > +                    offsetof(NvdimmFuncGetLabelDataOut, out_buf)) *
> > > +                    BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle),
> > > +                                      aml_name(NVDIMM_DSM_HANDLE)));
> > > +        aml_append(method, aml_store(aml_int(0x101),
> > > +                                      aml_name(NVDIMM_DSM_METHOD)));
> > > +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> > > +        aml_append(method, aml_store(aml_arg(1),
> > > +                                      aml_name(NVDIMM_DSM_TRANS_LEN)));
> > > +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> > > +
> > > +        aml_append(method, aml_store(aml_shiftleft(aml_arg(1), aml_int(3)),
> > > +                                         aml_local(1)));
> > > +        aml_append(method, aml_create_field(aml_name(NVDIMM_DSM_OUT_BUF),
> > > +                   aml_int(0), aml_local(1), "OBUF"));
> > > +
> > > +        pkg = aml_package(2);
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > > +        aml_append(pkg, aml_name("OBUF"));
> > > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > > +
> > > +        aml_append(method, aml_return(aml_name("RPKG")));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSR Block */
> > > +
> > > +        /* Begin of _LSW Block */
> > > +        method = aml_method("_LSW", 3, AML_SERIALIZED);
> > > +        /* _LSW Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, offset)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn, length)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_IN_BUFF, 32640));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSW Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC, AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut, len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle), aml_name(NVDIMM_DSM_HANDLE)));
> > > +        aml_append(method, aml_store(aml_int(0x102), aml_name(NVDIMM_DSM_METHOD)));
> > > +        aml_append(method, aml_store(aml_arg(0), aml_name(NVDIMM_DSM_OFFSET)));
> > > +        aml_append(method, aml_store(aml_arg(1), aml_name(NVDIMM_DSM_TRANS_LEN)));
> > > +        aml_append(method, aml_store(aml_arg(2), aml_name(NVDIMM_DSM_IN_BUFF)));
> > > +        aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY)));
> > > +
> > > +        aml_append(method, aml_return(aml_name(NVDIMM_DSM_OUT_STATUS)));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSW Block */
> > >  
> > >          nvdimm_build_device_dsm(nvdimm_dev, handle);
> > >          aml_append(root_dev, nvdimm_dev);
> > > @@ -1278,7 +1491,8 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> > >                                uint32_t ram_slots, const char *oem_id)
> > >  {
> > >      int mem_addr_offset;
> > > -    Aml *ssdt, *sb_scope, *dev;
> > > +    Aml *ssdt, *sb_scope, *dev, *field;
> > > +    AmlRegionSpace rs;
> > >      AcpiTable table = { .sig = "SSDT", .rev = 1,
> > >                          .oem_id = oem_id, .oem_table_id = "NVDIMM" };
> > >  
> > > @@ -1286,6 +1500,9 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> > >  
> > >      acpi_table_begin(&table, table_data);
> > >      ssdt = init_aml_allocator();
> > > +
> > > +    mem_addr_offset = build_append_named_dword(table_data,
> > > +                                               NVDIMM_ACPI_MEM_ADDR);
> > >      sb_scope = aml_scope("\\_SB");
> > >  
> > >      dev = aml_device("NVDR");
> > > @@ -1303,6 +1520,31 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> > >       */
> > >      aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
> > >  
> > > +    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > > +        rs = AML_SYSTEM_IO;
> > > +    } else {
> > > +        rs = AML_SYSTEM_MEMORY;
> > > +    }
> > > +
> > > +    /* map DSM memory and IO into ACPI namespace. */
> > > +    aml_append(dev, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > > +               aml_int(nvdimm_state->dsm_io.address),
> > > +               nvdimm_state->dsm_io.bit_width >> 3));
> > > +
> > > +    /*
> > > +     * DSM notifier:
> > > +     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and notify QEMU to
> > > +     *                    emulate the access.
> > > +     *
> > > +     * It is the IO port so that accessing them will cause VM-exit, the
> > > +     * control will be transferred to QEMU.
> > > +     */
> > > +    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC, AML_NOLOCK,
> > > +                      AML_PRESERVE);
> > > +    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > > +               nvdimm_state->dsm_io.bit_width));
> > > +    aml_append(dev, field);
> > > +
> > >      nvdimm_build_common_dsm(dev, nvdimm_state);
> > >  
> > >      /* 0 is reserved for root device. */
> > > @@ -1316,12 +1558,10 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
> > >  
> > >      /* copy AML table into ACPI tables blob and patch header there */
> > >      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> > > -    mem_addr_offset = build_append_named_dword(table_data,
> > > -                                               NVDIMM_ACPI_MEM_ADDR);
> > >  
> > >      bios_linker_loader_alloc(linker,
> > >                               NVDIMM_DSM_MEM_FILE, nvdimm_state->dsm_mem,
> > > -                             sizeof(NvdimmDsmIn), false /* high memory */);
> > > +                             sizeof(NvdimmMthdIn), false /* high memory */);
> > >      bios_linker_loader_add_pointer(linker,
> > >          ACPI_BUILD_TABLE_FILE, mem_addr_offset, sizeof(uint32_t),
> > >          NVDIMM_DSM_MEM_FILE, 0);
> > > diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> > > index cf8f59be44..0206b6125b 100644
> > > --- a/include/hw/mem/nvdimm.h
> > > +++ b/include/hw/mem/nvdimm.h
> > > @@ -37,6 +37,12 @@
> > >          }                                                     \
> > >      } while (0)
> > >  
> > > +/* NVDIMM ACPI Methods */
> > > +#define NVDIMM_METHOD_DSM   0
> > > +#define NVDIMM_METHOD_LSI   0x100
> > > +#define NVDIMM_METHOD_LSR   0x101
> > > +#define NVDIMM_METHOD_LSW   0x102
> > > +
> > >  /*
> > >   * The minimum label data size is required by NVDIMM Namespace
> > >   * specification, see the chapter 2 Namespaces:
> > 
> > 



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-07-19  2:46       ` Robert Hoo
@ 2022-07-19 11:32         ` Michael S. Tsirkin
  0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2022-07-19 11:32 UTC (permalink / raw)
  To: Robert Hoo
  Cc: Igor Mammedov, xiaoguangrong.eric, ani, dan.j.williams,
	jingqi.liu, qemu-devel, robert.hu

On Tue, Jul 19, 2022 at 10:46:38AM +0800, Robert Hoo wrote:
> Ping...

Igor could you respond? It's been 3 weeks ...




* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-07-01  9:23     ` Robert Hoo
  2022-07-19  2:46       ` Robert Hoo
@ 2022-07-21  8:58       ` Igor Mammedov
  2022-07-27  5:22         ` Robert Hoo
  1 sibling, 1 reply; 20+ messages in thread
From: Igor Mammedov @ 2022-07-21  8:58 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Fri, 01 Jul 2022 17:23:04 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

> On Thu, 2022-06-16 at 14:32 +0200, Igor Mammedov wrote:
> > On Mon, 30 May 2022 11:40:45 +0800
> > Robert Hoo <robert.hu@linux.intel.com> wrote:
> >   
> > > Recent ACPI spec [1] has defined NVDIMM Label Methods _LS{I,R,W},
> > > which deprecate the corresponding _DSM Functions defined by the PMEM
> > > _DSM Interface spec [2].
> > > 
> > > In this implementation, we do 2 things:
> > > 1. Generalize the QEMU<->ACPI BIOS NVDIMM interface and wrap it with
> > > ACPI method dispatch; _DSM is one of the branches. This also paves
> > > the way for adding other ACPI methods for NVDIMM.
> > > 2. Add the _LS{I,R,W} methods to each NVDIMM device in the SSDT.
> > > The ASL form of the SSDT changes can be found in the next
> > > tests/qtest/bios-tables-test commit message.
> > > 
> > > [1] ACPI Spec v6.4, 6.5.10 NVDIMM Label Methods
> > > https://uefi.org/sites/default/files/resources/ACPI_Spec_6_4_Jan22.pdf
> > > [2] Intel PMEM _DSM Interface Spec v2.0, 3.10 Deprecated Functions
> > > https://pmem.io/documents/IntelOptanePMem_DSM_Interface-V2.0.pdf
> > > 
> > > Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> > > Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> > > ---
> > >  hw/acpi/nvdimm.c        | 424 +++++++++++++++++++++++++++++++---------
> > 
> > This patch is too large and doing too many things to be reviewable.
> > It needs to be split into smaller distinct chunks.
> > (however hold your horses and read on)
> > 
> > The patch is too intrusive, and my hunch is that it breaks the
> > ABI and needs a bunch of compat knobs to work properly, which
> > I'd like to avoid unless there is no other way around
> > the problem.
> 
> Does the ABI you mentioned here refer to the "struct NvdimmMthdIn{}" stuff,
> and do the compat knobs refer to the related functions' input/output params?

The ABI is the set of structures through which the guest and QEMU pass
information to each other. And the knobs in this case would be compat
variable[s] to keep the old behavior in place for old machine types.

> My thought is that eventually, sooner or later, more ACPI methods will
> be implemented on request, although for now we can play the trick of
> wrapping new methods over the pipe of the old _DSM implementation.
> Though this changes the existing struct NvdimmDsmIn {} a little, it
> paves the way for the future; and actually the change is more an
> extension or generalization than a fundamental change to the framework.
> 
> In short, my point is that the change/generalization/extension will be
> inevitable eventually, even if not now.

Expanding the ABI (the interface between host and guest) has 2 drawbacks:
 * it exposes more of the VMM's attack surface to a hostile guest
   and raises the chance that a vulnerability slips through
   review/testing
 * migration-wise, QEMU has to support any ABI for years,
   and not only the latest and greatest interface but also old
   ones, to keep a guest started on an older QEMU working across
   migration; so any ABI change should be considered very
   carefully before being implemented, otherwise it all
   quickly snowballs into an unsupportable mess of compat
   variables smeared across host/guest.
   Reducing the exposed ABI and the constant need to expand it
   was a reason why we moved ACPI code from firmware
   into QEMU, so we could describe hardware without the costs
   associated with maintaining an ABI.

There might be a need to extend the ABI eventually, but not in this case.

> > I was skeptical about this approach during v1 review and
> > now I'm pretty much sure it's over-engineered and we can
> > just repack data we receive from existing label _DSM functions
> > to provide _LS{I,R,W} like it was suggested in v1.
> > It will be much simpler and affect only AML side without
> > complicating ABI and without any compat cruft and will work
> > with ping-pong migration without any issues.  
> 
> Ostensibly it may look simpler, but actually it isn't, I think. The AML
> "common pipe" NCAL() is already complex; it packs all the _DSM and NFIT()
> function logic there, and packing new stuff in/through it will be bug-prone.
> Though this time we could avoid touching it, since the new ACPI methods
> that deprecate the old _DSM functions are functionally almost the same.
> But how about next time? Are we always going to pack new method logic
> into NCAL()?
> My point is that we should implement new methods on their own; of course,
> as a general programming rule, we can/should abstract common routines,
> but not pack everything into one large function.
> > 
> >   
> > >  include/hw/mem/nvdimm.h |   6 +
> > >  2 files changed, 338 insertions(+), 92 deletions(-)
> > > 
> > > diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> > > index 59b42afcf1..50ee85866b 100644
> > > --- a/hw/acpi/nvdimm.c
> > > +++ b/hw/acpi/nvdimm.c
> > > @@ -416,17 +416,22 @@ static void nvdimm_build_nfit(NVDIMMState *state, GArray *table_offsets,
> > >  
> > >  #define NVDIMM_DSM_MEMORY_SIZE      4096
> > >  
> > > -struct NvdimmDsmIn {
> > > +struct NvdimmMthdIn {
> > >      uint32_t handle;
> > > +    uint32_t method;
> > > +    uint8_t  args[4088];
> > > +} QEMU_PACKED;
> > > +typedef struct NvdimmMthdIn NvdimmMthdIn;
> > > +struct NvdimmDsmIn {
> > >      uint32_t revision;
> > >      uint32_t function;
> > >      /* the remaining size in the page is used by arg3. */
> > >      union {
> > > -        uint8_t arg3[4084];
> > > +        uint8_t arg3[4080];
> > >      };
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmDsmIn NvdimmDsmIn;
> > > -QEMU_BUILD_BUG_ON(sizeof(NvdimmDsmIn) != NVDIMM_DSM_MEMORY_SIZE);
> > > +QEMU_BUILD_BUG_ON(sizeof(NvdimmMthdIn) != NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmDsmOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -470,7 +475,8 @@ struct NvdimmFuncGetLabelDataIn {
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncGetLabelDataIn NvdimmFuncGetLabelDataIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncGetLabelDataIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncGetLabelDataOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -488,14 +494,16 @@ struct NvdimmFuncSetLabelDataIn {
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncReadFITIn {
> > >      uint32_t offset; /* the offset into FIT buffer. */
> > >  } QEMU_PACKED;
> > >  typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
> > >  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> > > -                  offsetof(NvdimmDsmIn, arg3) > NVDIMM_DSM_MEMORY_SIZE);
> > > +                  offsetof(NvdimmDsmIn, arg3) + offsetof(NvdimmMthdIn, args) >
> > > +                  NVDIMM_DSM_MEMORY_SIZE);
> > >  
> > >  struct NvdimmFuncReadFITOut {
> > >      /* the size of buffer filled by QEMU. */
> > > @@ -636,7 +644,8 @@ static uint32_t nvdimm_get_max_xfer_label_size(void)
> > >       * the max data ACPI can write one time which is transferred by
> > >       * 'Set Namespace Label Data' function.
> > >       */
> > > -    max_set_size = dsm_memory_size - offsetof(NvdimmDsmIn, arg3) -
> > > +    max_set_size = dsm_memory_size - offsetof(NvdimmMthdIn, args) -
> > > +                   offsetof(NvdimmDsmIn, arg3) -
> > >                     sizeof(NvdimmFuncSetLabelDataIn);
> > >  
> > >      return MIN(max_get_size, max_set_size);
> > > @@ -697,16 +706,15 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
> > >  /*
> > >   * DSM Spec Rev1 4.5 Get Namespace Label Data (Function Index 5).
> > >   */
> > > -static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > > -                                      hwaddr dsm_mem_addr)
> > > +static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm,
> > > +                                    NvdimmFuncGetLabelDataIn *get_label_data,
> > > +                                    hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > > -    NvdimmFuncGetLabelDataIn *get_label_data;
> > >      NvdimmFuncGetLabelDataOut *get_label_data_out;
> > >      uint32_t status;
> > >      int size;
> > >  
> > > -    get_label_data = (NvdimmFuncGetLabelDataIn *)in->arg3;
> > >      get_label_data->offset = le32_to_cpu(get_label_data->offset);
> > >      get_label_data->length = le32_to_cpu(get_label_data->length);
> > >  
> > > @@ -737,15 +745,13 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > >  /*
> > >   * DSM Spec Rev1 4.6 Set Namespace Label Data (Function Index 6).
> > >   */
> > > -static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > > +static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm,
> > > +                                      NvdimmFuncSetLabelDataIn *set_label_data,
> > >                                        hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMClass *nvc = NVDIMM_GET_CLASS(nvdimm);
> > > -    NvdimmFuncSetLabelDataIn *set_label_data;
> > >      uint32_t status;
> > >  
> > > -    set_label_data = (NvdimmFuncSetLabelDataIn *)in->arg3;
> > > -
> > >      set_label_data->offset = le32_to_cpu(set_label_data->offset);
> > >      set_label_data->length = le32_to_cpu(set_label_data->length);
> > >  
> > > @@ -760,19 +766,21 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
> > >      }
> > >  
> > >      assert(offsetof(NvdimmDsmIn, arg3) + sizeof(*set_label_data) +
> > > -                    set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE);
> > > +           set_label_data->length <= NVDIMM_DSM_MEMORY_SIZE -
> > > +           offsetof(NvdimmMthdIn, args));
> > >  
> > >      nvc->write_label_data(nvdimm, set_label_data->in_buf,
> > >                            set_label_data->length, set_label_data->offset);
> > >      nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
> > >  }
> > >  
> > > -static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > > +static void nvdimm_dsm_device(uint32_t nv_handle, NvdimmDsmIn *dsm_in,
> > > +                                    hwaddr dsm_mem_addr)
> > >  {
> > > -    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(in->handle);
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > >  
> > >      /* See the comments in nvdimm_dsm_root(). */
> > > -    if (!in->function) {
> > > +    if (!dsm_in->function) {
> > >          uint32_t supported_func = 0;
> > >  
> > >          if (nvdimm && nvdimm->label_size) {
> > > @@ -794,7 +802,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >      }
> > >  
> > >      /* Encode DSM function according to DSM Spec Rev1. */
> > > -    switch (in->function) {
> > > +    switch (dsm_in->function) {
> > >      case 4 /* Get Namespace Label Size */:
> > >          if (nvdimm->label_size) {
> > >              nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > > @@ -803,13 +811,17 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >          break;
> > >      case 5 /* Get Namespace Label Data */:
> > >          if (nvdimm->label_size) {
> > > -            nvdimm_dsm_get_label_data(nvdimm, in, dsm_mem_addr);
> > > +            nvdimm_dsm_get_label_data(nvdimm,
> > > +                                      (NvdimmFuncGetLabelDataIn *)dsm_in->arg3,
> > > +                                      dsm_mem_addr);
> > >              return;
> > >          }
> > >          break;
> > >      case 0x6 /* Set Namespace Label Data */:
> > >          if (nvdimm->label_size) {
> > > -            nvdimm_dsm_set_label_data(nvdimm, in, dsm_mem_addr);
> > > +            nvdimm_dsm_set_label_data(nvdimm,
> > > +                        (NvdimmFuncSetLabelDataIn *)dsm_in->arg3,
> > > +                        dsm_mem_addr);
> > >              return;
> > >          }
> > >          break;
> > > @@ -819,67 +831,128 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> > >  }
> > >  
> > >  static uint64_t
> > > -nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
> > > +nvdimm_method_read(void *opaque, hwaddr addr, unsigned size)
> > >  {
> > > -    nvdimm_debug("BUG: we never read _DSM IO Port.\n");
> > > +    nvdimm_debug("BUG: we never read NVDIMM Method IO Port.\n");
> > >      return 0;
> > >  }
> > >  
> > >  static void
> > > -nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> > > +nvdimm_dsm_handle(void *opaque, NvdimmMthdIn *method_in, hwaddr dsm_mem_addr)
> > >  {
> > >      NVDIMMState *state = opaque;
> > > -    NvdimmDsmIn *in;
> > > -    hwaddr dsm_mem_addr = val;
> > > +    NvdimmDsmIn *dsm_in = (NvdimmDsmIn *)method_in->args;
> > >  
> > >      nvdimm_debug("dsm memory address 0x%" HWADDR_PRIx ".\n", dsm_mem_addr);
> > >  
> > > -    /*
> > > -     * The DSM memory is mapped to guest address space so an evil guest
> > > -     * can change its content while we are doing DSM emulation. Avoid
> > > -     * this by copying DSM memory to QEMU local memory.
> > > -     */
> > > -    in = g_new(NvdimmDsmIn, 1);
> > > -    cpu_physical_memory_read(dsm_mem_addr, in, sizeof(*in));
> > > -
> > > -    in->revision = le32_to_cpu(in->revision);
> > > -    in->function = le32_to_cpu(in->function);
> > > -    in->handle = le32_to_cpu(in->handle);
> > > -
> > > -    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n", in->revision,
> > > -                 in->handle, in->function);
> > > +    dsm_in->revision = le32_to_cpu(dsm_in->revision);
> > > +    dsm_in->function = le32_to_cpu(dsm_in->function);
> > >  
> > > +    nvdimm_debug("Revision 0x%x Handler 0x%x Function 0x%x.\n",
> > > +                 dsm_in->revision, method_in->handle, dsm_in->function);
> > >      /*
> > >       * Current NVDIMM _DSM Spec supports Rev1 and Rev2
> > >       * Intel® Optane Persistent Memory Module DSM Interface, Revision 2.0
> > >       */
> > > -    if (in->revision != 0x1 && in->revision != 0x2) {
> > > +    if (dsm_in->revision != 0x1 && dsm_in->revision != 0x2) {
> > >          nvdimm_debug("Revision 0x%x is not supported, expect 0x1 or 0x2.\n",
> > > -                     in->revision);
> > > +                     dsm_in->revision);
> > >          nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
> > > -        goto exit;
> > > +        return;
> > >      }
> > >  
> > > -    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > > -        nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
> > > -        goto exit;
> > > +    if (method_in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> > > +        nvdimm_dsm_handle_reserved_root_method(state, dsm_in, dsm_mem_addr);
> > > +        return;
> > >      }
> > >  
> > >       /* Handle 0 is reserved for NVDIMM Root Device. */
> > > -    if (!in->handle) {
> > > -        nvdimm_dsm_root(in, dsm_mem_addr);
> > > -        goto exit;
> > > +    if (!method_in->handle) {
> > > +        nvdimm_dsm_root(dsm_in, dsm_mem_addr);
> > > +        return;
> > >      }
> > >  
> > > -    nvdimm_dsm_device(in, dsm_mem_addr);
> > > +    nvdimm_dsm_device(method_in->handle, dsm_in, dsm_mem_addr);
> > > +}
> > >  
> > > -exit:
> > > -    g_free(in);
> > > +static void nvdimm_lsi_handle(uint32_t nv_handle, hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_label_size(nvdimm, dsm_mem_addr);
> > > +    }
> > > +
> > > +    return;
> > > +}
> > > +
> > > +static void nvdimm_lsr_handle(uint32_t nv_handle,
> > > +                                    void *data,
> > > +                                    hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +    NvdimmFuncGetLabelDataIn *get_label_data = data;
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_get_label_data(nvdimm, get_label_data, dsm_mem_addr);
> > > +    }
> > > +    return;
> > > +}
> > > +
> > > +static void nvdimm_lsw_handle(uint32_t nv_handle,
> > > +                                    void *data,
> > > +                                    hwaddr dsm_mem_addr)
> > > +{
> > > +    NVDIMMDevice *nvdimm = nvdimm_get_device_by_handle(nv_handle);
> > > +    NvdimmFuncSetLabelDataIn *set_label_data = data;
> > > +
> > > +    if (nvdimm->label_size) {
> > > +        nvdimm_dsm_set_label_data(nvdimm, set_label_data, dsm_mem_addr);
> > > +    }
> > > +    return;
> > > +}
> > > +
> > > +static void
> > > +nvdimm_method_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
> > > +{
> > > +    NvdimmMthdIn *method_in;
> > > +    hwaddr dsm_mem_addr = val;
> > > +
> > > +    /*
> > > +     * The DSM memory is mapped to guest address space so an evil
> > > guest
> > > +     * can change its content while we are doing DSM emulation.
> > > Avoid
> > > +     * this by copying DSM memory to QEMU local memory.
> > > +     */
> > > +    method_in = g_new(NvdimmMthdIn, 1);
> > > +    cpu_physical_memory_read(dsm_mem_addr, method_in,
> > > sizeof(*method_in));
> > > +
> > > +    method_in->handle = le32_to_cpu(method_in->handle);
> > > +    method_in->method = le32_to_cpu(method_in->method);
> > > +
> > > +    switch (method_in->method) {
> > > +    case NVDIMM_METHOD_DSM:
> > > +        nvdimm_dsm_handle(opaque, method_in, dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSI:
> > > +        nvdimm_lsi_handle(method_in->handle, dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSR:
> > > +        nvdimm_lsr_handle(method_in->handle, method_in->args,
> > > dsm_mem_addr);
> > > +        break;
> > > +    case NVDIMM_METHOD_LSW:
> > > +        nvdimm_lsw_handle(method_in->handle, method_in->args,
> > > dsm_mem_addr);
> > > +        break;
> > > +    default:
> > > +        nvdimm_debug("%s: Unknown method 0x%x\n", __func__,
> > > method_in->method);
> > > +        break;
> > > +    }
> > > +
> > > +    g_free(method_in);
> > >  }
> > >  
> > > -static const MemoryRegionOps nvdimm_dsm_ops = {
> > > -    .read = nvdimm_dsm_read,
> > > -    .write = nvdimm_dsm_write,
> > > +static const MemoryRegionOps nvdimm_method_ops = {
> > > +    .read = nvdimm_method_read,
> > > +    .write = nvdimm_method_write,
> > >      .endianness = DEVICE_LITTLE_ENDIAN,
> > >      .valid = {
> > >          .min_access_size = 4,
> > > @@ -899,12 +972,12 @@ void nvdimm_init_acpi_state(NVDIMMState
> > > *state, MemoryRegion *io,
> > >                              FWCfgState *fw_cfg, Object *owner)
> > >  {
> > >      state->dsm_io = dsm_io;
> > > -    memory_region_init_io(&state->io_mr, owner, &nvdimm_dsm_ops,
> > > state,
> > > +    memory_region_init_io(&state->io_mr, owner,
> > > &nvdimm_method_ops, state,
> > >                            "nvdimm-acpi-io", dsm_io.bit_width >>
> > > 3);
> > >      memory_region_add_subregion(io, dsm_io.address, &state->io_mr);
> > >  
> > >      state->dsm_mem = g_array_new(false, true /* clear */, 1);
> > > -    acpi_data_push(state->dsm_mem, sizeof(NvdimmDsmIn));
> > > +    acpi_data_push(state->dsm_mem, sizeof(NvdimmMthdIn));
> > >      fw_cfg_add_file(fw_cfg, NVDIMM_DSM_MEM_FILE, state->dsm_mem->data,
> > >                      state->dsm_mem->len);
> > >  
> > > @@ -918,13 +991,22 @@ void nvdimm_init_acpi_state(NVDIMMState
> > > *state, MemoryRegion *io,
> > >  #define NVDIMM_DSM_IOPORT       "NPIO"
> > >  
> > >  #define NVDIMM_DSM_NOTIFY       "NTFI"
> > > +#define NVDIMM_DSM_METHOD       "MTHD"
> > >  #define NVDIMM_DSM_HANDLE       "HDLE"
> > >  #define NVDIMM_DSM_REVISION     "REVS"
> > >  #define NVDIMM_DSM_FUNCTION     "FUNC"
> > >  #define NVDIMM_DSM_ARG3         "FARG"
> > >  
> > > -#define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
> > > -#define NVDIMM_DSM_OUT_BUF      "ODAT"
> > > +#define NVDIMM_DSM_OFFSET       "OFST"
> > > +#define NVDIMM_DSM_TRANS_LEN    "TRSL"
> > > +#define NVDIMM_DSM_IN_BUFF      "IDAT"
> > > +
> > > +#define NVDIMM_DSM_OUT_BUF_SIZE     "RLEN"
> > > +#define NVDIMM_DSM_OUT_BUF          "ODAT"
> > > +#define NVDIMM_DSM_OUT_STATUS       "STUS"
> > > +#define NVDIMM_DSM_OUT_LSA_SIZE     "SIZE"
> > > +#define NVDIMM_DSM_OUT_MAX_TRANS    "MAXT"
> > > +
> > >  
> > >  #define NVDIMM_DSM_RFIT_STATUS  "RSTA"
> > >  
> > > @@ -938,7 +1020,6 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >      Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf,
> > > *dsm_out_buf_size;
> > >      Aml *whilectx, *offset;
> > >      uint8_t byte_list[1];
> > > -    AmlRegionSpace rs;
> > >  
> > >      method = aml_method(NVDIMM_COMMON_DSM, 5, AML_SERIALIZED);
> > >      uuid = aml_arg(0);
> > > @@ -949,37 +1030,15 @@ static void nvdimm_build_common_dsm(Aml
> > > *dev,
> > >  
> > >      aml_append(method, aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > dsm_mem));
> > >  
> > > -    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > > -        rs = AML_SYSTEM_IO;
> > > -    } else {
> > > -        rs = AML_SYSTEM_MEMORY;
> > > -    }
> > > -
> > > -    /* map DSM memory and IO into ACPI namespace. */
> > > -    aml_append(method, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > > -               aml_int(nvdimm_state->dsm_io.address),
> > > -               nvdimm_state->dsm_io.bit_width >> 3));
> > >      aml_append(method, aml_operation_region(NVDIMM_DSM_MEMORY,
> > > -               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmDsmIn)));
> > > -
> > > -    /*
> > > -     * DSM notifier:
> > > -     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and
> > > notify QEMU to
> > > -     *                    emulate the access.
> > > -     *
> > > -     * It is the IO port so that accessing them will cause VM-
> > > exit, the
> > > -     * control will be transferred to QEMU.
> > > -     */
> > > -    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > -                      AML_PRESERVE);
> > > -    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > > -               nvdimm_state->dsm_io.bit_width));
> > > -    aml_append(method, field);
> > > +               AML_SYSTEM_MEMORY, dsm_mem, sizeof(NvdimmMthdIn)));
> > >  
> > >      /*
> > >       * DSM input:
> > >       * NVDIMM_DSM_HANDLE: store device's handle, it's zero if the
> > > _DSM call
> > >       *                    happens on NVDIMM Root Device.
> > > +     * NVDIMM_DSM_METHOD: ACPI method indicator, to distinguish
> > > _DSM and
> > > +     *                    other ACPI methods.
> > >       * NVDIMM_DSM_REVISION: store the Arg1 of _DSM call.
> > >       * NVDIMM_DSM_FUNCTION: store the Arg2 of _DSM call.
> > >       * NVDIMM_DSM_ARG3: store the Arg3 of _DSM call which is a
> > > Package
> > > @@ -991,13 +1050,16 @@ static void nvdimm_build_common_dsm(Aml
> > > *dev,
> > >      field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > >                        AML_PRESERVE);
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > -               sizeof(typeof_field(NvdimmDsmIn, handle)) *
> > > BITS_PER_BYTE));
> > > +               sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > > BITS_PER_BYTE));
> > > +    aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +               sizeof(typeof_field(NvdimmMthdIn, method)) *
> > > BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_REVISION,
> > >                 sizeof(typeof_field(NvdimmDsmIn, revision)) *
> > > BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_FUNCTION,
> > >                 sizeof(typeof_field(NvdimmDsmIn, function)) *
> > > BITS_PER_BYTE));
> > >      aml_append(field, aml_named_field(NVDIMM_DSM_ARG3,
> > > -         (sizeof(NvdimmDsmIn) - offsetof(NvdimmDsmIn, arg3)) *
> > > BITS_PER_BYTE));
> > > +         (sizeof(NvdimmMthdIn) - offsetof(NvdimmMthdIn, args) -
> > > +          offsetof(NvdimmDsmIn, arg3)) * BITS_PER_BYTE));
> > >      aml_append(method, field);
> > >  
> > >      /*
> > > @@ -1065,6 +1127,7 @@ static void nvdimm_build_common_dsm(Aml *dev,
> > >       * it reserves 0 for root device and is the handle for NVDIMM
> > > devices.
> > >       * See the comments in nvdimm_slot_to_handle().
> > >       */
> > > +    aml_append(method, aml_store(aml_int(0),
> > > aml_name(NVDIMM_DSM_METHOD)));
> > >      aml_append(method, aml_store(handle,
> > > aml_name(NVDIMM_DSM_HANDLE)));
> > >      aml_append(method, aml_store(aml_arg(1),
> > > aml_name(NVDIMM_DSM_REVISION)));
> > >      aml_append(method, aml_store(function,
> > > aml_name(NVDIMM_DSM_FUNCTION)));
> > > @@ -1250,6 +1313,7 @@ static void nvdimm_build_fit(Aml *dev)
> > >  static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t
> > > ram_slots)
> > >  {
> > >      uint32_t slot;
> > > +    Aml *method, *pkg, *field;
> > >  
> > >      for (slot = 0; slot < ram_slots; slot++) {
> > >          uint32_t handle = nvdimm_slot_to_handle(slot);
> > > @@ -1266,6 +1330,155 @@ static void nvdimm_build_nvdimm_devices(Aml
> > > *root_dev, uint32_t ram_slots)
> > >           * table NFIT or _FIT.
> > >           */
> > >          aml_append(nvdimm_dev, aml_name_decl("_ADR",
> > > aml_int(handle)));
> > > +        aml_append(nvdimm_dev,
> > > aml_operation_region(NVDIMM_DSM_MEMORY,
> > > +                   AML_SYSTEM_MEMORY,
> > > aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                   sizeof(NvdimmMthdIn)));
> > > +
> > > +        /* ACPI 6.4: 6.5.10 NVDIMM Label Methods, _LS{I,R,W} */
> > > +
> > > +        /* Begin of _LSI Block */
> > > +        method = aml_method("_LSI", 0, AML_SERIALIZED);
> > > +        /* _LSI Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > > BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > > BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSI Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > > len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_LSA_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > > label_size)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field,
> > > aml_named_field(NVDIMM_DSM_OUT_MAX_TRANS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelSizeOut,
> > > max_xfer)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle),
> > > +                                      aml_name(NVDIMM_DSM_HANDLE))
> > > );
> > > +        aml_append(method, aml_store(aml_int(0x100),
> > > +                                      aml_name(NVDIMM_DSM_METHOD))
> > > );
> > > +        aml_append(method,
> > > aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY))
> > > );
> > > +
> > > +        pkg = aml_package(3);
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_LSA_SIZE));
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_MAX_TRANS));
> > > +
> > > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > > +
> > > +        aml_append(method, aml_return(aml_name("RPKG")));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSI Block */
> > > +
> > > +
> > > +        /* Begin of _LSR Block */
> > > +        method = aml_method("_LSR", 2, AML_SERIALIZED);
> > > +
> > > +        /* _LSR Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn,
> > > offset)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataIn,
> > > length)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSR Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
> > > len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmFuncGetLabelDataOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF,
> > > +                   (NVDIMM_DSM_MEMORY_SIZE -
> > > +                    offsetof(NvdimmFuncGetLabelDataOut, out_buf))
> > > *
> > > +                    BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle),
> > > +                                      aml_name(NVDIMM_DSM_HANDLE))
> > > );
> > > +        aml_append(method, aml_store(aml_int(0x101),
> > > +                                      aml_name(NVDIMM_DSM_METHOD))
> > > );
> > > +        aml_append(method, aml_store(aml_arg(0),
> > > aml_name(NVDIMM_DSM_OFFSET)));
> > > +        aml_append(method, aml_store(aml_arg(1),
> > > +                                      aml_name(NVDIMM_DSM_TRANS_LEN)));
> > > +        aml_append(method,
> > > aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY))
> > > );
> > > +
> > > +        aml_append(method, aml_store(aml_shiftleft(aml_arg(1),
> > > aml_int(3)),
> > > +                                         aml_local(1)));
> > > +        aml_append(method,
> > > aml_create_field(aml_name(NVDIMM_DSM_OUT_BUF),
> > > +                   aml_int(0), aml_local(1), "OBUF"));
> > > +
> > > +        pkg = aml_package(2);
> > > +        aml_append(pkg, aml_name(NVDIMM_DSM_OUT_STATUS));
> > > +        aml_append(pkg, aml_name("OBUF"));
> > > +        aml_append(method, aml_name_decl("RPKG", pkg));
> > > +
> > > +        aml_append(method, aml_return(aml_name("RPKG")));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSR Block */
> > > +
> > > +        /* Begin of _LSW Block */
> > > +        method = aml_method("_LSW", 3, AML_SERIALIZED);
> > > +        /* _LSW Input field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_HANDLE,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, handle)) *
> > > BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_METHOD,
> > > +                   sizeof(typeof_field(NvdimmMthdIn, method)) *
> > > BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OFFSET,
> > > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn,
> > > offset)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_TRANS_LEN,
> > > +                   sizeof(typeof_field(NvdimmFuncSetLabelDataIn,
> > > length)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_IN_BUFF,
> > > 32640));
> > > +        aml_append(method, field);
> > > +
> > > +        /* _LSW Output field */
> > > +        field = aml_field(NVDIMM_DSM_MEMORY, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                          AML_PRESERVE);
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_BUF_SIZE,
> > > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
> > > len)) *
> > > +                   BITS_PER_BYTE));
> > > +        aml_append(field, aml_named_field(NVDIMM_DSM_OUT_STATUS,
> > > +                   sizeof(typeof_field(NvdimmDsmFuncNoPayloadOut,
> > > +                   func_ret_status)) * BITS_PER_BYTE));
> > > +        aml_append(method, field);
> > > +
> > > +        aml_append(method, aml_store(aml_int(handle),
> > > aml_name(NVDIMM_DSM_HANDLE)));
> > > +        aml_append(method, aml_store(aml_int(0x102),
> > > aml_name(NVDIMM_DSM_METHOD)));
> > > +        aml_append(method, aml_store(aml_arg(0),
> > > aml_name(NVDIMM_DSM_OFFSET)));
> > > +        aml_append(method, aml_store(aml_arg(1),
> > > aml_name(NVDIMM_DSM_TRANS_LEN)));
> > > +        aml_append(method, aml_store(aml_arg(2),
> > > aml_name(NVDIMM_DSM_IN_BUFF)));
> > > +        aml_append(method,
> > > aml_store(aml_name(NVDIMM_ACPI_MEM_ADDR),
> > > +                                      aml_name(NVDIMM_DSM_NOTIFY))
> > > );
> > > +
> > > +        aml_append(method,
> > > aml_return(aml_name(NVDIMM_DSM_OUT_STATUS)));
> > > +        aml_append(nvdimm_dev, method); /* End of _LSW Block */
> > >  
> > >          nvdimm_build_device_dsm(nvdimm_dev, handle);
> > >          aml_append(root_dev, nvdimm_dev);
> > > @@ -1278,7 +1491,8 @@ static void nvdimm_build_ssdt(GArray
> > > *table_offsets, GArray *table_data,
> > >                                uint32_t ram_slots, const char
> > > *oem_id)
> > >  {
> > >      int mem_addr_offset;
> > > -    Aml *ssdt, *sb_scope, *dev;
> > > +    Aml *ssdt, *sb_scope, *dev, *field;
> > > +    AmlRegionSpace rs;
> > >      AcpiTable table = { .sig = "SSDT", .rev = 1,
> > >                          .oem_id = oem_id, .oem_table_id = "NVDIMM"
> > > };
> > >  
> > > @@ -1286,6 +1500,9 @@ static void nvdimm_build_ssdt(GArray
> > > *table_offsets, GArray *table_data,
> > >  
> > >      acpi_table_begin(&table, table_data);
> > >      ssdt = init_aml_allocator();
> > > +
> > > +    mem_addr_offset = build_append_named_dword(table_data,
> > > +                                               NVDIMM_ACPI_MEM_ADDR);
> > >      sb_scope = aml_scope("\\_SB");
> > >  
> > >      dev = aml_device("NVDR");
> > > @@ -1303,6 +1520,31 @@ static void nvdimm_build_ssdt(GArray
> > > *table_offsets, GArray *table_data,
> > >       */
> > >      aml_append(dev, aml_name_decl("_HID",
> > > aml_string("ACPI0012")));
> > >  
> > > +    if (nvdimm_state->dsm_io.space_id == AML_AS_SYSTEM_IO) {
> > > +        rs = AML_SYSTEM_IO;
> > > +    } else {
> > > +        rs = AML_SYSTEM_MEMORY;
> > > +    }
> > > +
> > > +    /* map DSM memory and IO into ACPI namespace. */
> > > +    aml_append(dev, aml_operation_region(NVDIMM_DSM_IOPORT, rs,
> > > +               aml_int(nvdimm_state->dsm_io.address),
> > > +               nvdimm_state->dsm_io.bit_width >> 3));
> > > +
> > > +    /*
> > > +     * DSM notifier:
> > > +     * NVDIMM_DSM_NOTIFY: write the address of DSM memory and
> > > notify QEMU to
> > > +     *                    emulate the access.
> > > +     *
> > > +     * It is the IO port so that accessing them will cause VM-
> > > exit, the
> > > +     * control will be transferred to QEMU.
> > > +     */
> > > +    field = aml_field(NVDIMM_DSM_IOPORT, AML_DWORD_ACC,
> > > AML_NOLOCK,
> > > +                      AML_PRESERVE);
> > > +    aml_append(field, aml_named_field(NVDIMM_DSM_NOTIFY,
> > > +               nvdimm_state->dsm_io.bit_width));
> > > +    aml_append(dev, field);
> > > +
> > >      nvdimm_build_common_dsm(dev, nvdimm_state);
> > >  
> > >      /* 0 is reserved for root device. */
> > > @@ -1316,12 +1558,10 @@ static void nvdimm_build_ssdt(GArray
> > > *table_offsets, GArray *table_data,
> > >  
> > >      /* copy AML table into ACPI tables blob and patch header there
> > > */
> > >      g_array_append_vals(table_data, ssdt->buf->data, ssdt->buf->len);
> > > -    mem_addr_offset = build_append_named_dword(table_data,
> > > -                                               NVDIMM_ACPI_MEM_ADDR);
> > >  
> > >      bios_linker_loader_alloc(linker,
> > >                               NVDIMM_DSM_MEM_FILE, nvdimm_state->dsm_mem,
> > > -                             sizeof(NvdimmDsmIn), false /* high
> > > memory */);
> > > +                             sizeof(NvdimmMthdIn), false /* high
> > > memory */);
> > >      bios_linker_loader_add_pointer(linker,
> > >          ACPI_BUILD_TABLE_FILE, mem_addr_offset, sizeof(uint32_t),
> > >          NVDIMM_DSM_MEM_FILE, 0);
> > > diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
> > > index cf8f59be44..0206b6125b 100644
> > > --- a/include/hw/mem/nvdimm.h
> > > +++ b/include/hw/mem/nvdimm.h
> > > @@ -37,6 +37,12 @@
> > >          }                                                     \
> > >      } while (0)
> > >  
> > > +/* NVDIMM ACPI Methods */
> > > +#define NVDIMM_METHOD_DSM   0
> > > +#define NVDIMM_METHOD_LSI   0x100
> > > +#define NVDIMM_METHOD_LSR   0x101
> > > +#define NVDIMM_METHOD_LSW   0x102
> > > +
> > >  /*
> > >   * The minimum label data size is required by NVDIMM Namespace
> > >   * specification, see the chapter 2 Namespaces:  
> > 
> >   
> 



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-07-21  8:58       ` Igor Mammedov
@ 2022-07-27  5:22         ` Robert Hoo
  2022-07-28 14:30           ` Igor Mammedov
  0 siblings, 1 reply; 20+ messages in thread
From: Robert Hoo @ 2022-07-27  5:22 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Thu, 2022-07-21 at 10:58 +0200, Igor Mammedov wrote:
[...]
Thanks Igor for review.
> > > The patch is too intrusive, and my hunch is that it breaks the
> > > ABI and needs a bunch of compat knobs to work properly, which
> > > I'd like to avoid unless there is no other way around the
> > > problem.  
> > 
> > Is the ABI you mentioned here the "struct NvdimmMthdIn{}" stuff?
> > And do the compat knobs refer to the related functions' input/output
> > params?
> 
> The ABI is the set of structures through which the guest and QEMU
> pass information to each other. And the knobs in this case would be
> compat variable[s] to keep the old behavior in place for old machine
> types.

My humble opinion:
The compat variable changes here don't break the ABI; the ABI between
guest and host/QEMU is the ACPI spec, which we don't change and fully
conform to. Actually, we're implementing it.
E.g. with these patches, an old guest can boot up with no difference
or changes.
> 
> > My thought is that eventually, sooner or later, more ACPI methods
> > will be implemented on request, although for now we can play the
> > trick of wrapping new methods over the pipe of the old _DSM
> > implementation.
> > Though this changes the existing struct NvdimmDsmIn {} a little, it
> > paves the way for the future; actually the change is more an
> > extension or generalization, not a fundamental change to the
> > framework.
> > 
> > In short, my point is that the change/generalization/extension will
> > be inevitable, even if not at present.
> 
> Expanding the ABI (the interface between host & guest) has 2 drawbacks:
>  * it exposes more attack surface of the VMM to a hostile guest
>    and raises the chances that a vulnerability would slip through
>    review/testing

This patch doesn't increase the attack surface, I think.

>  * migration-wise, QEMU has to support any ABI for years,
>    and not only the latest and greatest interface but also old
>    ones, to keep guests started on older QEMU working across
>    migration. So any ABI change should be considered very
>    carefully before being implemented; otherwise it all
>    quickly snowballs into an unsupportable mess of compat
>    variables smeared across host/guest.
>    Reducing the exposed ABI and the constant need to expand it
>    was a reason why we moved ACPI code from firmware
>    into QEMU, so we could describe hardware without the costs
>    associated with maintaining an ABI.

Yeah, migration is the only broken thing. With this patch the guest
ACPI tables change, so live migration between new and old QEMUs will
have problems. But I think this is not the only example of this kind
of problem. What about other similar cases?

In fact, the point of our contention is around
https://www.qemu.org/docs/master/specs/acpi_nvdimm.html: whether or not
to change the implementation protocol with this patch. The protocol was
for _DSM only. Unless we're never going to support any other ACPI
methods, it should be updated; and the _LS{I,R,W} are ACPI methods. We
can play the trick in this special case, but definitely not next time.

I suggest doing it now; nevertheless, you maintainers make the final
decision.

> 
> There might be need to extend ABI eventually, but not in this case.
> 
> > > I was skeptical about this approach during v1 review and
> > > now I'm pretty much sure it's over-engineered and we can
> > > just repack data we receive from existing label _DSM functions
> > > to provide _LS{I,R,W}, as was suggested in v1.
> > > It will be much simpler and affect only AML side without
> > > complicating ABI and without any compat cruft and will work
> > > with ping-pong migration without any issues.  
> > 
> > Ostensibly it may look simpler, but actually it isn't, I think. The
> > AML "common pipe" NCAL() is already complex; it packs all the _DSM
> > and NFIT() function logic there, and packing new stuff in/through
> > it will be bug-prone.
> > Though this time we can avoid touching it, as the new ACPI methods
> > deprecating the old _DSM are functionally almost the same.
> > How about next time? Are we always going to pack new method logic
> > in NCAL()?
> > My point is that we should implement new methods as themselves; of
> > course, as a general programming rule, we can/should abstract
> > common routines, but not pack them into one large function.
> > > 
> > >   
[...]




* Re: [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods
  2022-07-27  5:22         ` Robert Hoo
@ 2022-07-28 14:30           ` Igor Mammedov
  0 siblings, 0 replies; 20+ messages in thread
From: Igor Mammedov @ 2022-07-28 14:30 UTC (permalink / raw)
  To: Robert Hoo
  Cc: mst, xiaoguangrong.eric, ani, dan.j.williams, jingqi.liu,
	qemu-devel, robert.hu

On Wed, 27 Jul 2022 13:22:34 +0800
Robert Hoo <robert.hu@linux.intel.com> wrote:

> On Thu, 2022-07-21 at 10:58 +0200, Igor Mammedov wrote:
> [...]
> Thanks Igor for review.
> > > > The patch is too intrusive, and my hunch is that it breaks the
> > > > ABI and needs a bunch of compat knobs to work properly, which
> > > > I'd like to avoid unless there is no other way around the
> > > > problem.    
> > > 
> > > Is the ABI you mentioned here the "struct NvdimmMthdIn{}" stuff?
> > > And do the compat knobs refer to the related functions'
> > > input/output params?  
> > 
> > The ABI is the set of structures through which the guest and QEMU
> > pass information to each other. And the knobs in this case would
> > be compat variable[s] to keep the old behavior in place for old
> > machine types.  
> 
> My humble opinion:
> The compat variable changes here don't break the ABI; the ABI between
> guest and host/QEMU is the ACPI spec, which we don't change and fully
> conform to. Actually, we're implementing it.
> E.g. with these patches, an old guest can boot up with no difference
> or changes.

It's not about booting but about migration.
Boot on an old QEMU and then migrate to one with your patches,
then make the guest use _DSM again. You will see that the migrated
guest still uses the _old_ ACPI tables/AML, and the ABI in the new
QEMU _must_ be compatible with that.

As for the patch, it's too big, and looking at it I wasn't
able to convince myself that it's correct.

 
> >   
> > > My thought is that eventually, sooner or later, more ACPI methods
> > > will be implemented on request, although for now we can play the
> > > trick of wrapping new methods over the pipe of the old _DSM
> > > implementation.
> > > Though this changes the existing struct NvdimmDsmIn {} a little,
> > > it paves the way for the future; actually the change is more an
> > > extension or generalization, not a fundamental change to the
> > > framework.
> > > 
> > > In short, my point is that the change/generalization/extension
> > > will be inevitable, even if not at present.  
> > 
> > Expanding the ABI (the interface between host & guest) has 2 drawbacks:
> >  * it exposes more attack surface of the VMM to a hostile guest
> >    and raises the chances that a vulnerability would slip through
> >    review/testing  
> 
> This patch doesn't increase the attack surface, I think.
> 
> >  * migration-wise, QEMU has to support any ABI for years,
> >    and not only the latest and greatest interface but also old
> >    ones, to keep guests started on older QEMU working across
> >    migration. So any ABI change should be considered very
> >    carefully before being implemented; otherwise it all
> >    quickly snowballs into an unsupportable mess of compat
> >    variables smeared across host/guest.
> >    Reducing the exposed ABI and the constant need to expand it
> >    was a reason why we moved ACPI code from firmware
> >    into QEMU, so we could describe hardware without the costs
> >    associated with maintaining an ABI.  
> 
> Yeah, migration is the only broken thing. With this patch the guest
> ACPI tables change, so live migration between new and old QEMUs will
> have problems. But I think this is not the only example of this kind
> of problem. What about other similar cases?

Upstream policy for versioned machine types (pc-*/q35-*/...) is that
forward migration _must_ work.
If you consider that your device should be supported/usable downstream,
you also need to take backward migration into account as well.


> In fact, the point of our contention is around
> https://www.qemu.org/docs/master/specs/acpi_nvdimm.html: whether or
> not to change the implementation protocol with this patch. The
> protocol was for _DSM only. Unless we're never going to support any
> other ACPI methods, it should be updated; and the _LS{I,R,W} are ACPI
> methods. We can play the trick in this special case, but definitely
> not next time.
> 
> I suggest doing it now; nevertheless, you maintainers make the final
> decision.

Not for this case (i.e. make the patches minimal, touching only the
AML side and reusing data that QEMU already provides via MMIO).

If the ABI needs extending in the future, that should be discussed
separately when there is an actual need for it.

> > 
> > There might be need to extend ABI eventually, but not in this case.
> >   
> > > > I was skeptical about this approach during v1 review and
> > > > now I'm pretty much sure it's over-engineered and we can
> > > > just repack data we receive from existing label _DSM functions
> > > > to provide _LS{I,R,W}, as was suggested in v1.
> > > > It will be much simpler and affect only AML side without
> > > > complicating ABI and without any compat cruft and will work
> > > > with ping-pong migration without any issues.    
> > > 
> > > Ostensibly it may look simpler, but actually it isn't, I think.
> > > The AML "common pipe" NCAL() is already complex; it packs all the
> > > _DSM and NFIT() function logic there, and packing new stuff
> > > in/through it will be bug-prone.
> > > Though this time we can avoid touching it, as the new ACPI
> > > methods deprecating the old _DSM are functionally almost the
> > > same.
> > > How about next time? Are we always going to pack new method logic
> > > in NCAL()?
> > > My point is that we should implement new methods as themselves;
> > > of course, as a general programming rule, we can/should abstract
> > > common routines, but not pack them into one large function.  
> > > > 
> > > >     
> [...]
> 




end of thread, other threads:[~2022-07-28 14:55 UTC | newest]

Thread overview: 20+ messages
2022-05-30  3:40 [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Robert Hoo
2022-05-30  3:40 ` [QEMU PATCH v2 1/6] tests/acpi: allow SSDT changes Robert Hoo
2022-06-16 11:24   ` Igor Mammedov
2022-05-30  3:40 ` [QEMU PATCH v2 2/6] acpi/ssdt: Fix aml_or() and aml_and() in if clause Robert Hoo
2022-05-30  3:40 ` [QEMU PATCH v2 3/6] acpi/nvdimm: NVDIMM _DSM Spec supports revision 2 Robert Hoo
2022-06-16 11:38   ` Igor Mammedov
2022-07-01  8:31     ` Robert Hoo
2022-05-30  3:40 ` [QEMU PATCH v2 4/6] nvdimm: Implement ACPI NVDIMM Label Methods Robert Hoo
2022-06-16 12:32   ` Igor Mammedov
2022-07-01  9:23     ` Robert Hoo
2022-07-19  2:46       ` Robert Hoo
2022-07-19 11:32         ` Michael S. Tsirkin
2022-07-21  8:58       ` Igor Mammedov
2022-07-27  5:22         ` Robert Hoo
2022-07-28 14:30           ` Igor Mammedov
2022-05-30  3:40 ` [QEMU PATCH v2 5/6] test/acpi/bios-tables-test: SSDT: update golden master binaries Robert Hoo
2022-05-30  3:40 ` [QEMU PATCH v2 6/6] acpi/nvdimm: Define trace events for NVDIMM and substitute nvdimm_debug() Robert Hoo
2022-06-16 12:35   ` Igor Mammedov
2022-07-01  8:35     ` Robert Hoo
2022-06-06  6:26 ` [QEMU PATCH v2 0/6] Support ACPI NVDIMM Label Methods Hu, Robert
