qemu-devel.nongnu.org archive mirror
* [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
@ 2020-09-07 11:23 Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features Igor Mammedov
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

v5:
  - fix hotplug on Windows when there are more than 256 possible CPUs
    (Windows isn't able to handle a VarPackage with over 255 elements,
     so process CPUs in batches)
  - fix off-by-one in package length (Laszlo)
  - fix not selecting CPU before clearing insert event (Laszlo)
  - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero)) (Laszlo)
  - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
    in smaller chunks (Laszlo)
  - fix comment to match spec (Laszlo)
  - reorder aml_lor() and aml_land() in header (Laszlo)
v4:
  - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
v3:
  - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
    so apply that before this patch
v2:
  - AML: clean is_inserted flag only after SMI callback
  - make x-smi-cpu-hotunplug false by default
  - massage error hint on not supported unplug
v1:
  - fix typos and some phrases (Laszlo)
  - add unplug check (Laszlo)
  - redo AML scan logic to avoid race when adding multiple CPUs

CPU hotplug with Secure Boot was not really supported and firmware wasn't aware
of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
a locked SMI handler RAM area to make sure that the guest OS wasn't able to
inject its own SMI handler, and OVMF added initial CPU hotplug support.

This series is the QEMU part of that support, which lets OVMF tell QEMU that
CPU hotplug with SMI broadcast enabled is supported, so that QEMU is able
to prevent hotplug in case it's not supported and to trigger an SMI on hotplug
when necessary.

Igor Mammedov (10):
  x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
  x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
    in use
  x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
  acpi: add aml_land() and aml_break() primitives
  tests: acpi: mark to be changed tables in
    bios-tables-test-allowed-diff
  x86: ich9: expose "smi_negotiated_features" as a QOM property
  x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
  x86: acpi: introduce the PCI0.SMI0 ACPI device
  x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  tests: acpi: update acpi blobs with new AML

 include/hw/acpi/aml-build.h       |   2 +
 include/hw/acpi/cpu.h             |   1 +
 include/hw/i386/ich9.h            |   4 +
 hw/acpi/aml-build.c               |  16 +++
 hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
 hw/acpi/ich9.c                    |  24 ++++-
 hw/i386/acpi-build.c              |  35 ++++++-
 hw/i386/pc.c                      |  15 ++-
 hw/isa/lpc_ich9.c                 |  16 +++
 tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
 tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
 tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
 tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
 tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
 tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
 tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
 tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
 tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
 tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
 tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
 tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
 tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
 tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
 tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
 tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
 tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
 tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
 27 files changed, 239 insertions(+), 30 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 02/10] x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is in use Igor Mammedov
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

It will allow firmware to notify QEMU that it requires an SMI to be
triggered on CPU hot[un]plug, so that it is able to account
for a hotplugged CPU and relocate it to a new SMM base, and/or safely
remove a CPU on unplug.

Using the negotiated features, follow-up patches will insert an SMI upcall
into the AML code, to make sure that firmware processes the hotplug before
the guest OS attempts to use the new CPU.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
---
v4:
  - fix 5.2 machine types so they won't apply pc_compat_5_1
     (Laszlo Ersek <lersek@redhat.com>)
v3:
  - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
    so apply that before this patch
v2:
  - rebase on top of 5.1 (move compat values to 5.1 machine)
  - make "x-smi-cpu-hotunplug" false by default (Laszlo Ersek <lersek@redhat.com>)

fixup
---
 include/hw/i386/ich9.h |  2 ++
 hw/i386/pc.c           |  4 +++-
 hw/isa/lpc_ich9.c      | 13 +++++++++++++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/hw/i386/ich9.h b/include/hw/i386/ich9.h
index a98d10b252..d1bb3f7bf0 100644
--- a/include/hw/i386/ich9.h
+++ b/include/hw/i386/ich9.h
@@ -247,5 +247,7 @@ typedef struct ICH9LPCState {
 
 /* bit positions used in fw_cfg SMI feature negotiation */
 #define ICH9_LPC_SMI_F_BROADCAST_BIT            0
+#define ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT          1
+#define ICH9_LPC_SMI_F_CPU_HOT_UNPLUG_BIT       2
 
 #endif /* HW_ICH9_H */
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index d11daacc23..f32e66c1b2 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -97,7 +97,9 @@
 #include "trace.h"
 #include CONFIG_DEVICES
 
-GlobalProperty pc_compat_5_1[] = {};
+GlobalProperty pc_compat_5_1[] = {
+    { "ICH9-LPC", "x-smi-cpu-hotplug", "off" },
+};
 const size_t pc_compat_5_1_len = G_N_ELEMENTS(pc_compat_5_1);
 
 GlobalProperty pc_compat_5_0[] = {
diff --git a/hw/isa/lpc_ich9.c b/hw/isa/lpc_ich9.c
index cd6e169d47..19f32bed3e 100644
--- a/hw/isa/lpc_ich9.c
+++ b/hw/isa/lpc_ich9.c
@@ -373,6 +373,15 @@ static void smi_features_ok_callback(void *opaque)
         /* guest requests invalid features, leave @features_ok at zero */
         return;
     }
+    if (!(guest_features & BIT_ULL(ICH9_LPC_SMI_F_BROADCAST_BIT)) &&
+        guest_features & (BIT_ULL(ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT) |
+                          BIT_ULL(ICH9_LPC_SMI_F_CPU_HOT_UNPLUG_BIT))) {
+        /*
+         * cpu hot-[un]plug with SMI requires SMI broadcast,
+         * leave @features_ok at zero
+         */
+        return;
+    }
 
     /* valid feature subset requested, lock it down, report success */
     lpc->smi_negotiated_features = guest_features;
@@ -747,6 +756,10 @@ static Property ich9_lpc_properties[] = {
     DEFINE_PROP_BOOL("noreboot", ICH9LPCState, pin_strap.spkr_hi, true),
     DEFINE_PROP_BIT64("x-smi-broadcast", ICH9LPCState, smi_host_features,
                       ICH9_LPC_SMI_F_BROADCAST_BIT, true),
+    DEFINE_PROP_BIT64("x-smi-cpu-hotplug", ICH9LPCState, smi_host_features,
+                      ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT, true),
+    DEFINE_PROP_BIT64("x-smi-cpu-hotunplug", ICH9LPCState, smi_host_features,
+                      ICH9_LPC_SMI_F_CPU_HOT_UNPLUG_BIT, false),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
2.27.0




* [PATCH v5 02/10] x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is in use
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 03/10] x86: cpuhp: refuse cpu hot-unplug request earlier if not supported Igor Mammedov
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

There were reports of guest crashes on CPU hotplug, when using the q35 machine
type and OVMF with SMM, due to the hotplugged CPU trying to process an SMI at
the default SMI handler location without it having been relocated by firmware
first.

Fix it by refusing hotplug if firmware hasn't negotiated CPU hotplug with
SMI support while SMI broadcast is in use.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
---
v1:
   fix typos and use suggested wording in commit and error msg
   s/secure boot/smm/; s/hotplug SMI/hotplug with SMI/
      (Laszlo Ersek <lersek@redhat.com>)
---
 hw/acpi/ich9.c | 12 +++++++++++-
 hw/i386/pc.c   | 11 +++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
index 6a19070cec..0acc9a3107 100644
--- a/hw/acpi/ich9.c
+++ b/hw/acpi/ich9.c
@@ -408,10 +408,20 @@ void ich9_pm_device_pre_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev,
     ICH9LPCState *lpc = ICH9_LPC_DEVICE(hotplug_dev);
 
     if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) &&
-        !lpc->pm.acpi_memory_hotplug.is_enabled)
+        !lpc->pm.acpi_memory_hotplug.is_enabled) {
         error_setg(errp,
                    "memory hotplug is not enabled: %s.memory-hotplug-support "
                    "is not set", object_get_typename(OBJECT(lpc)));
+    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
+        uint64_t negotiated = lpc->smi_negotiated_features;
+
+        if (negotiated & BIT_ULL(ICH9_LPC_SMI_F_BROADCAST_BIT) &&
+            !(negotiated & BIT_ULL(ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT))) {
+            error_setg(errp, "cpu hotplug with SMI wasn't enabled by firmware");
+            error_append_hint(errp, "update machine type to newer than 5.1 "
+                "and firmware that supports CPU hotplug with SMM");
+        }
+    }
 }
 
 void ich9_pm_device_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev,
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index f32e66c1b2..1d4955485b 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1500,6 +1500,17 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         return;
     }
 
+    if (pcms->acpi_dev) {
+        Error *local_err = NULL;
+
+        hotplug_handler_pre_plug(HOTPLUG_HANDLER(pcms->acpi_dev), dev,
+                                 &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
+            return;
+        }
+    }
+
     init_topo_info(&topo_info, x86ms);
 
     env->nr_dies = x86ms->smp_dies;
-- 
2.27.0




* [PATCH v5 03/10] x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 02/10] x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is in use Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 04/10] acpi: add aml_land() and aml_break() primitives Igor Mammedov
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

CPU hot-unplug with SMM requires firmware participation to prevent a
guest crash (i.e. a CPU can be removed only after the OS _and_ firmware
have been prepared for the action).
Previous patches introduced the ICH9_LPC_SMI_F_CPU_HOT_UNPLUG_BIT
feature bit, which is advertised by firmware when it has support
for CPU hot-unplug. Use it to check whether the guest is able to handle
unplug, and make device_del fail gracefully if the hot-unplug feature
hasn't been negotiated.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
v2:
 - fix typo in commit message
 - drop 5.1 version from hint message (Laszlo)
---
 hw/acpi/ich9.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
index 0acc9a3107..95cb0f935b 100644
--- a/hw/acpi/ich9.c
+++ b/hw/acpi/ich9.c
@@ -460,6 +460,18 @@ void ich9_pm_device_unplug_request_cb(HotplugHandler *hotplug_dev,
                                       errp);
     } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU) &&
                !lpc->pm.cpu_hotplug_legacy) {
+        uint64_t negotiated = lpc->smi_negotiated_features;
+
+        if (negotiated & BIT_ULL(ICH9_LPC_SMI_F_BROADCAST_BIT) &&
+            !(negotiated & BIT_ULL(ICH9_LPC_SMI_F_CPU_HOT_UNPLUG_BIT))) {
+            error_setg(errp, "cpu hot-unplug with SMI wasn't enabled "
+                             "by firmware");
+            error_append_hint(errp, "update machine type to a version having "
+                                    "x-smi-cpu-hotunplug=on and firmware that "
+                                    "supports CPU hot-unplug with SMM");
+            return;
+        }
+
         acpi_cpu_unplug_request_cb(hotplug_dev, &lpc->pm.cpuhp_state,
                                    dev, errp);
     } else {
-- 
2.27.0




* [PATCH v5 04/10] acpi: add aml_land() and aml_break() primitives
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (2 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 03/10] x86: cpuhp: refuse cpu hot-unplug request earlier if not supported Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 05/10] tests: acpi: mark to be changed tables in bios-tables-test-allowed-diff Igor Mammedov
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel
  Cc: Philippe Mathieu-Daudé, boris.ostrovsky, lersek, aaron.young

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
---
v5:
 - fix comment to match spec (Laszlo Ersek <lersek@redhat.com>)
 - reorder aml_lor() and aml_land() in header (Laszlo Ersek <lersek@redhat.com>)
---
 include/hw/acpi/aml-build.h |  2 ++
 hw/acpi/aml-build.c         | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index d27da03d64..fe0055fffb 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -290,6 +290,7 @@ Aml *aml_to_buffer(Aml *src, Aml *dst);
 Aml *aml_store(Aml *val, Aml *target);
 Aml *aml_and(Aml *arg1, Aml *arg2, Aml *dst);
 Aml *aml_or(Aml *arg1, Aml *arg2, Aml *dst);
+Aml *aml_land(Aml *arg1, Aml *arg2);
 Aml *aml_lor(Aml *arg1, Aml *arg2);
 Aml *aml_shiftleft(Aml *arg1, Aml *count);
 Aml *aml_shiftright(Aml *arg1, Aml *count, Aml *dst);
@@ -300,6 +301,7 @@ Aml *aml_increment(Aml *arg);
 Aml *aml_decrement(Aml *arg);
 Aml *aml_index(Aml *arg1, Aml *idx);
 Aml *aml_notify(Aml *arg1, Aml *arg2);
+Aml *aml_break(void);
 Aml *aml_call0(const char *method);
 Aml *aml_call1(const char *method, Aml *arg1);
 Aml *aml_call2(const char *method, Aml *arg1, Aml *arg2);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index f6fbc9b95d..3792ba96ce 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -556,6 +556,15 @@ Aml *aml_or(Aml *arg1, Aml *arg2, Aml *dst)
     return build_opcode_2arg_dst(0x7D /* OrOp */, arg1, arg2, dst);
 }
 
+/* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefLAnd */
+Aml *aml_land(Aml *arg1, Aml *arg2)
+{
+    Aml *var = aml_opcode(0x90 /* LAndOp */);
+    aml_append(var, arg1);
+    aml_append(var, arg2);
+    return var;
+}
+
 /* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefLOr */
 Aml *aml_lor(Aml *arg1, Aml *arg2)
 {
@@ -629,6 +638,13 @@ Aml *aml_notify(Aml *arg1, Aml *arg2)
     return var;
 }
 
+/* ACPI 1.0b: 16.2.5.3 Type 1 Opcodes Encoding: DefBreak */
+Aml *aml_break(void)
+{
+    Aml *var = aml_opcode(0xa5 /* BreakOp */);
+    return var;
+}
+
 /* helper to call method without argument */
 Aml *aml_call0(const char *method)
 {
-- 
2.27.0




* [PATCH v5 05/10] tests: acpi: mark to be changed tables in bios-tables-test-allowed-diff
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (3 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 04/10] acpi: add aml_land() and aml_break() primitives Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property Igor Mammedov
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

... to let tests pass until binary blobs are updated with new AML

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 tests/qtest/bios-tables-test-allowed-diff.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index dfb8523c8b..dba32d5613 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1 +1,20 @@
 /* List of comma-separated changed AML files to ignore */
+"tests/data/acpi/pc/DSDT",
+"tests/data/acpi/q35/DSDT",
+"tests/data/acpi/q35/DSDT.tis",
+"tests/data/acpi/q35/DSDT.bridge",
+"tests/data/acpi/q35/DSDT.mmio64",
+"tests/data/acpi/q35/DSDT.ipmibt",
+"tests/data/acpi/q35/DSDT.cphp",
+"tests/data/acpi/q35/DSDT.memhp",
+"tests/data/acpi/q35/DSDT.numamem",
+"tests/data/acpi/q35/DSDT.dimmpxm",
+"tests/data/acpi/q35/DSDT.acpihmat",
+"tests/data/acpi/pc/DSDT.bridge",
+"tests/data/acpi/pc/DSDT.ipmikcs",
+"tests/data/acpi/pc/DSDT.cphp",
+"tests/data/acpi/pc/DSDT.memhp",
+"tests/data/acpi/pc/DSDT.numamem",
+"tests/data/acpi/pc/DSDT.dimmpxm",
+"tests/data/acpi/pc/DSDT.acpihmat",
+"tests/data/acpi/pc/DSDT.acpihmat",
-- 
2.27.0




* [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (4 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 05/10] tests: acpi: mark to be changed tables in bios-tables-test-allowed-diff Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 12:49   ` Laszlo Ersek
  2020-09-07 11:23 ` [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp Igor Mammedov
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

Expose the "smi_negotiated_features" field of ICH9LPCState as
a QOM property.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 include/hw/i386/ich9.h | 2 ++
 hw/isa/lpc_ich9.c      | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/include/hw/i386/ich9.h b/include/hw/i386/ich9.h
index d1bb3f7bf0..0f43ef2481 100644
--- a/include/hw/i386/ich9.h
+++ b/include/hw/i386/ich9.h
@@ -245,6 +245,8 @@ typedef struct ICH9LPCState {
 #define ICH9_SMB_HST_D1                         0x06
 #define ICH9_SMB_HOST_BLOCK_DB                  0x07
 
+#define ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP "x-smi-negotiated-features"
+
 /* bit positions used in fw_cfg SMI feature negotiation */
 #define ICH9_LPC_SMI_F_BROADCAST_BIT            0
 #define ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT          1
diff --git a/hw/isa/lpc_ich9.c b/hw/isa/lpc_ich9.c
index 19f32bed3e..8124d20338 100644
--- a/hw/isa/lpc_ich9.c
+++ b/hw/isa/lpc_ich9.c
@@ -647,6 +647,9 @@ static void ich9_lpc_initfn(Object *obj)
                                   &acpi_enable_cmd, OBJ_PROP_FLAG_READ);
     object_property_add_uint8_ptr(OBJECT(lpc), ACPI_PM_PROP_ACPI_DISABLE_CMD,
                                   &acpi_disable_cmd, OBJ_PROP_FLAG_READ);
+    object_property_add_uint64_ptr(obj, ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP,
+                                   &lpc->smi_negotiated_features,
+                                   OBJ_PROP_FLAG_READ);
 
     ich9_pm_add_properties(obj, &lpc->pm);
 }
-- 
2.27.0




* [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (5 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 12:49   ` Laszlo Ersek
  2020-09-07 11:23 ` [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device Igor Mammedov
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

Translate the "CPU hotplug with SMI" feature bit, from the property
added in the last patch, to a dedicated boolean in AcpiPmInfo.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 hw/i386/acpi-build.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 7a5a8b3521..e61c17a484 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -95,6 +95,7 @@ typedef struct AcpiPmInfo {
     bool s3_disabled;
     bool s4_disabled;
     bool pcihp_bridge_en;
+    bool smi_on_cpuhp;
     uint8_t s4_val;
     AcpiFadtData fadt;
     uint16_t cpu_hp_io_base;
@@ -194,6 +195,7 @@ static void acpi_get_pm_info(MachineState *machine, AcpiPmInfo *pm)
     pm->cpu_hp_io_base = 0;
     pm->pcihp_io_base = 0;
     pm->pcihp_io_len = 0;
+    pm->smi_on_cpuhp = false;
 
     assert(obj);
     init_common_fadt_data(machine, obj, &pm->fadt);
@@ -207,12 +209,16 @@ static void acpi_get_pm_info(MachineState *machine, AcpiPmInfo *pm)
             object_property_get_uint(obj, ACPI_PCIHP_IO_LEN_PROP, NULL);
     }
     if (lpc) {
+        uint64_t smi_features = object_property_get_uint(lpc,
+            ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP, NULL);
         struct AcpiGenericAddress r = { .space_id = AML_AS_SYSTEM_IO,
             .bit_width = 8, .address = ICH9_RST_CNT_IOPORT };
         pm->fadt.reset_reg = r;
         pm->fadt.reset_val = 0xf;
         pm->fadt.flags |= 1 << ACPI_FADT_F_RESET_REG_SUP;
         pm->cpu_hp_io_base = ICH9_CPU_HOTPLUG_IO_BASE;
+        pm->smi_on_cpuhp =
+            !!(smi_features & BIT_ULL(ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT));
     }
 
     /* The above need not be conditional on machine type because the reset port
-- 
2.27.0




* [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (6 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 12:49   ` Laszlo Ersek
  2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

When CPU hotplug with SMI has been negotiated, describe the SMI
register block in the DSDT. Pass the ACPI name of the SMI control
register to build_cpus_aml(), so that CPU_SCAN_METHOD can access the
register in the next patch.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 include/hw/acpi/cpu.h |  1 +
 hw/i386/acpi-build.c  | 29 ++++++++++++++++++++++++++++-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
index 62f0278ba2..0eeedaa491 100644
--- a/include/hw/acpi/cpu.h
+++ b/include/hw/acpi/cpu.h
@@ -50,6 +50,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
 typedef struct CPUHotplugFeatures {
     bool acpi_1_compatible;
     bool has_legacy_cphp;
+    const char *smi_path;
 } CPUHotplugFeatures;
 
 void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index e61c17a484..7a33ef193e 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1521,6 +1521,32 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         aml_append(dev, aml_name_decl("_UID", aml_int(0)));
         aml_append(dev, build_q35_osc_method());
         aml_append(sb_scope, dev);
+
+        if (pm->smi_on_cpuhp) {
+            /* reserve SMI block resources, IO ports 0xB2, 0xB3 */
+            dev = aml_device("PCI0.SMI0");
+            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A06")));
+            aml_append(dev, aml_name_decl("_UID", aml_string("SMI resources")));
+            crs = aml_resource_template();
+            aml_append(crs,
+                aml_io(
+                       AML_DECODE16,
+                       ACPI_PORT_SMI_CMD,
+                       ACPI_PORT_SMI_CMD,
+                       1,
+                       2)
+            );
+            aml_append(dev, aml_name_decl("_CRS", crs));
+            aml_append(dev, aml_operation_region("SMIR", AML_SYSTEM_IO,
+                aml_int(ACPI_PORT_SMI_CMD), 2));
+            field = aml_field("SMIR", AML_BYTE_ACC, AML_NOLOCK,
+                              AML_WRITE_AS_ZEROS);
+            aml_append(field, aml_named_field("SMIC", 8));
+            aml_append(field, aml_reserved_field(8));
+            aml_append(dev, field);
+            aml_append(sb_scope, dev);
+        }
+
         aml_append(dsdt, sb_scope);
 
         build_hpet_aml(dsdt);
@@ -1536,7 +1562,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
         build_legacy_cpu_hotplug_aml(dsdt, machine, pm->cpu_hp_io_base);
     } else {
         CPUHotplugFeatures opts = {
-            .acpi_1_compatible = true, .has_legacy_cphp = true
+            .acpi_1_compatible = true, .has_legacy_cphp = true,
+            .smi_path = pm->smi_on_cpuhp ? "\\_SB.PCI0.SMI0.SMIC" : NULL,
         };
         build_cpus_aml(dsdt, machine, opts, pm->cpu_hp_io_base,
                        "\\_SB.PCI0", "\\_GPE._E02");
-- 
2.27.0




* [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (7 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-07 15:17   ` Laszlo Ersek
  2020-09-08 14:24   ` [PATCH v5 9/10] fixup! " Igor Mammedov
  2020-09-07 11:23 ` [PATCH v5 10/10] tests: acpi: update acpi blobs with new AML Igor Mammedov
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

In case firmware has negotiated the CPU hotplug SMI feature, generate
AML to describe the SMI IO port region and send an SMI to firmware
on each CPU hotplug SCI, in case new CPUs were hotplugged.

Since new CPUs can be hotplugged while CPU_SCAN_METHOD is running,
we can't send the SMI before the new CPUs are fetched from QEMU, as
that could cause a Notify to be sent to a CPU that firmware hasn't
seen yet. So fetch the new CPUs into a local cache first, then send
the SMI, and after that send Notify events to the cached CPUs. This
ensures that Notify is sent only to CPUs which were processed by
firmware first.
Any CPUs hotplugged after the caching step will be processed by the
next CPU_SCAN_METHOD, when the pending SCI is handled.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
v5:
 - fix hotplug on Windows when there are more than 256 possible CPUs
   (Windows isn't able to handle a VarPackage with over 255 elements,
    so process CPUs in batches)
 - (Laszlo Ersek <lersek@redhat.com>)
   - fix off-by-one in package length
   - fix not selecting CPU before clearing insert event
   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero))
   - split off in separate patches:
     - making smi_negotiated_features a property
     - introduction of AcpiPmInfo.smi_on_cpuhp
     - introduction of PCI0.SMI0 ACPI device
v2:
 - clear insert event after firmware has returned
   control from SMI. (Laszlo Ersek <lersek@redhat.com>)
v1:
 - make sure that Notify is sent only to CPUs seen by FW
 - (Laszlo Ersek <lersek@redhat.com>)
    - use macro instead of x-smi-negotiated-features
    - style fixes
    - reserve whole SMI block (0xB2, 0xB3)
v0:
 - s/aml_string/aml_eisaid/
   /fixes Windows BSOD, on nonsense value/ (Laszlo Ersek <lersek@redhat.com>)
 - put SMI device under PCI0 like the rest of hotplug IO port
 - do not generate SMI AML if CPU hotplug SMI feature hasn't been negotiated
---
 hw/acpi/cpu.c | 156 +++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 129 insertions(+), 27 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 3d6a500fb7..1283972001 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -14,6 +14,8 @@
 #define ACPI_CPU_CMD_DATA_OFFSET_RW 8
 #define ACPI_CPU_CMD_DATA2_OFFSET_R 0
 
+#define OVMF_CPUHP_SMI_CMD 4
+
 enum {
     CPHP_GET_NEXT_CPU_WITH_EVENT_CMD = 0,
     CPHP_OST_EVENT_CMD = 1,
@@ -321,6 +323,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
 #define CPU_NOTIFY_METHOD "CTFY"
 #define CPU_EJECT_METHOD  "CEJ0"
 #define CPU_OST_METHOD    "COST"
+#define CPU_ADDED_LIST    "CNEW"
 
 #define CPU_ENABLED       "CPEN"
 #define CPU_SELECTOR      "CSEL"
@@ -465,42 +468,141 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 
         method = aml_method(CPU_SCAN_METHOD, 0, AML_SERIALIZED);
         {
+            const uint8_t max_cpus_per_pass = 255;
             Aml *else_ctx;
-            Aml *while_ctx;
+            Aml *while_ctx, *while_ctx2;
             Aml *has_event = aml_local(0);
             Aml *dev_chk = aml_int(1);
             Aml *eject_req = aml_int(3);
             Aml *next_cpu_cmd = aml_int(CPHP_GET_NEXT_CPU_WITH_EVENT_CMD);
+            Aml *num_added_cpus = aml_local(1);
+            Aml *cpu_idx = aml_local(2);
+            Aml *uid = aml_local(3);
+            Aml *has_job = aml_local(4);
+            Aml *new_cpus = aml_name(CPU_ADDED_LIST);
 
             aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
-            aml_append(method, aml_store(one, has_event));
-            while_ctx = aml_while(aml_equal(has_event, one));
+
+            /*
+             * Windows versions newer than XP (including Windows 10/Windows
+             * Server 2019) do support VarPackageOp, but it is crippled to
+             * hold the same number of elements as the old PackageOp.
+             * For compatibility with Windows XP (so it won't crash) use the
+             * ACPI 1.0 PackageOp, which can hold at most 255 elements.
+             *
+             * Use a named package, as old Windows versions don't support
+             * packages in local variables.
+             */
+            aml_append(method, aml_name_decl(CPU_ADDED_LIST,
+                                             aml_package(max_cpus_per_pass)));
+
+            aml_append(method, aml_store(zero, uid));
+            aml_append(method, aml_store(one, has_job));
+            /*
+             * CPU_ADDED_LIST can hold a limited number of elements; the
+             * outer loop processes CPUs in batches, which lets us handle
+             * more CPUs than CPU_ADDED_LIST can hold.
+             */
+            while_ctx2 = aml_while(aml_equal(has_job, one));
             {
-                 /* clear loop exit condition, ins_evt/rm_evt checks
-                  * will set it to 1 while next_cpu_cmd returns a CPU
-                  * with events */
-                 aml_append(while_ctx, aml_store(zero, has_event));
-                 aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
-                 ifctx = aml_if(aml_equal(ins_evt, one));
-                 {
-                     aml_append(ifctx,
-                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, dev_chk));
-                     aml_append(ifctx, aml_store(one, ins_evt));
-                     aml_append(ifctx, aml_store(one, has_event));
-                 }
-                 aml_append(while_ctx, ifctx);
-                 else_ctx = aml_else();
-                 ifctx = aml_if(aml_equal(rm_evt, one));
-                 {
-                     aml_append(ifctx,
-                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, eject_req));
-                     aml_append(ifctx, aml_store(one, rm_evt));
-                     aml_append(ifctx, aml_store(one, has_event));
-                 }
-                 aml_append(else_ctx, ifctx);
-                 aml_append(while_ctx, else_ctx);
+                aml_append(while_ctx2, aml_store(zero, has_job));
+
+                aml_append(while_ctx2, aml_store(one, has_event));
+                aml_append(while_ctx2, aml_store(zero, num_added_cpus));
+
+                /*
+                 * Scan CPUs until there are no more CPUs with events or
+                 * CPU_ADDED_LIST capacity is exhausted
+                 */
+                while_ctx = aml_while(aml_land(aml_equal(has_event, one),
+                                      aml_lless(uid, aml_int(arch_ids->len))));
+                {
+                     /*
+                      * Clear the loop exit condition; the ins_evt/rm_evt
+                      * checks will set it back to 1 while next_cpu_cmd
+                      * returns a CPU with events
+                      */
+                     aml_append(while_ctx, aml_store(zero, has_event));
+
+                     aml_append(while_ctx, aml_store(uid, cpu_selector));
+                     aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
+
+                     /*
+                      * Wrap-around case: the scan is complete, exit the loop.
+                      * This happens because events are not cleared in the
+                      * scan loop, so next_cpu_cmd keeps finding already
+                      * processed CPUs
+                      */
+                     ifctx = aml_if(aml_lless(cpu_data, uid));
+                     {
+                         aml_append(ifctx, aml_break());
+                     }
+                     aml_append(while_ctx, ifctx);
+
+                     /*
+                      * if CPU_ADDED_LIST is full, exit inner loop and process
+                      * collected CPUs
+                      */
+                     ifctx = aml_if(
+                         aml_equal(num_added_cpus, aml_int(max_cpus_per_pass)));
+                     {
+                         aml_append(ifctx, aml_store(one, has_job));
+                         aml_append(ifctx, aml_break());
+                     }
+                     aml_append(while_ctx, ifctx);
+
+                     aml_append(while_ctx, aml_store(cpu_data, uid));
+                     ifctx = aml_if(aml_equal(ins_evt, one));
+                     {
+                         /* cache added CPUs to Notify/Wakeup later */
+                         aml_append(ifctx, aml_store(uid,
+                             aml_index(new_cpus, num_added_cpus)));
+                         aml_append(ifctx, aml_increment(num_added_cpus));
+                         aml_append(ifctx, aml_store(one, has_event));
+                     }
+                     aml_append(while_ctx, ifctx);
+                     else_ctx = aml_else();
+                     ifctx = aml_if(aml_equal(rm_evt, one));
+                     {
+                         aml_append(ifctx,
+                             aml_call2(CPU_NOTIFY_METHOD, uid, eject_req));
+                         aml_append(ifctx, aml_store(one, rm_evt));
+                         aml_append(ifctx, aml_store(one, has_event));
+                     }
+                     aml_append(else_ctx, ifctx);
+                     aml_append(while_ctx, else_ctx);
+                     aml_append(while_ctx, aml_increment(uid));
+                }
+                aml_append(while_ctx2, while_ctx);
+
+                /*
+                 * If the firmware negotiated ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT,
+                 * make an upcall to the firmware so it can pull in new CPUs
+                 * before the OS is notified and wakes them up
+                 */
+                if (opts.smi_path) {
+                    ifctx = aml_if(aml_lgreater(num_added_cpus, zero));
+                    {
+                        aml_append(ifctx, aml_store(aml_int(OVMF_CPUHP_SMI_CMD),
+                            aml_name("%s", opts.smi_path)));
+                    }
+                    aml_append(while_ctx2, ifctx);
+                }
+
+                /* Notify OSPM about new CPUs and clear insert events */
+                aml_append(while_ctx2, aml_store(zero, cpu_idx));
+                while_ctx = aml_while(aml_lless(cpu_idx, num_added_cpus));
+                {
+                    aml_append(while_ctx,
+                        aml_store(aml_derefof(aml_index(new_cpus, cpu_idx)),
+                                  uid));
+                    aml_append(while_ctx,
+                        aml_call2(CPU_NOTIFY_METHOD, uid, dev_chk));
+                    aml_append(while_ctx, aml_store(uid, aml_debug()));
+                    aml_append(while_ctx, aml_store(uid, cpu_selector));
+                    aml_append(while_ctx, aml_store(one, ins_evt));
+                    aml_append(while_ctx, aml_increment(cpu_idx));
+                }
+                aml_append(while_ctx2, while_ctx);
             }
-            aml_append(method, while_ctx);
+            aml_append(method, while_ctx2);
             aml_append(method, aml_release(ctrl_lock));
         }
         aml_append(cpus_dev, method);
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v5 10/10] tests: acpi: update acpi blobs with new AML
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (8 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
@ 2020-09-07 11:23 ` Igor Mammedov
  2020-09-08 14:29 ` [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
  2020-09-21 11:46 ` Igor Mammedov
  11 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-07 11:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young

Here is the diff against tests/data/acpi/q35/DSDT
for the currently shipped OVMF binary.
(Once the firmware blob is updated, it will negotiate the CPU hotplug
feature, which will add an extra hunk sending the SMI, and the Q35 tests
will need to be updated.) Otherwise, the diff shows the new CPU hotplug
AML that is shared between the q35 and pc machines.

             Method (CSCN, 0, Serialized)
             {
                 Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
-                Local0 = One
-                While ((Local0 == One))
-                {
-                    Local0 = Zero
-                    \_SB.PCI0.PRES.CCMD = Zero
-                    If ((\_SB.PCI0.PRES.CINS == One))
+                Name (CNEW, Package (0xFF){})
+                Local3 = Zero
+                Local4 = One
+                While ((Local4 == One))
+                {
+                    Local4 = Zero
+                    Local0 = One
+                    Local1 = Zero
+                    While (((Local0 == One) && (Local3 < One)))
                     {
-                        CTFY (\_SB.PCI0.PRES.CDAT, One)
-                        \_SB.PCI0.PRES.CINS = One
-                        Local0 = One
+                        Local0 = Zero
+                        \_SB.PCI0.PRES.CSEL = Local3
+                        \_SB.PCI0.PRES.CCMD = Zero
+                        If ((\_SB.PCI0.PRES.CDAT < Local3))
+                        {
+                            Break
+                        }
+
+                        If ((Local1 == 0xFF))
+                        {
+                            Local4 = One
+                            Break
+                        }
+
+                        Local3 = \_SB.PCI0.PRES.CDAT
+                        If ((\_SB.PCI0.PRES.CINS == One))
+                        {
+                            CNEW [Local1] = Local3
+                            Local1++
+                            Local0 = One
+                        }
+                        ElseIf ((\_SB.PCI0.PRES.CRMV == One))
+                        {
+                            CTFY (Local3, 0x03)
+                            \_SB.PCI0.PRES.CRMV = One
+                            Local0 = One
+                        }
+
+                        Local3++
                     }
-                    ElseIf ((\_SB.PCI0.PRES.CRMV == One))
+
+                    Local2 = Zero
+                    While ((Local2 < Local1))
                     {
-                        CTFY (\_SB.PCI0.PRES.CDAT, 0x03)
-                        \_SB.PCI0.PRES.CRMV = One
-                        Local0 = One
+                        Local3 = DerefOf (CNEW [Local2])
+                        CTFY (Local3, One)
+                        Debug = Local3
+                        \_SB.PCI0.PRES.CSEL = Local3
+                        \_SB.PCI0.PRES.CINS = One
+                        Local2++
                     }
                 }

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 tests/qtest/bios-tables-test-allowed-diff.h |  19 -------------------
 tests/data/acpi/pc/DSDT                     | Bin 4934 -> 5060 bytes
 tests/data/acpi/pc/DSDT.acpihmat            | Bin 6258 -> 6385 bytes
 tests/data/acpi/pc/DSDT.bridge              | Bin 6793 -> 6919 bytes
 tests/data/acpi/pc/DSDT.cphp                | Bin 5397 -> 5524 bytes
 tests/data/acpi/pc/DSDT.dimmpxm             | Bin 6587 -> 6714 bytes
 tests/data/acpi/pc/DSDT.ipmikcs             | Bin 5006 -> 5132 bytes
 tests/data/acpi/pc/DSDT.memhp               | Bin 6293 -> 6419 bytes
 tests/data/acpi/pc/DSDT.numamem             | Bin 4940 -> 5066 bytes
 tests/data/acpi/q35/DSDT                    | Bin 7678 -> 7804 bytes
 tests/data/acpi/q35/DSDT.acpihmat           | Bin 9002 -> 9129 bytes
 tests/data/acpi/q35/DSDT.bridge             | Bin 7695 -> 7821 bytes
 tests/data/acpi/q35/DSDT.cphp               | Bin 8141 -> 8268 bytes
 tests/data/acpi/q35/DSDT.dimmpxm            | Bin 9331 -> 9458 bytes
 tests/data/acpi/q35/DSDT.ipmibt             | Bin 7753 -> 7879 bytes
 tests/data/acpi/q35/DSDT.memhp              | Bin 9037 -> 9163 bytes
 tests/data/acpi/q35/DSDT.mmio64             | Bin 8808 -> 8934 bytes
 tests/data/acpi/q35/DSDT.numamem            | Bin 7684 -> 7810 bytes
 tests/data/acpi/q35/DSDT.tis                | Bin 8283 -> 8409 bytes
 19 files changed, 19 deletions(-)

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index dba32d5613..dfb8523c8b 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1,20 +1 @@
 /* List of comma-separated changed AML files to ignore */
-"tests/data/acpi/pc/DSDT",
-"tests/data/acpi/q35/DSDT",
-"tests/data/acpi/q35/DSDT.tis",
-"tests/data/acpi/q35/DSDT.bridge",
-"tests/data/acpi/q35/DSDT.mmio64",
-"tests/data/acpi/q35/DSDT.ipmibt",
-"tests/data/acpi/q35/DSDT.cphp",
-"tests/data/acpi/q35/DSDT.memhp",
-"tests/data/acpi/q35/DSDT.numamem",
-"tests/data/acpi/q35/DSDT.dimmpxm",
-"tests/data/acpi/q35/DSDT.acpihmat",
-"tests/data/acpi/pc/DSDT.bridge",
-"tests/data/acpi/pc/DSDT.ipmikcs",
-"tests/data/acpi/pc/DSDT.cphp",
-"tests/data/acpi/pc/DSDT.memhp",
-"tests/data/acpi/pc/DSDT.numamem",
-"tests/data/acpi/pc/DSDT.dimmpxm",
-"tests/data/acpi/pc/DSDT.acpihmat",
-"tests/data/acpi/pc/DSDT.acpihmat",
diff --git a/tests/data/acpi/pc/DSDT b/tests/data/acpi/pc/DSDT
index b121bb5bc124be522e233516efb17cdc94de5a75..4ca46e5a2bdb1dfab79dd8630aeeb9a386d8b30e 100644
GIT binary patch
delta 323
zcmX@6c0`@aCD<k8h%f^K<C%?ITiBSq>?R*zTg~KaJUN&BFO!?t<Qxu}dJbnl*Ki@G
z{{;-m1&k?+T!bd4FcvVR6fh<fFeEPW;h8WwfpKawP$;1wIYyr)KG-Qfz}eFvAjmb?
zIoQ>wfB{|5+1F)(_*8UJ7srs~r3*MGCvyD<TC=nOCSMOT3#4Fy%4C>0OxV*en6V(a
z1LVj=hSEf!vlf~!FvU<7<QvB59O4$4%*9;5h;9Z@5~!~<xqu;Qkt@s8q{M<|uz^Vo
e$<Ev$O+Xu>4GWSd>u{*D0xc{}+HA|o$prw{_+k$L

delta 193
zcmX@2eoT$aCD<jzO_+g!@xVr|Eo@9aW|I%Ft!DDqo}A16m&t=~at?<KUjbvnB1i7Y
z35*2{2?Y$34Y`E<7Pzxaj?rg{4|a+VaP~9^2yzW}_Vf#8bPjQgL>G5)3}Gx_L>C3B
l2AaFjn`MCq%VZl)NnLaWKy6&inEHZz!+`oW@8;y>0szAaHLw5x

diff --git a/tests/data/acpi/pc/DSDT.acpihmat b/tests/data/acpi/pc/DSDT.acpihmat
index b0dbb943f4cea83a5adde23aefa54f1678c560a1..35a74bce8cc152ecb615cb38c4b7f63c7c7d3ab3 100644
GIT binary patch
delta 304
zcmexl@X?UVCD<k8qXYv3W9LS$Eo@AlE|U+it!DDE+?>nK!N}<@<{a$o#}TbOnUO=-
z>HmKYXFu0)A*TNY49NwIDT`c%CZ{kKFr*YPCKNCvF7oA>Fgby7YBCp70YgGTa*RGp
ze6Uk|fU~DTK#*&&bFiz=<oO(8_2N^}Wn3IXl9w*voSew@A872-0&MaNR3@Vfd-?@4
z79@9o+?2>rnh12oLh}Wt7|Mct!x)`I+#-{?m<t$V^e3BeiYS&Q7ce9(a%GvClvvOV
kHX(^28LS9sVzgmF^5i;BRaT${rAd?HIm9-{af<K*0E&TJnE(I)

delta 209
zcmexp_{o6FCD<jTNP>ZZv2P>S7B(g?o5=^*Rx|k;Zq8-rVC3}Ra}IX)<A_$C%*Y|E
z_WyqYW5Oaw?#T&^1q=xV3^DpF@xe~<0nVNV0YR?8&d$Cr3*1>ID{_iTIET1JqD#9t
vhA>Xvz$xkvG-aVT%K{IU$>>Uge8U)VXysxqV8o&vXx8L-4zbO5IYsyZgN-`x

diff --git a/tests/data/acpi/pc/DSDT.bridge b/tests/data/acpi/pc/DSDT.bridge
index 7b6c7a47875fc73b03fbe88807890f3867ddba1a..803d7a8e839ea8b7ac33c4490459ddaede584269 100644
GIT binary patch
delta 323
zcmeA)Z8zg`33dr#mu6sK6xqnNg^kI}Zt?-P)l9y|lXKbsGP#LO&f$=$=WzCO4Hshi
zU%-%Dz?ibgMQCygV*x`-0b@b|L*gPIo(Yo^7^fx!g%S#qWAs_#gPr07oIMQ!f?R{0
zgI#?J7|;cseO(raPem7XaSTabx`1<XBG-SQHA@R%^7SyYKnfP9OooZWggyO&84Hp-
zK#ojgC`|-9YoYl9Qw(K6zF~~cA#Rb$T+9WG=w<*Vf%;053mB3Xxw1@6N-Ssw8<@n9
d?92_)1hgUAupoJ|4u>i$(8AKB&9<CAf&i#*V2}U+

delta 193
zcmZoS>onzZ33dtTlwx3D<lM-$g^kI_Z1MrN)lB}{lXKbsGI{V#&f$>ZD_~4m<j6fa
zfw6!gp@3nsA(yb<0(X|lG5Rd=!A|i3&YlJVL9W5ho_@iM&LM7*=;AJpA&do#=%PT?
kKyw#*vn=pnnQX%;sf(@vsEvymQ(us87*OBl-JCvx05hyLF#rGn

diff --git a/tests/data/acpi/pc/DSDT.cphp b/tests/data/acpi/pc/DSDT.cphp
index c0e8aa5b32d84f39e5d6c9a5024505f818707c12..8bab2f506409f2b025a63d8b91c7bfdaa931e626 100644
GIT binary patch
delta 251
zcmbQLHAS1tCD<ioiYNmEqrpb5Eo@9)(UT9bt!DBK*qqC5$;j?5<{a$o$1&NTLtK}`
z+0Qjxi0OX;LvjIQ$|6^x$tjEl3@HVS2?Y#^i+p(|Oio~&n#{#kz>rXoJo!Jn=;TWr
zB23~_Co^&i^CmA{z&SaQ>pxKE(gKi(OY#Dh$uasY@xe~<0nVNV0YR?8&Yph3j0MRZ
zAd3<iN)v%bFEn3ZilHpXH;mCa#4R#;ay+M$Vrg;#L((EwmZ?dJ1<hc+NeszgX`orr
ZhCsKh=Tv0{8eN(+c>;&n=6=pLZUB`uOrrn*

delta 220
zcmbQDJyna#CD<iIRFr{%(Q+f#7B(jD;K>KrRx|l|Y|dr3WMuc?a}IX)<CyHvArZis
zu*i{nasp!kLqY*Vj6O?zuv2`1v!_8okZZ8Bv#-kncb3WM!k&J?jLso$k?7(sjv<Vb
sA99NN7ceF)^k!M$fvGjfHw>3nF6IJ8bQ6Kffo4sfz#+DokE@Lv0MS-FWdHyG

diff --git a/tests/data/acpi/pc/DSDT.dimmpxm b/tests/data/acpi/pc/DSDT.dimmpxm
index 1649953b6cccb933e4a440dc56507dc9197c4a8a..e015b4594c96a6e0f34c0668e3383b9a91dff38e 100644
GIT binary patch
delta 308
zcmdmOyvu~kCD<jzN{WGjQF0^K7B(j5fXN5gRx`P|Zq8*_U}SO^o7~SKQ_tb-=Nc}=
z^uK^1xqvZck*m<;6vhIElmf<t0*1szzC05qCooP;=3*&eNGM2-(PxPdc8U*h_B03x
zat(G4cJ(P>Ko@lOby*-j6<yTDF(i5E0?x^aT>pXgEG>Y_^TW*oDOjK~`8<bcdO>mr
z$W@69rHMdyEHqzWIvL%#Am1=X=McBZWG?0cMl6y*eWl3-3`vVzS*9i>7BqtmOkzj|
c>tQTNjy5bvp6t(|%IfJCT$;2wfiqeF0OOlnivR!s

delta 210
zcmdmGvfG%;CD<iow<H4tqvA%cEo@9K9+MBSt!8q!-JHv=z{upmH@TlfroMnNVUZ*E
z<OIe7hJ*r!7=4!bV5j&1XHSEGAlG1LXJ3~E?ktnhg+2X(8J$DiBGJWN977lj7|}(6
os)4pF^k!M$fnn6-0!~hGbPYgbxR?tVCns_W1F3|~_c@~l0B9gOg#Z8m

diff --git a/tests/data/acpi/pc/DSDT.ipmikcs b/tests/data/acpi/pc/DSDT.ipmikcs
index 92748d49dcd418e4a734da47e8d5c0268aedfc29..ca6630e39f60ebd5c056f57c4c03fdb9d5467577 100644
GIT binary patch
delta 304
zcmeBE@6q6L33dtL5n*6pjNi!hhmFb0ZZZe^Y9?Rf$y?d~*1L&02Rr+5L@USWv&08G
z#RoWh8UzHn20I7%c>n*;;q2!cF2wY|fFZenF=dg9(Bu@x0)~_V#)JZf#6><l6DB7x
zPE7_1B@`s18xZX3Q^0^O=<MsVKzu4fG#Fx;i(^Rg(gmE86S@8a?O9rYP=JtMpfVX<
z*wZhVu^_nv<j6#Z(nO%M7Md?Goovr3$(TGjn^Q`$G`WBwX^|_-)TG3MX0W;>hGgdu
cw@9FylcNm_k|*!xRAmJkQ<}87m6MYT0IdjJEdT%j

delta 202
zcmeCt=u_u%33dtT6J}sw4B5!_hmFa{Y%&M?Y9@c}$y?d~vU~732Rr+5Ojh8M;ATu%
z<j6fafw6!gVKP6bSiL*T<QRRH_+Y2_0B28wfFRdkXHUOiM&}T>NOW-*#}LK>Ms!i2
n>H@}uh2AU+JTQ$4@(sgf6c;ndtjS9_MJ6xg65TAp^@S4v*7-NU

diff --git a/tests/data/acpi/pc/DSDT.memhp b/tests/data/acpi/pc/DSDT.memhp
index 4026772906e910af514beb76de6e4cca0bc2171b..43f4e114e2cc48c68c35af47303fa87c9255db40 100644
GIT binary patch
delta 323
zcmbPgIN6BHCD<iISdxK(aqmX1Eo@8<_LC2=t!8pInVif1m&r|Rat?<~J%_WOYq${8
z{{n{O0>+d@E<%%27z-Fu3K$a#7!nux@JyJTz&JG-D3nl;9HY+?AM6w#;OuD-5ab%{
z9PH{-z<@63?CY{Xd@8!Ai(^Rg(gmE86S@8atyx+Cldp%F1yZm;Wim`0ChX}K%vg}z
z0diy_Lun$=Sqse<m|`dk@(p8j4snZ2=3*{jL^lH{3Dj4bT)>dD$dzSkQer_f*uW%)
eWM^)WCZG+`h6Tx!bvRU6ffkk~ZMNl%;0FMI?qN#+

delta 193
zcmbPiG}VyHCD<iosssZA<ED*VTiBSK%qJgUTg~LEGdY+2FOvt~<Qxtez5>RCMULE)
z6Br8^5(*e58*&NzEpTU<9HY+?AM6w#;OuD-5ab%{?CBTG=p5n}i7xKq7{XY<h%O3L
l4K#P5H_HMKmdQ4plDg;$fZDj2G4%!ch5_|$-pv`o4*;<`HS7QY

diff --git a/tests/data/acpi/pc/DSDT.numamem b/tests/data/acpi/pc/DSDT.numamem
index 4d9ba337a82954af094e739b8a83467f89a37cc0..ba8f7e6c33f9eb0f7a080144fcb4a27d36aa04ae 100644
GIT binary patch
delta 323
zcmX@3c1oSgCD<k8lrRGWW7$ToEo@8<_LC2=t!8pInVif1m&r|Rat?<~J%_WOYq${8
z{{n{O0>+d@E<%%27z-Fu3K$a#7!nux@JyJTz&JG-D3nl;9HY+?AM6w#;OuD-5ab%{
z9PH{-z<@63?CY{Xd@8!Ai(^Rg(gmE86S@8atyx+Cldp%F1yZm;Wim`0ChX}K%vg}z
z0diy_Lun$=Sqse<m|`dk@(p8j4snZ2=3*{jL^lH{3Dj4bT)>dD$dzSkQer_f*uW%)
eWM^)WCZG+`h6Tx!bvRU6ffkk~ZMNm)=K=t6nPI;G

delta 193
zcmX@5enySUCD<jzN0@<uF>52&7B(g)^T`L;Rx`QkOwMKh%jCf~Ifp}ruYfUOkt6ry
z1jYh}gaU@ihFrpa3*1>I$LO=f2Rp?FIC~ld1i1z~d-?@4I)}JLqKmsYhA<W|qKg7m
l1I=CN&9cCQWwH&Yq%OJwpf)aMOnpJVVL*MGcXRS{0RV_rHDCY$

diff --git a/tests/data/acpi/q35/DSDT b/tests/data/acpi/q35/DSDT
index bba8884073a27427b88ac0d733c9c87330a59366..d6c26940b1a68d4184f6f2335924099aa28b130a 100644
GIT binary patch
delta 249
zcmexo{l|vOCD<jTMvj4jar;KD2@*_Rc9Z8wtY-2xp6n_4m)%XwIoR2cWAb}RaUBk4
zKi6;}rvC*D$pwrli(G^zr!W>Uq!chF6fh(%^5K~<Ie~F%GEgX?AbIk9NzutgQX)*^
zQztK!66Q@_x`1<XBG-SQ#-#-y5trlzDwAXMS>l78;scyL4FZB(gPlG7f*A{vJ3#g%
zGL$9)O<riez!XDSkZ%~HbBJ4H^5p+gQi`R?1q?}xTv?_jB^ES;^(HYSgQbCHMH>R$
W;xDbr3N*SjX|jQo*k*m{^GpEqI!~+s

delta 203
zcmexk^Us>gCD<k8pDY6d<C=|J6C{{?%qGu~Sk2_GJ=s(8FS`eybFi}?$K>~t63UDT
ziyXNpComQ;Bor{j=(EHJJH-b$dm01;xduBs`?@S}XPNv_N>su*#4QqC+Ql)1adLyS
rsDA-t!a{GB1s*Ju(Uk=GhB4yM%Ees3h($TjtjPvaVw=}WpJxI9Z#6rr

diff --git a/tests/data/acpi/q35/DSDT.acpihmat b/tests/data/acpi/q35/DSDT.acpihmat
index 9cac92418b5fcc2767dc74603d599642b59623fe..2b67045d6add69574499b97d938cca01949231b4 100644
GIT binary patch
delta 288
zcmZ4Gw$h!;CD<ior7{BpW64IY2@*`6E|cd-tY-4D-0UgI!N~3|<{a$o$1!=Lq<B4t
zv!83Y5YztxhU5aqltr#WlT#QA7*Yxt6ABm-7y0r`n4G{kHJOX4fFYqEIYyr)KG-Qf
zz}eFvAjmb?IoQ>wfB{|5+1F)(_*8UJ7srs~r3*MGCvyD<+OxC(DsKii3#4Fy%4BqX
zo_@iM1<4&CS0*x)CIa2H(0qaE<a<(*jLDO^q@@%~lM5J<7P+!aO-d|i2CGYANOlf!
bi%bUUi#9Aso*X5u$_g~5G-<PnG#dv1HTG7>

delta 203
zcmZ4KzRHcuCD<iIOPPUzv2r8V1PLZDo5^z|Rx|k;ZuXSqU}X2;a}IX)<Cr{AQi7W?
zVUZ*E<OIe7hJ?xMrNrvpStiHmv&08G#RoWh8UzHn20MHD1v5H_xJ9CiyEuk07BHfV
o0#z3<CM@)3S>S<bRFH2NE~B`ZL1s;MkrtWkC?mRghxA8w05}9VZU6uP

diff --git a/tests/data/acpi/q35/DSDT.bridge b/tests/data/acpi/q35/DSDT.bridge
index f08b7245f59aad491fcaa60e2bab1085c369ea1c..0f6c9c68c81cddd2126eea8a7c336b8667202223 100644
GIT binary patch
delta 249
zcmeCT>9yr@33dtTm1AIF4ByB#L4wK4Zt@(7)l9y|lRYK>vb%{n2Rr+5OnxsZuEXK%
z=Nc}=^uK^1xqvZck&Dpe6vhIElmf<t0*1szK0Fg9CooP;1_~t<Bu}0%DLT1GN`y&#
z>g0t|!o0~#7jRBa<oXZPxU>Ky;*z{TWpa!@OMI|Xe1Nm3K|qjeu(PLMFk?Y-2gsg8
zhSEf!$qUUFm|`dk@(p8j4snZ2p8Q`*O0hJ#fFWs-E6dcR#DZq9-Xw-(ur$!DXhWb|
W{H0Y{fku}mO*W7c+pI7BfC&KA`%aPo

delta 203
zcmeCR?YH4_33dtLmt$aH^xnudL4wK0Z1Nn5)lB}{lRYK>vU~732Rr+5OnxsZq0E@D
z$dP+;0%HL~LIFdJK1+PCQ+$B4r$IoFYp}Dkugd~=mdPKbL?xU<+#=DXT^vIgCpSoo
q`WG-JEc9ks;K4E(T}hB{7$Xj?T+9WGSd;_Jnrt8?wt21e1112)r8$rQ

diff --git a/tests/data/acpi/q35/DSDT.cphp b/tests/data/acpi/q35/DSDT.cphp
index 57d859cef9fa16a8f125c4b338611c8472699f38..d7bedee7ff638f11b3bb84ef960364b409a49cce 100644
GIT binary patch
delta 306
zcmX?Wf5w5!CD<jzM}dKXQF<fS1PLau=*e>=Rx|kqZ1$A2WUO}=a}IX)<A_#{(PxPd
zc8U*h_B03xat(G4@bUitpTpVDHC%}4e*r^s0b|M{SE0!%j0FrS1&j#=42g?;c_vIw
zV4RxF#a6(OP>_snLa?h(0Ry_Av#-kn@u_gp5U6b~jv>iQ7jRBa<oXY^XlVgl0h0Uz
zmC5MBo_@iM1<4&CS0*x)CIa2H(0qaEWPWK$#^lM?(o%}0$ps8ai(FZzCM6a$gViN5
fBs+(=MJ5CFMH?0*PwtjhWd#~jnzT7ox`G7&GO=B7

delta 199
zcmX@(aMqs7CD<k8tULn)qv}Sk2@*`+!IS4mtY-4_*z753$;j@(=N#<p$1(Y%v;;R}
z!XiiR$q9@F3<;ATNQu?EvrLZBXNeDXiVtx1GzbWC4R-eQ3ubf<af?J3cX141EMP<z
k1*$G!Ojzj6vcLn=s36}kTt;y*gUp&-DlM}4igX1F09eU5u>b%7

diff --git a/tests/data/acpi/q35/DSDT.dimmpxm b/tests/data/acpi/q35/DSDT.dimmpxm
index 9d5bd5744e2ba2e0f6126c3aba0bb36af865e499..13e80ae2e5c7606a7260d4cb3ab776488d0697d6 100644
GIT binary patch
delta 325
zcmezD@yV0RCD<k8lL`X^<MWMN6C{|N118UrSk2_-y4h1wfsx5wY;v}gOg)FQpKG`f
z)Bgg7<O0T&MXo}VQy2>vQVJLo3K$X>`SMJdoWM9WnTw@>A)z2SMxP}<*eO21+0!5(
z$TiqG*wv?i0bS79*JXkDRCG}n$B^Wu3pghya{UL|v$OyzUk^76q+o%{WORL=e!+|d
z$sHh9CNh*J0^POHe1R#3vLN3uM&}T>$Yd_&0!DN*fRaFcrO5>hNsC-rrY0p8G=mLH
eVo2rzYXaI3ZCH>zSzAh#6=-2;(q<cJb4~zS_+p&^

delta 194
zcmez5`PqZZCD<jTScQRs@!dwQ2@*^$9+T%ttY&hz-Rvo;z{upmH#u8MhOdAzVUZ*E
z<OIe7hJ*r!$%Zn*ehb`LCdcTr#0NXY2RM5g1O&MTJA3*CGdhR3MWTzlIEFA5FrteB
mRRhgk=*_aggJrUfw4^S&0-!c7W=wrSzF|Opn|Di_a{>Sy^)^ue

diff --git a/tests/data/acpi/q35/DSDT.ipmibt b/tests/data/acpi/q35/DSDT.ipmibt
index 5cd11de6a8fe47324e5f922823a22746882f19f5..9a1b635dab776fb25e378a00e6ca0cadf9902c25 100644
GIT binary patch
delta 286
zcmX?UbKI88CD<k8xEuom<LixFw<Vao>?S{#Sk2^XJb8-bUv@V!=U`_)j>#7##p^ko
z{anL^nEn?qBo{EIEOHT=oWfYZkW#>yP{5G5$cJab<OIg4$v~lmg5(%|miS<&_yA{5
zgMc8{VCP_0p8^JSL1$l=1>#fDMO_?2l9w*voSew@A85_e0+_rh%q)<C1uBza;xJ)P
zzhK6K<PMM{6B$YqfzDcJzQA;{oU|lk@?<Y*DaF#{0*0hTt}IiN5(}Ea>XI0eokQFr
afo@KYHY`Y<JWE=Y6=+Op(&iNDr%V8%Ct5cE

delta 182
zcmX?Zd(wu>CD<jzQ;vaw@%~1x+Y(GZW|N;wtY-4po;*eJFS`eybFi}?$K;EW65Nal
ziyXNpComQ;BusuMCFbYOGC4+{B|g|GKET=2ARx#!*xA!Bn9(`JEfQVa#W94jfDv63
lsJehLVWBt60uPqSlN+UlC2(n;Tq`X#xj{yB^LObBOaQB)Hktqc

diff --git a/tests/data/acpi/q35/DSDT.memhp b/tests/data/acpi/q35/DSDT.memhp
index 05a7a73ec43130d5c3018bb462fd84981bfb151c..55ce4e2293624c2c0725c3bbaaa7ec29acfccfc9 100644
GIT binary patch
delta 249
zcmX@>cG{iGCD<k8v@!z&<I0U(6C{`%>?hBWSk2^YGTBq|FT0zVbFi}?$K>~t;yN78
zey-s{O#cfQk_#A97P$ybPGKxyNGV`UC}2oj<ij&zasuPjWS~$&LGtAJlA@D~q(qp+
zr%qldCCr<=bOGn&M6UlpjY|taA}+}bR3^vhv&08G#RoWh8UzHn20MHD1v3^TcYy3k
zWGGDpn!M0_fhmTvAm1=X=McBZ<jMb~q!de&3mB3Xxw1@6N-Ssw>rG-v21^6YiZ%qg
W#a~*L6=-y6(qscEvCaC@avT8KEKcbF

delta 203
zcmX@@e%6i4CD<jzSDAr<aqdR02@*_B=9A}0tY&i6nd~X~m)(QUIoR2cWAb}R31!BF
zMULE)6Br8^5(*e%^jYGAo#F$WJq-eaT!WpReO(r~vrPUdB`V<@;ueW6?cx~1IJrSu
q)W3i+VWBt60uPqS=t_co!x(XB<zg;i#G)K%)?@=IvCV6x<v0MCY&quu

diff --git a/tests/data/acpi/q35/DSDT.mmio64 b/tests/data/acpi/q35/DSDT.mmio64
index efd3f1188f2b55da1514212d4be081a61c2a96e9..99b7b2ae4ba36b8ca7901626c1561d29100087d2 100644
GIT binary patch
delta 249
zcmaFi^30XXCD<k8nGyp7W9mk(2@*^W_LJvGtY&gHnd~X~m)%XwIoR2cWAb}RaUBk4
zKi6;}rvC*D$pwrli(G^zr!W>Uq!chF6fh(%^5K~<Ie~F%GEgX?AbIk9NzutgQX)*^
zQztK!66Q@_x`1<XBG-SQ#-#-y5trlzDwAXMS>l78;scyL4FZB(gPlG7f*A{vJ3#g%
zGL$9)O<riez!XDSkZ%~HbBJ4H^5p+gQi`R?1q?}xTv?_jB^ES;^(HYSgQbCHMH>R$
W;xDbr3N*SjX|jQo*k*lcOLhR{%1;Xb

delta 203
zcmaFn`oe|FCD<h-Ly3WbF>)i<1PLZ5^T~50Rx`QkO!k!g%kIJF9PI4JG5Niugfe5o
zB1i7Y35*2{2?Y!>`YiFmPVoWGo(2IyuEEaEzAg*gStfs!5|wZcaf?Kkc5w`0oZKKS
q>R-T^u+W=jfd|WEbR|K)VT?GmaxoV$Vo?q>YqEiq*ygp;mh1qbnmP#p

diff --git a/tests/data/acpi/q35/DSDT.numamem b/tests/data/acpi/q35/DSDT.numamem
index 1978b55f1255402bf9bade0b91150b5cb49789a4..2b2433cc13ec3110abbc8440a0b1ad8c487edb6c 100644
GIT binary patch
delta 249
zcmZp%X|m;V33dr-l4D?COx(ydL4wJ_e)1fN)lAMNlRYK>vb%{n2Rr+5OnxsZuEXK%
z=Nc}=^uK^1xqvZck&Dpe6vhIElmf<t0*1szK0Fg9CooP;1_~t<Bu}0%DLT1GN`y&#
z>g0t|!o0~#7jRBa<oXZPxU>Ky;*z{TWpa!@OMI|Xe1Nm3K|qjeu(PLMFk?Y-2gsg8
zhSEf!$qUUFm|`dk@(p8j4snZ2p8Q`*O0hJ#fFWs-E6dcR#DZq9-Xw-(ur$!DXhWb|
W{H0Y{fku}mO*W7c+pI5rl?ec^mQG#(

delta 203
zcmZp&ZL#5U33dr#kz-(B4Bf~zL4wK2eDWNL)l9BBlRYK>vU~732Rr+5OnxsZq0E@D
z$dP+;0%HL~LIFdJK1+PCQ+$B4r$IoFYp}Dkugd~=mdPKbL?xU<+#=DXT^vIgCpSoo
q`WG-JEc9ks;K4E(T}hB{7$Xj?T+9WGSd;_Jnrt8?wt21eRVDzYo;h3q

diff --git a/tests/data/acpi/q35/DSDT.tis b/tests/data/acpi/q35/DSDT.tis
index 638de3872673d17b1958497d0e62c83653de1602..dd06ee4c348930b0684827ca05747b4f65dbd6b2 100644
GIT binary patch
delta 249
zcmccZaMO{?CD<k8rUC;4<AjY|6C{|t>?Y5VSk2^XJlRw7FT0zVbFi}?$K>~t;yN78
zey-s{O#cfQk_#A97P$ybPGKxyNGV`UC}2oj<ij&zasuPjWS~$&LGtAJlA@D~q(qp+
zr%qldCCr<=bOGn&M6UlpjY|taA}+}bR3^vhv&08G#RoWh8UzHn20MHD1v3^TcYy3k
zWGGDpn!M0_fhmTvAm1=X=McBZ<jMb~q!de&3mB3Xxw1@6N-Ssw>rG-v21^6YiZ%qg
W#a~*L6=-y6(qscEvCaC@I;;Tg4o?UG

delta 203
zcmccVc-w)?CD<h-T7iLqv1KFI1PLY|v&nNLRx|l)Pxh4j%kIJF9PI4JG5Niugfe5o
zB1i7Y35*2{2?Y!>`YiFmPVoWGo(2IyuEEaEzAg*gStfs!5|wZcaf?Kkc5w`0oZKKS
q>R-T^u+W=jfd|WEbR|K)VT?GmaxoV$Vo?q>YqEiq*ygp;I;;SvwmJp?

-- 
2.27.0




* Re: [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property
  2020-09-07 11:23 ` [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property Igor Mammedov
@ 2020-09-07 12:49   ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-07 12:49 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel; +Cc: boris.ostrovsky, aaron.young

On 09/07/20 13:23, Igor Mammedov wrote:
> Expose the "smi_negotiated_features" field of ICH9LPCState as
> a QOM property.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> ---
>  include/hw/i386/ich9.h | 2 ++
>  hw/isa/lpc_ich9.c      | 3 +++
>  2 files changed, 5 insertions(+)
> 
> diff --git a/include/hw/i386/ich9.h b/include/hw/i386/ich9.h
> index d1bb3f7bf0..0f43ef2481 100644
> --- a/include/hw/i386/ich9.h
> +++ b/include/hw/i386/ich9.h
> @@ -245,6 +245,8 @@ typedef struct ICH9LPCState {
>  #define ICH9_SMB_HST_D1                         0x06
>  #define ICH9_SMB_HOST_BLOCK_DB                  0x07
>  
> +#define ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP "x-smi-negotiated-features"
> +
>  /* bit positions used in fw_cfg SMI feature negotiation */
>  #define ICH9_LPC_SMI_F_BROADCAST_BIT            0
>  #define ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT          1
> diff --git a/hw/isa/lpc_ich9.c b/hw/isa/lpc_ich9.c
> index 19f32bed3e..8124d20338 100644
> --- a/hw/isa/lpc_ich9.c
> +++ b/hw/isa/lpc_ich9.c
> @@ -647,6 +647,9 @@ static void ich9_lpc_initfn(Object *obj)
>                                    &acpi_enable_cmd, OBJ_PROP_FLAG_READ);
>      object_property_add_uint8_ptr(OBJECT(lpc), ACPI_PM_PROP_ACPI_DISABLE_CMD,
>                                    &acpi_disable_cmd, OBJ_PROP_FLAG_READ);
> +    object_property_add_uint64_ptr(obj, ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP,
> +                                   &lpc->smi_negotiated_features,
> +                                   OBJ_PROP_FLAG_READ);
>  
>      ich9_pm_add_properties(obj, &lpc->pm);
>  }
> 

Tested-by: Laszlo Ersek <lersek@redhat.com>




* Re: [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
  2020-09-07 11:23 ` [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp Igor Mammedov
@ 2020-09-07 12:49   ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-07 12:49 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel; +Cc: boris.ostrovsky, aaron.young

On 09/07/20 13:23, Igor Mammedov wrote:
> Translate the "CPU hotplug with SMI" feature bit, from the property
> added in the last patch, to a dedicated boolean in AcpiPmInfo.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> ---
>  hw/i386/acpi-build.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 7a5a8b3521..e61c17a484 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -95,6 +95,7 @@ typedef struct AcpiPmInfo {
>      bool s3_disabled;
>      bool s4_disabled;
>      bool pcihp_bridge_en;
> +    bool smi_on_cpuhp;
>      uint8_t s4_val;
>      AcpiFadtData fadt;
>      uint16_t cpu_hp_io_base;
> @@ -194,6 +195,7 @@ static void acpi_get_pm_info(MachineState *machine, AcpiPmInfo *pm)
>      pm->cpu_hp_io_base = 0;
>      pm->pcihp_io_base = 0;
>      pm->pcihp_io_len = 0;
> +    pm->smi_on_cpuhp = false;
>  
>      assert(obj);
>      init_common_fadt_data(machine, obj, &pm->fadt);
> @@ -207,12 +209,16 @@ static void acpi_get_pm_info(MachineState *machine, AcpiPmInfo *pm)
>              object_property_get_uint(obj, ACPI_PCIHP_IO_LEN_PROP, NULL);
>      }
>      if (lpc) {
> +        uint64_t smi_features = object_property_get_uint(lpc,
> +            ICH9_LPC_SMI_NEGOTIATED_FEAT_PROP, NULL);
>          struct AcpiGenericAddress r = { .space_id = AML_AS_SYSTEM_IO,
>              .bit_width = 8, .address = ICH9_RST_CNT_IOPORT };
>          pm->fadt.reset_reg = r;
>          pm->fadt.reset_val = 0xf;
>          pm->fadt.flags |= 1 << ACPI_FADT_F_RESET_REG_SUP;
>          pm->cpu_hp_io_base = ICH9_CPU_HOTPLUG_IO_BASE;
> +        pm->smi_on_cpuhp =
> +            !!(smi_features & BIT_ULL(ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT));
>      }
>  
>      /* The above need not be conditional on machine type because the reset port
> 

Tested-by: Laszlo Ersek <lersek@redhat.com>




* Re: [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device
  2020-09-07 11:23 ` [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device Igor Mammedov
@ 2020-09-07 12:49   ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-07 12:49 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel; +Cc: boris.ostrovsky, aaron.young

On 09/07/20 13:23, Igor Mammedov wrote:
> When CPU hotplug with SMI has been negotiated, describe the SMI
> register block in the DSDT. Pass the ACPI name of the SMI control
> register to build_cpus_aml(), so that CPU_SCAN_METHOD can access the
> register in the next patch.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> ---
>  include/hw/acpi/cpu.h |  1 +
>  hw/i386/acpi-build.c  | 29 ++++++++++++++++++++++++++++-
>  2 files changed, 29 insertions(+), 1 deletion(-)
> 
> diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
> index 62f0278ba2..0eeedaa491 100644
> --- a/include/hw/acpi/cpu.h
> +++ b/include/hw/acpi/cpu.h
> @@ -50,6 +50,7 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
>  typedef struct CPUHotplugFeatures {
>      bool acpi_1_compatible;
>      bool has_legacy_cphp;
> +    const char *smi_path;
>  } CPUHotplugFeatures;
>  
>  void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index e61c17a484..7a33ef193e 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1521,6 +1521,32 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>          aml_append(dev, aml_name_decl("_UID", aml_int(0)));
>          aml_append(dev, build_q35_osc_method());
>          aml_append(sb_scope, dev);
> +
> +        if (pm->smi_on_cpuhp) {
> +            /* reserve SMI block resources, IO ports 0xB2, 0xB3 */
> +            dev = aml_device("PCI0.SMI0");
> +            aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A06")));
> +            aml_append(dev, aml_name_decl("_UID", aml_string("SMI resources")));
> +            crs = aml_resource_template();
> +            aml_append(crs,
> +                aml_io(
> +                       AML_DECODE16,
> +                       ACPI_PORT_SMI_CMD,
> +                       ACPI_PORT_SMI_CMD,
> +                       1,
> +                       2)
> +            );
> +            aml_append(dev, aml_name_decl("_CRS", crs));
> +            aml_append(dev, aml_operation_region("SMIR", AML_SYSTEM_IO,
> +                aml_int(ACPI_PORT_SMI_CMD), 2));
> +            field = aml_field("SMIR", AML_BYTE_ACC, AML_NOLOCK,
> +                              AML_WRITE_AS_ZEROS);
> +            aml_append(field, aml_named_field("SMIC", 8));
> +            aml_append(field, aml_reserved_field(8));
> +            aml_append(dev, field);
> +            aml_append(sb_scope, dev);
> +        }
> +
>          aml_append(dsdt, sb_scope);
>  
>          build_hpet_aml(dsdt);
> @@ -1536,7 +1562,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>          build_legacy_cpu_hotplug_aml(dsdt, machine, pm->cpu_hp_io_base);
>      } else {
>          CPUHotplugFeatures opts = {
> -            .acpi_1_compatible = true, .has_legacy_cphp = true
> +            .acpi_1_compatible = true, .has_legacy_cphp = true,
> +            .smi_path = pm->smi_on_cpuhp ? "\\_SB.PCI0.SMI0.SMIC" : NULL,
>          };
>          build_cpus_aml(dsdt, machine, opts, pm->cpu_hp_io_base,
>                         "\\_SB.PCI0", "\\_GPE._E02");
> 

Tested-by: Laszlo Ersek <lersek@redhat.com>




* Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
@ 2020-09-07 15:17   ` Laszlo Ersek
  2020-09-08  7:39     ` Igor Mammedov
  2020-09-08 14:24   ` [PATCH v5 9/10] fixup! " Igor Mammedov
  1 sibling, 1 reply; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-07 15:17 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel; +Cc: boris.ostrovsky, aaron.young

Hi Igor,

On 09/07/20 13:23, Igor Mammedov wrote:
> In case firmware has negotiated CPU hotplug SMI feature, generate
> AML to describe SMI IO port region and send SMI to firmware
> on each CPU hotplug SCI in case new CPUs were hotplugged.
>
> Since new CPUs can be hotplugged while CPU_SCAN_METHOD is running
> we can't send SMI before new CPUs are fetched from QEMU as it
> could cause sending Notify to a CPU that firmware hasn't seen yet.
> So fetch new CPUs into local cache first, then send SMI and
> after that send Notify events to cached CPUs. This should ensure
> that Notify is sent only to CPUs which were processed by firmware
> first.
> Any CPUs that were hotplugged after caching will be processed
> by the next CPU_SCAN_METHOD, when pending SCI is handled.
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> v5:
>  - fix hotplug on Windows when there is more than 256 possible CPUs
>    (Windows isn't able to handle VarPackage over 255 elements
>     so process CPUs in batches)
>  - (Laszlo Ersek <lersek@redhat.com>)
>    - fix off-by-one in package length
>    - fix not selecting CPU before clearing insert event
>    - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus,
>      zero)
>    - split off in separate patches:
>      - making smi_negotiated_features a property
>      - introduction of AcpiPmInfo.smi_on_cpuhp
>      - introduction of PCI0.SMI0 ACPI device
> v2:
>  - clear insert event after firmware has returned
>    control from SMI. (Laszlo Ersek <lersek@redhat.com>)
> v1:
>  - make sure that Notify is sent only to CPUs seen by FW
>  - (Laszlo Ersek <lersek@redhat.com>)
>     - use macro instead of x-smi-negotiated-features
>     - style fixes
>     - reserve whole SMI block (0xB2, 0xB3)
> v0:
>  - s/aml_string/aml_eisaid/
>    /fixes Windows BSOD, on nonsense value/ (Laszlo Ersek
>    <lersek@redhat.com>)
>  - put SMI device under PCI0 like the rest of hotplug IO port
>  - do not generate SMI AML if CPU hotplug SMI feature hasn't been
>    negotiated
> ---
>  hw/acpi/cpu.c | 156 +++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 129 insertions(+), 27 deletions(-)
>
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 3d6a500fb7..1283972001 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -14,6 +14,8 @@
>  #define ACPI_CPU_CMD_DATA_OFFSET_RW 8
>  #define ACPI_CPU_CMD_DATA2_OFFSET_R 0
>
> +#define OVMF_CPUHP_SMI_CMD 4
> +
>  enum {
>      CPHP_GET_NEXT_CPU_WITH_EVENT_CMD = 0,
>      CPHP_OST_EVENT_CMD = 1,
> @@ -321,6 +323,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
>  #define CPU_NOTIFY_METHOD "CTFY"
>  #define CPU_EJECT_METHOD  "CEJ0"
>  #define CPU_OST_METHOD    "COST"
> +#define CPU_ADDED_LIST    "CNEW"
>
>  #define CPU_ENABLED       "CPEN"
>  #define CPU_SELECTOR      "CSEL"
> @@ -465,42 +468,141 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>
>          method = aml_method(CPU_SCAN_METHOD, 0, AML_SERIALIZED);
>          {
> +            const uint8_t max_cpus_per_pass = 255;
>              Aml *else_ctx;
> -            Aml *while_ctx;
> +            Aml *while_ctx, *while_ctx2;
>              Aml *has_event = aml_local(0);
>              Aml *dev_chk = aml_int(1);
>              Aml *eject_req = aml_int(3);
>              Aml *next_cpu_cmd = aml_int(CPHP_GET_NEXT_CPU_WITH_EVENT_CMD);
> +            Aml *num_added_cpus = aml_local(1);
> +            Aml *cpu_idx = aml_local(2);
> +            Aml *uid = aml_local(3);
> +            Aml *has_job = aml_local(4);
> +            Aml *new_cpus = aml_name(CPU_ADDED_LIST);
>
>              aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
> -            aml_append(method, aml_store(one, has_event));
> -            while_ctx = aml_while(aml_equal(has_event, one));
> +
> +            /*
> +             * Windows versions newer than XP (including Windows 10/Windows
> +             * Server 2019) do support VarPackageOp, but it is crippled: it
> +             * can hold only as many elements as the old PackageOp.
> +             * For compatibility with Windows XP (so it won't crash), use the
> +             * ACPI 1.0 PackageOp, which can hold at most 255 elements.
> +             *
> +             * Use a named package, as old Windows doesn't support packages
> +             * in local variables.
> +             */
> +            aml_append(method, aml_name_decl(CPU_ADDED_LIST,
> +                                             aml_package(max_cpus_per_pass)));
> +
> +            aml_append(method, aml_store(zero, uid));
> +            aml_append(method, aml_store(one, has_job));
> +            /*
> +             * CPU_ADDED_LIST can hold only a limited number of elements, so
> +             * the outer loop processes CPUs in batches, letting us handle
> +             * more CPUs than CPU_ADDED_LIST can hold.
> +             */
> +            while_ctx2 = aml_while(aml_equal(has_job, one));
>              {
> -                 /* clear loop exit condition, ins_evt/rm_evt checks
> -                  * will set it to 1 while next_cpu_cmd returns a CPU
> -                  * with events */
> -                 aml_append(while_ctx, aml_store(zero, has_event));
> -                 aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
> -                 ifctx = aml_if(aml_equal(ins_evt, one));
> -                 {
> -                     aml_append(ifctx,
> -                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, dev_chk));
> -                     aml_append(ifctx, aml_store(one, ins_evt));
> -                     aml_append(ifctx, aml_store(one, has_event));
> -                 }
> -                 aml_append(while_ctx, ifctx);
> -                 else_ctx = aml_else();
> -                 ifctx = aml_if(aml_equal(rm_evt, one));
> -                 {
> -                     aml_append(ifctx,
> -                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, eject_req));
> -                     aml_append(ifctx, aml_store(one, rm_evt));
> -                     aml_append(ifctx, aml_store(one, has_event));
> -                 }
> -                 aml_append(else_ctx, ifctx);
> -                 aml_append(while_ctx, else_ctx);
> +                aml_append(while_ctx2, aml_store(zero, has_job));
> +
> +                aml_append(while_ctx2, aml_store(one, has_event));
> +                aml_append(while_ctx2, aml_store(zero, num_added_cpus));
> +
> +                /*
> +                 * Scan CPUs while there are CPUs with pending events, or
> +                 * until CPU_ADDED_LIST's capacity is exhausted
> +                 */
> +                while_ctx = aml_while(aml_land(aml_equal(has_event, one),
> +                                      aml_lless(uid, aml_int(arch_ids->len))));
> +                {
> +                     /*
> +                      * clear loop exit condition, ins_evt/rm_evt checks will
> +                      * set it to 1 while next_cpu_cmd returns a CPU with events
> +                      */
> +                     aml_append(while_ctx, aml_store(zero, has_event));
> +
> +                     aml_append(while_ctx, aml_store(uid, cpu_selector));
> +                     aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
> +
> +                     /*
> +                      * wrap-around case: the scan is complete, exit the loop.
> +                      * This happens because events are not cleared in the scan
> +                      * loop, so next_cpu_cmd keeps finding already-processed CPUs
> +                      */
> +                     ifctx = aml_if(aml_lless(cpu_data, uid));
> +                     {
> +                         aml_append(ifctx, aml_break());
> +                     }
> +                     aml_append(while_ctx, ifctx);
> +
> +                     /*
> +                      * if CPU_ADDED_LIST is full, exit inner loop and process
> +                      * collected CPUs
> +                      */
> +                     ifctx = aml_if(
> +                         aml_equal(num_added_cpus, aml_int(max_cpus_per_pass)));
> +                     {
> +                         aml_append(ifctx, aml_store(one, has_job));
> +                         aml_append(ifctx, aml_break());
> +                     }
> +                     aml_append(while_ctx, ifctx);
> +
> +                     aml_append(while_ctx, aml_store(cpu_data, uid));
> +                     ifctx = aml_if(aml_equal(ins_evt, one));
> +                     {
> +                         /* cache added CPUs to Notify/Wakeup later */
> +                         aml_append(ifctx, aml_store(uid,
> +                             aml_index(new_cpus, num_added_cpus)));
> +                         aml_append(ifctx, aml_increment(num_added_cpus));
> +                         aml_append(ifctx, aml_store(one, has_event));
> +                     }
> +                     aml_append(while_ctx, ifctx);
> +                     else_ctx = aml_else();
> +                     ifctx = aml_if(aml_equal(rm_evt, one));
> +                     {
> +                         aml_append(ifctx,
> +                             aml_call2(CPU_NOTIFY_METHOD, uid, eject_req));
> +                         aml_append(ifctx, aml_store(one, rm_evt));
> +                         aml_append(ifctx, aml_store(one, has_event));
> +                     }
> +                     aml_append(else_ctx, ifctx);
> +                     aml_append(while_ctx, else_ctx);
> +                     aml_append(while_ctx, aml_increment(uid));
> +                }
> +                aml_append(while_ctx2, while_ctx);
> +
> +                /*
> +                 * if FW negotiated ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT, make an
> +                 * upcall to FW so it can pull in the new CPUs before the OS
> +                 * is notified and wakes them up
> +                 */
> +                if (opts.smi_path) {
> +                    ifctx = aml_if(aml_lgreater(num_added_cpus, zero));
> +                    {
> +                        aml_append(ifctx, aml_store(aml_int(OVMF_CPUHP_SMI_CMD),
> +                            aml_name("%s", opts.smi_path)));
> +                    }
> +                    aml_append(while_ctx2, ifctx);
> +                }
> +
> +                /* Notify OSPM about new CPUs and clear insert events */
> +                aml_append(while_ctx2, aml_store(zero, cpu_idx));
> +                while_ctx = aml_while(aml_lless(cpu_idx, num_added_cpus));
> +                {
> +                    aml_append(while_ctx,
> +                        aml_store(aml_derefof(aml_index(new_cpus, cpu_idx)),
> +                                  uid));
> +                    aml_append(while_ctx,
> +                        aml_call2(CPU_NOTIFY_METHOD, uid, dev_chk));
> +                    aml_append(while_ctx, aml_store(uid, aml_debug()));
> +                    aml_append(while_ctx, aml_store(uid, cpu_selector));
> +                    aml_append(while_ctx, aml_store(one, ins_evt));
> +                    aml_append(while_ctx, aml_increment(cpu_idx));
> +                }
> +                aml_append(while_ctx2, while_ctx);
>              }
> -            aml_append(method, while_ctx);
> +            aml_append(method, while_ctx2);
>              aml_append(method, aml_release(ctrl_lock));
>          }
>          aml_append(cpus_dev, method);
>

Here's the decompiled ASL, using meaningful local variable names (and 8
possible VCPUs):

   1              Method (CSCN, 0, Serialized)
   2              {
   3                  Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
   4                  Name (CNEW, Package (0xFF){})
   5                  Local_Uid = Zero
   6                  Local_HasJob = One
   7                  While ((Local_HasJob == One))
   8                  {
   9                      Local_HasJob = Zero
  10                      Local_HasEvent = One
  11                      Local_NumAddedCpus = Zero
  12                      While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
  13                      {
  14                          Local_HasEvent = Zero
  15                          \_SB.PCI0.PRES.CSEL = Local_Uid
  16                          \_SB.PCI0.PRES.CCMD = Zero
  17                          If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
  18                          {
  19                              Break
  20                          }
  21
  22                          If ((Local_NumAddedCpus == 0xFF))
  23                          {
  24                              Local_HasJob = One
  25                              Break
  26                          }
  27
  28                          Local_Uid = \_SB.PCI0.PRES.CDAT
  29                          If ((\_SB.PCI0.PRES.CINS == One))
  30                          {
  31                              CNEW [Local_NumAddedCpus] = Local_Uid
  32                              Local_NumAddedCpus++
  33                              Local_HasEvent = One
  34                          }
  35                          ElseIf ((\_SB.PCI0.PRES.CRMV == One))
  36                          {
  37                              CTFY (Local_Uid, 0x03)
  38                              \_SB.PCI0.PRES.CRMV = One
  39                              Local_HasEvent = One
  40                          }
  41
  42                          Local_Uid++
  43                      }
  44
  45                      If ((Local_NumAddedCpus > Zero))
  46                      {
  47                          \_SB.PCI0.SMI0.SMIC = 0x04
  48                      }
  49
  50                      Local_CpuIdx = Zero
  51                      While ((Local_CpuIdx < Local_NumAddedCpus))
  52                      {
  53                          Local_Uid = DerefOf (CNEW [Local_CpuIdx])
  54                          CTFY (Local_Uid, One)
  55                          Debug = Local_Uid
  56                          \_SB.PCI0.PRES.CSEL = Local_Uid
  57                          \_SB.PCI0.PRES.CINS = One
  58                          Local_CpuIdx++
  59                      }
  60                  }
  61
  62                  Release (\_SB.PCI0.PRES.CPLK)
  63              }

When we take the Break on line 25, then:

(a) on line 25, the following equality holds:

  Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1

(b) on line 60, the following equality holds:

  Local_Uid == CNEW[Local_NumAddedCpus - 1]

This means that, when we write Local_Uid to CSEL on line 15 again, then:

- we effectively re-investigate the last-cached CPU (with selector value
  CNEW[Local_NumAddedCpus - 1])

- rather than resuming the scanning right after it -- that is, with
  selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
  line 42.

My question is: is this "rewind" intentional?

Now, I don't see a functionality problem with this, as on line 57, we
clear the pending insert event for the last-cached CPU, so when we
re-check it, the "get pending" command will simply seek past it.

But I'd like to understand if this is *precisely* your intent, or if
it's an oversight and it just ends up working OK.

Basically my question is whether we should add, on top:

> --- cscn.v5     2020-09-07 15:02:13.401663487 +0200
> +++ cscn.v5.incr        2020-09-07 16:52:22.133843710 +0200
> @@ -57,6 +57,10 @@
>                          \_SB.PCI0.PRES.CINS = One
>                          Local_CpuIdx++
>                      }
> +
> +                    if ((Local_HasJob == One)) {
> +                        Local_Uid++
> +                    }
>                  }
>
>                  Release (\_SB.PCI0.PRES.CPLK)

If not -- that is, the currently proposed patch is intentional --, then
I think we should add a comment, about the effective "rewind" being
intentional & okay.

(Note: it's certainly valid and necessary to re-write CSEL on line 15
after raising the SMI on line 47; the question is not whether we should
or should not re-write CSEL (we must!), but the specific value that we
write to CSEL.)

So:

- If the outer loop body is entered only once, then the patch looks
  fine, from both the review side, and the testing side (I tested it
  with 4-8 possible VCPUs).

- If the outer loop body is entered twice or more, then from the review
  side, please see my question above. From the testing side: do you have an
  environment where I could test this with OVMF?

(I expect it to work OK. Upon the first SMI, the firmware will likely
pick up more VCPUs than what's in CNEW. But edk2 commit 020bb4b46d6f
("OvmfPkg/CpuHotplugSmm: fix CPU hotplug race just before SMI
broadcast", 2020-08-27) should deal with that.)

Hmm, actually, there's no need for a special environment: I can patch
QEMU and lower "max_cpus_per_pass" to something small, such as "3" or
whatever, for testing the outer loop multiple times. But first I'd like
to know your thoughts on the "rewind".

Thanks!
Laszlo




* Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-07 15:17   ` Laszlo Ersek
@ 2020-09-08  7:39     ` Igor Mammedov
  2020-09-08  9:35       ` Laszlo Ersek
  0 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-08  7:39 UTC (permalink / raw)
  To: Laszlo Ersek; +Cc: boris.ostrovsky, qemu-devel, aaron.young

On Mon, 7 Sep 2020 17:17:52 +0200
Laszlo Ersek <lersek@redhat.com> wrote:

> Hi Igor,
> 
> On 09/07/20 13:23, Igor Mammedov wrote:
> > In case firmware has negotiated CPU hotplug SMI feature, generate
> > AML to describe SMI IO port region and send SMI to firmware
> > on each CPU hotplug SCI in case new CPUs were hotplugged.
> >
> > Since new CPUs can be hotplugged while CPU_SCAN_METHOD is running
> > we can't send SMI before new CPUs are fetched from QEMU as it
> > could cause sending Notify to a CPU that firmware hasn't seen yet.
> > So fetch new CPUs into local cache first, then send SMI and
> > after that send Notify events to cached CPUs. This should ensure
> > that Notify is sent only to CPUs which were processed by firmware
> > first.
> > Any CPUs that were hotplugged after caching will be processed
> > by the next CPU_SCAN_METHOD, when pending SCI is handled.
> >
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> > v5:
> >  - fix hotplug on Windows when there are more than 256 possible CPUs
> >    (Windows isn't able to handle VarPackage over 255 elements
> >     so process CPUs in batches)
> >  - (Laszlo Ersek <lersek@redhat.com>)
> >    - fix off-by-one in package length
> >    - fix not selecting CPU before clearing insert event
> >    - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus,
> >      zero)
> >    - split off in separate patches:
> >      - making smi_negotiated_features a property
> >      - introduction of AcpiPmInfo.smi_on_cpuhp
> >      - introduction of PCI0.SMI0 ACPI device
> > v2:
> >  - clear insert event after firmware has returned
> >    control from SMI. (Laszlo Ersek <lersek@redhat.com>)
> > v1:
> >  - make sure that Notify is sent only to CPUs seen by FW
> >  - (Laszlo Ersek <lersek@redhat.com>)
> >     - use macro instead of x-smi-negotiated-features
> >     - style fixes
> >     - reserve whole SMI block (0xB2, 0xB3)
> > v0:
> >  - s/aml_string/aml_eisaid/
> >    /fixes Windows BSOD, on nonsense value/ (Laszlo Ersek
> >    <lersek@redhat.com>)
> >  - put SMI device under PCI0 like the rest of hotplug IO port
> >  - do not generate SMI AML if CPU hotplug SMI feature hasn't been
> >    negotiated
> > ---
> >  hw/acpi/cpu.c | 156 +++++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 129 insertions(+), 27 deletions(-)
> >
> > diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> > index 3d6a500fb7..1283972001 100644
> > --- a/hw/acpi/cpu.c
> > +++ b/hw/acpi/cpu.c
> > @@ -14,6 +14,8 @@
> >  #define ACPI_CPU_CMD_DATA_OFFSET_RW 8
> >  #define ACPI_CPU_CMD_DATA2_OFFSET_R 0
> >
> > +#define OVMF_CPUHP_SMI_CMD 4
> > +
> >  enum {
> >      CPHP_GET_NEXT_CPU_WITH_EVENT_CMD = 0,
> >      CPHP_OST_EVENT_CMD = 1,
> > @@ -321,6 +323,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
> >  #define CPU_NOTIFY_METHOD "CTFY"
> >  #define CPU_EJECT_METHOD  "CEJ0"
> >  #define CPU_OST_METHOD    "COST"
> > +#define CPU_ADDED_LIST    "CNEW"
> >
> >  #define CPU_ENABLED       "CPEN"
> >  #define CPU_SELECTOR      "CSEL"
> > @@ -465,42 +468,141 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
> >
> >          method = aml_method(CPU_SCAN_METHOD, 0, AML_SERIALIZED);
> >          {
> > +            const uint8_t max_cpus_per_pass = 255;
> >              Aml *else_ctx;
> > -            Aml *while_ctx;
> > +            Aml *while_ctx, *while_ctx2;
> >              Aml *has_event = aml_local(0);
> >              Aml *dev_chk = aml_int(1);
> >              Aml *eject_req = aml_int(3);
> >              Aml *next_cpu_cmd = aml_int(CPHP_GET_NEXT_CPU_WITH_EVENT_CMD);
> > +            Aml *num_added_cpus = aml_local(1);
> > +            Aml *cpu_idx = aml_local(2);
> > +            Aml *uid = aml_local(3);
> > +            Aml *has_job = aml_local(4);
> > +            Aml *new_cpus = aml_name(CPU_ADDED_LIST);
> >
> >              aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
> > -            aml_append(method, aml_store(one, has_event));
> > -            while_ctx = aml_while(aml_equal(has_event, one));
> > +
> > +            /*
> > +             * Windows versions newer than XP (including Windows 10/Windows
> > +             * Server 2019) do support VarPackageOp, but it is crippled: it
> > +             * can hold only as many elements as the old PackageOp.
> > +             * For compatibility with Windows XP (so it won't crash), use the
> > +             * ACPI 1.0 PackageOp, which can hold at most 255 elements.
> > +             *
> > +             * Use a named package, as old Windows doesn't support packages
> > +             * in local variables.
> > +             */
> > +            aml_append(method, aml_name_decl(CPU_ADDED_LIST,
> > +                                             aml_package(max_cpus_per_pass)));
> > +
> > +            aml_append(method, aml_store(zero, uid));
> > +            aml_append(method, aml_store(one, has_job));
> > +            /*
> > +             * CPU_ADDED_LIST can hold only a limited number of elements, so
> > +             * the outer loop processes CPUs in batches, letting us handle
> > +             * more CPUs than CPU_ADDED_LIST can hold.
> > +             */
> > +            while_ctx2 = aml_while(aml_equal(has_job, one));
> >              {
> > -                 /* clear loop exit condition, ins_evt/rm_evt checks
> > -                  * will set it to 1 while next_cpu_cmd returns a CPU
> > -                  * with events */
> > -                 aml_append(while_ctx, aml_store(zero, has_event));
> > -                 aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
> > -                 ifctx = aml_if(aml_equal(ins_evt, one));
> > -                 {
> > -                     aml_append(ifctx,
> > -                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, dev_chk));
> > -                     aml_append(ifctx, aml_store(one, ins_evt));
> > -                     aml_append(ifctx, aml_store(one, has_event));
> > -                 }
> > -                 aml_append(while_ctx, ifctx);
> > -                 else_ctx = aml_else();
> > -                 ifctx = aml_if(aml_equal(rm_evt, one));
> > -                 {
> > -                     aml_append(ifctx,
> > -                         aml_call2(CPU_NOTIFY_METHOD, cpu_data, eject_req));
> > -                     aml_append(ifctx, aml_store(one, rm_evt));
> > -                     aml_append(ifctx, aml_store(one, has_event));
> > -                 }
> > -                 aml_append(else_ctx, ifctx);
> > -                 aml_append(while_ctx, else_ctx);
> > +                aml_append(while_ctx2, aml_store(zero, has_job));
> > +
> > +                aml_append(while_ctx2, aml_store(one, has_event));
> > +                aml_append(while_ctx2, aml_store(zero, num_added_cpus));
> > +
> > +                /*
> > +                 * Scan CPUs while there are CPUs with pending events, or
> > +                 * until CPU_ADDED_LIST's capacity is exhausted
> > +                 */
> > +                while_ctx = aml_while(aml_land(aml_equal(has_event, one),
> > +                                      aml_lless(uid, aml_int(arch_ids->len))));
> > +                {
> > +                     /*
> > +                      * clear loop exit condition, ins_evt/rm_evt checks will
> > +                      * set it to 1 while next_cpu_cmd returns a CPU with events
> > +                      */
> > +                     aml_append(while_ctx, aml_store(zero, has_event));
> > +
> > +                     aml_append(while_ctx, aml_store(uid, cpu_selector));
> > +                     aml_append(while_ctx, aml_store(next_cpu_cmd, cpu_cmd));
> > +
> > +                     /*
> > +                      * wrap-around case: the scan is complete, exit the loop.
> > +                      * This happens because events are not cleared in the scan
> > +                      * loop, so next_cpu_cmd keeps finding already-processed CPUs
> > +                      */
> > +                     ifctx = aml_if(aml_lless(cpu_data, uid));
> > +                     {
> > +                         aml_append(ifctx, aml_break());
> > +                     }
> > +                     aml_append(while_ctx, ifctx);
> > +
> > +                     /*
> > +                      * if CPU_ADDED_LIST is full, exit inner loop and process
> > +                      * collected CPUs
> > +                      */
> > +                     ifctx = aml_if(
> > +                         aml_equal(num_added_cpus, aml_int(max_cpus_per_pass)));
> > +                     {
> > +                         aml_append(ifctx, aml_store(one, has_job));
> > +                         aml_append(ifctx, aml_break());
> > +                     }
> > +                     aml_append(while_ctx, ifctx);
> > +
> > +                     aml_append(while_ctx, aml_store(cpu_data, uid));
> > +                     ifctx = aml_if(aml_equal(ins_evt, one));
> > +                     {
> > +                         /* cache added CPUs to Notify/Wakeup later */
> > +                         aml_append(ifctx, aml_store(uid,
> > +                             aml_index(new_cpus, num_added_cpus)));
> > +                         aml_append(ifctx, aml_increment(num_added_cpus));
> > +                         aml_append(ifctx, aml_store(one, has_event));
> > +                     }
> > +                     aml_append(while_ctx, ifctx);
> > +                     else_ctx = aml_else();
> > +                     ifctx = aml_if(aml_equal(rm_evt, one));
> > +                     {
> > +                         aml_append(ifctx,
> > +                             aml_call2(CPU_NOTIFY_METHOD, uid, eject_req));
> > +                         aml_append(ifctx, aml_store(one, rm_evt));
> > +                         aml_append(ifctx, aml_store(one, has_event));
> > +                     }
> > +                     aml_append(else_ctx, ifctx);
> > +                     aml_append(while_ctx, else_ctx);
> > +                     aml_append(while_ctx, aml_increment(uid));
> > +                }
> > +                aml_append(while_ctx2, while_ctx);
> > +
> > +                /*
> > +                 * if FW negotiated ICH9_LPC_SMI_F_CPU_HOTPLUG_BIT, make an
> > +                 * upcall to FW so it can pull in the new CPUs before the OS
> > +                 * is notified and wakes them up
> > +                 */
> > +                if (opts.smi_path) {
> > +                    ifctx = aml_if(aml_lgreater(num_added_cpus, zero));
> > +                    {
> > +                        aml_append(ifctx, aml_store(aml_int(OVMF_CPUHP_SMI_CMD),
> > +                            aml_name("%s", opts.smi_path)));
> > +                    }
> > +                    aml_append(while_ctx2, ifctx);
> > +                }
> > +
> > +                /* Notify OSPM about new CPUs and clear insert events */
> > +                aml_append(while_ctx2, aml_store(zero, cpu_idx));
> > +                while_ctx = aml_while(aml_lless(cpu_idx, num_added_cpus));
> > +                {
> > +                    aml_append(while_ctx,
> > +                        aml_store(aml_derefof(aml_index(new_cpus, cpu_idx)),
> > +                                  uid));
> > +                    aml_append(while_ctx,
> > +                        aml_call2(CPU_NOTIFY_METHOD, uid, dev_chk));
> > +                    aml_append(while_ctx, aml_store(uid, aml_debug()));
> > +                    aml_append(while_ctx, aml_store(uid, cpu_selector));
> > +                    aml_append(while_ctx, aml_store(one, ins_evt));
> > +                    aml_append(while_ctx, aml_increment(cpu_idx));
> > +                }
> > +                aml_append(while_ctx2, while_ctx);
> >              }
> > -            aml_append(method, while_ctx);
> > +            aml_append(method, while_ctx2);
> >              aml_append(method, aml_release(ctrl_lock));
> >          }
> >          aml_append(cpus_dev, method);
> >  
> 
> Here's the ASL decompiled, and using the meaningful local variable names
> (and 8 possible VCPUs):
> 
>    1              Method (CSCN, 0, Serialized)
>    2              {
>    3                  Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
>    4                  Name (CNEW, Package (0xFF){})
>    5                  Local_Uid = Zero
>    6                  Local_HasJob = One
>    7                  While ((Local_HasJob == One))
>    8                  {
>    9                      Local_HasJob = Zero
>   10                      Local_HasEvent = One
>   11                      Local_NumAddedCpus = Zero
>   12                      While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
>   13                      {
>   14                          Local_HasEvent = Zero
>   15                          \_SB.PCI0.PRES.CSEL = Local_Uid
>   16                          \_SB.PCI0.PRES.CCMD = Zero
>   17                          If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
>   18                          {
>   19                              Break
>   20                          }
>   21
>   22                          If ((Local_NumAddedCpus == 0xFF))
>   23                          {
>   24                              Local_HasJob = One
>   25                              Break
>   26                          }
>   27
>   28                          Local_Uid = \_SB.PCI0.PRES.CDAT
>   29                          If ((\_SB.PCI0.PRES.CINS == One))
>   30                          {
>   31                              CNEW [Local_NumAddedCpus] = Local_Uid
>   32                              Local_NumAddedCpus++
>   33                              Local_HasEvent = One
>   34                          }
>   35                          ElseIf ((\_SB.PCI0.PRES.CRMV == One))
>   36                          {
>   37                              CTFY (Local_Uid, 0x03)
>   38                              \_SB.PCI0.PRES.CRMV = One
>   39                              Local_HasEvent = One
>   40                          }
>   41
>   42                          Local_Uid++
>   43                      }
>   44
>   45                      If ((Local_NumAddedCpus > Zero))
>   46                      {
>   47                          \_SB.PCI0.SMI0.SMIC = 0x04
>   48                      }
>   49
>   50                      Local_CpuIdx = Zero
>   51                      While ((Local_CpuIdx < Local_NumAddedCpus))
>   52                      {
>   53                          Local_Uid = DerefOf (CNEW [Local_CpuIdx])
>   54                          CTFY (Local_Uid, One)
>   55                          Debug = Local_Uid
>   56                          \_SB.PCI0.PRES.CSEL = Local_Uid
>   57                          \_SB.PCI0.PRES.CINS = One
>   58                          Local_CpuIdx++
>   59                      }
>   60                  }
>   61
>   62                  Release (\_SB.PCI0.PRES.CPLK)
>   63              }
> 
> When we take the Break on line 25, then:
> 
> (a) on line 25, the following equality holds:
> 
>   Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1
> 
> (b) on line 60, the following equality holds:
> 
>   Local_Uid == CNEW[Local_NumAddedCpus - 1]
> 
> This means that, when we write Local_Uid to CSEL on line 15 again, then:
> 
> - we effectively re-investigate the last-cached CPU (with selector value
>   CNEW[Local_NumAddedCpus - 1])
> 
> - rather than resuming the scanning right after it -- that is, with
>   selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
>   line 42.
> 
> My question is: is this "rewind" intentional?
> 
> Now, I don't see a functionality problem with this, as on line 57, we
> clear the pending insert event for the last-cached CPU, so when we
> re-check it, the "get pending" command will simply seek past it.
> 
> But I'd like to understand if this is *precisely* your intent, or if
> it's an oversight and it just ends up working OK.
it's the latter (it should not have any adverse effects), so I didn't care
much about restarting from the last processed CPU.

how about moving

  22                          If ((Local_NumAddedCpus == 0xFF))
  23                          {
  24                              Local_HasJob = One
  25                              Break
  26                          }

right after
  40                          }
  41
  42                          Local_Uid++

instead of adding an extra 'if' at the end of the outer loop?
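For what it's worth, the harmlessness of the rewind can be checked with a small model of the scan loop (hypothetical Python, not QEMU code; the names mirror the ASL above, and the batch cap of 3 CPUs is made up to force two passes of the outer loop):

```python
# Toy model of the CSCN scan: CNEW caches pending-insert CPUs in batches,
# the notify loop clears their insert events and leaves `uid` at the
# *last* batched CPU, so the next pass rewinds to it (harmlessly).
BATCH_LIMIT = 3                       # stands in for the 0xFF package cap
MAX_UID = 8                           # number of possible CPUs
insert_pending = {u: True for u in range(5)}  # CPUs 0..4 were hotplugged

notified = []
uid = 0
has_job = True
while has_job:                        # outer loop (while_ctx2)
    has_job = False
    has_event = True
    cnew = []
    while has_event and uid < MAX_UID:    # inner scan loop
        has_event = False
        # "get pending" command: seek to first event at or after `uid`
        pend = [u for u in range(uid, MAX_UID) if insert_pending.get(u)]
        if not pend:
            break                     # nothing pending -> CDAT < uid case
        if len(cnew) == BATCH_LIMIT:
            has_job = True            # batch full, need another pass
            break
        uid = pend[0]
        cnew.append(uid)
        has_event = True
        uid += 1
    # notify loop: Notify OSPM and clear insert events; note that `uid`
    # ends up at cnew[-1], i.e. the rewind discussed above
    for u in cnew:
        uid = u
        notified.append(u)
        insert_pending[u] = False

print(notified)                       # every CPU notified exactly once
```

Because the insert event of the re-selected CPU was already cleared in the notify loop, the second pass's "get pending" seeks past it, which is exactly the behaviour argued above.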

> 
> Basically my question is whether we should add, on top:
> 
> > --- cscn.v5     2020-09-07 15:02:13.401663487 +0200
> > +++ cscn.v5.incr        2020-09-07 16:52:22.133843710 +0200
> > @@ -57,6 +57,10 @@
> >                          \_SB.PCI0.PRES.CINS = One
> >                          Local_CpuIdx++
> >                      }
> > +
> > +                    if ((Local_HasJob == One)) {
> > +                        Local_Uid++
> > +                    }
> >                  }
> >
> >                  Release (\_SB.PCI0.PRES.CPLK)  
> 
> If not -- that is, the currently proposed patch is intentional --, then
> I think we should add a comment, about the effective "rewind" being
> intentional & okay.
> 
> (Note: it's certainly valid and necessary to re-write CSEL on line 15
> after raising the SMI on line 47; the question is not whether we should
> or should not re-write CSEL (we must!), but the specific value that we
> write to CSEL.)
> 
> So:
> 
> - If the outer loop body is entered only once, then the patch looks
>   fine, from both the review side, and the testing side (I tested it
>   with 4-8 possible VCPUs).
> 
> - If the outer loop body is entered twice or more, then from the review
>   side, please see my question above. From the testing side: do you have an
>   environment where I could test this with OVMF?
> 
> (I expect it to work OK. Upon the first SMI, the firmware will likely
> pick up more VCPUs than what's in CNEW. But edk2 commit 020bb4b46d6f
> ("OvmfPkg/CpuHotplugSmm: fix CPU hotplug race just before SMI
> broadcast", 2020-08-27) should deal with that.)
it should work, as the firmware doesn't have to jump through hoops to
accommodate Windows quirks.

> Hmm, actually, there's no need for a special environment: I can patch
> QEMU and lower "max_cpus_per_pass" to something small, such as "3" or
> whatever, for testing the outer loop multiple times. But first I'd like
> to know your thoughts on the "rewind".

modulo OVMF, I've tested both:
 - hacking max_cpus_per_pass to simulate multiple passes of the outer loop
 - with maxcpus=288 to cross the 255 threshold (2 passes of the outer loop), to check
   that WS2019 is still working.
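As a back-of-the-envelope check (hypothetical helper, not QEMU code), the number of outer-loop passes needed follows from the 255-element batch cap, assuming all possible CPUs have pending insert events:

```python
import math

def outer_passes(possible_cpus, batch_limit=255):
    """Passes of the outer CSCN loop needed to drain all insert events."""
    return max(1, math.ceil(possible_cpus / batch_limit))

print(outer_passes(288))   # maxcpus=288 crosses the cap -> 2 passes
print(outer_passes(8))     # small guests finish in a single pass
```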

 
> Thanks!
> Laszlo
> 
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-08  7:39     ` Igor Mammedov
@ 2020-09-08  9:35       ` Laszlo Ersek
  2020-09-08 11:00         ` Laszlo Ersek
  0 siblings, 1 reply; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-08  9:35 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: boris.ostrovsky, qemu-devel, aaron.young

On 09/08/20 09:39, Igor Mammedov wrote:
> On Mon, 7 Sep 2020 17:17:52 +0200
> Laszlo Ersek <lersek@redhat.com> wrote:

>>    1              Method (CSCN, 0, Serialized)
>>    2              {
>>    3                  Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
>>    4                  Name (CNEW, Package (0xFF){})
>>    5                  Local_Uid = Zero
>>    6                  Local_HasJob = One
>>    7                  While ((Local_HasJob == One))
>>    8                  {
>>    9                      Local_HasJob = Zero
>>   10                      Local_HasEvent = One
>>   11                      Local_NumAddedCpus = Zero
>>   12                      While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
>>   13                      {
>>   14                          Local_HasEvent = Zero
>>   15                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>   16                          \_SB.PCI0.PRES.CCMD = Zero
>>   17                          If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
>>   18                          {
>>   19                              Break
>>   20                          }
>>   21
>>   22                          If ((Local_NumAddedCpus == 0xFF))
>>   23                          {
>>   24                              Local_HasJob = One
>>   25                              Break
>>   26                          }
>>   27
>>   28                          Local_Uid = \_SB.PCI0.PRES.CDAT
>>   29                          If ((\_SB.PCI0.PRES.CINS == One))
>>   30                          {
>>   31                              CNEW [Local_NumAddedCpus] = Local_Uid
>>   32                              Local_NumAddedCpus++
>>   33                              Local_HasEvent = One
>>   34                          }
>>   35                          ElseIf ((\_SB.PCI0.PRES.CRMV == One))
>>   36                          {
>>   37                              CTFY (Local_Uid, 0x03)
>>   38                              \_SB.PCI0.PRES.CRMV = One
>>   39                              Local_HasEvent = One
>>   40                          }
>>   41
>>   42                          Local_Uid++
>>   43                      }
>>   44
>>   45                      If ((Local_NumAddedCpus > Zero))
>>   46                      {
>>   47                          \_SB.PCI0.SMI0.SMIC = 0x04
>>   48                      }
>>   49
>>   50                      Local_CpuIdx = Zero
>>   51                      While ((Local_CpuIdx < Local_NumAddedCpus))
>>   52                      {
>>   53                          Local_Uid = DerefOf (CNEW [Local_CpuIdx])
>>   54                          CTFY (Local_Uid, One)
>>   55                          Debug = Local_Uid
>>   56                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>   57                          \_SB.PCI0.PRES.CINS = One
>>   58                          Local_CpuIdx++
>>   59                      }
>>   60                  }
>>   61
>>   62                  Release (\_SB.PCI0.PRES.CPLK)
>>   63              }
>>
>> When we take the Break on line 25, then:
>>
>> (a) on line 25, the following equality holds:
>>
>>   Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1
>>
>> (b) on line 60, the following equality holds:
>>
>>   Local_Uid == CNEW[Local_NumAddedCpus - 1]
>>
>> This means that, when we write Local_Uid to CSEL on line 15 again, then:
>>
>> - we effectively re-investigate the last-cached CPU (with selector value
>>   CNEW[Local_NumAddedCpus - 1])
>>
>> - rather than resuming the scanning right after it -- that is, with
>>   selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
>>   line 42.
>>
>> My question is: is this "rewind" intentional?
>>
>> Now, I don't see a functionality problem with this, as on line 57, we
>> clear the pending insert event for the last-cached CPU, so when we
>> re-check it, the "get pending" command will simply seek past it.
>>
>> But I'd like to understand if this is *precisely* your intent, or if
>> it's an oversight and it just ends up working OK.
> it's the latter (it should not have any adverse effects), so I didn't care
> much about restarting from the last processed CPU.
>
> how about moving
>
>   22                          If ((Local_NumAddedCpus == 0xFF))
>   23                          {
>   24                              Local_HasJob = One
>   25                              Break
>   26                          }
>
> right after
>   40                          }
>   41
>   42                          Local_Uid++
>
> instead of adding an extra 'if' at the end of the outer loop?

That only seems to save a CSEL write on line 15, during the first
iteration of the outer loop. And we would still re-select the last
selector from CNEW, in the second iteration of the outer loop.

But, again, there's no bug; I just wanted to understand your intent.

Can you please squash the following patch:

> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 12839720018e..8dd4d8ebbf55 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>                      aml_append(while_ctx, aml_increment(cpu_idx));
>                  }
>                  aml_append(while_ctx2, while_ctx);
> +                /*
> +                 * If another batch is needed, then it will resume scanning
> +                 * exactly at -- and not after -- the last CPU that's currently
> +                 * in CPU_ADDED_LIST. In other words, the last CPU in
> +                 * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
> +                 * just cleared the insert event for *all* CPUs in
> +                 * CPU_ADDED_LIST, including the last one. So the scan will
> +                 * simply seek past it.
> +                 */
>              }
>              aml_append(method, while_ctx2);
>              aml_append(method, aml_release(ctrl_lock));

With that:

Reviewed-by: Laszlo Ersek <lersek@redhat.com>

I'll also follow up with test results for this patch (meaning a lowered
"max_cpus_per_pass").

Thanks!
Laszlo




* Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-08  9:35       ` Laszlo Ersek
@ 2020-09-08 11:00         ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-08 11:00 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: boris.ostrovsky, qemu-devel, aaron.young

On 09/08/20 11:35, Laszlo Ersek wrote:
> On 09/08/20 09:39, Igor Mammedov wrote:
>> On Mon, 7 Sep 2020 17:17:52 +0200
>> Laszlo Ersek <lersek@redhat.com> wrote:
> 
>>>    1              Method (CSCN, 0, Serialized)
>>>    2              {
>>>    3                  Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
>>>    4                  Name (CNEW, Package (0xFF){})
>>>    5                  Local_Uid = Zero
>>>    6                  Local_HasJob = One
>>>    7                  While ((Local_HasJob == One))
>>>    8                  {
>>>    9                      Local_HasJob = Zero
>>>   10                      Local_HasEvent = One
>>>   11                      Local_NumAddedCpus = Zero
>>>   12                      While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
>>>   13                      {
>>>   14                          Local_HasEvent = Zero
>>>   15                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>>   16                          \_SB.PCI0.PRES.CCMD = Zero
>>>   17                          If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
>>>   18                          {
>>>   19                              Break
>>>   20                          }
>>>   21
>>>   22                          If ((Local_NumAddedCpus == 0xFF))
>>>   23                          {
>>>   24                              Local_HasJob = One
>>>   25                              Break
>>>   26                          }
>>>   27
>>>   28                          Local_Uid = \_SB.PCI0.PRES.CDAT
>>>   29                          If ((\_SB.PCI0.PRES.CINS == One))
>>>   30                          {
>>>   31                              CNEW [Local_NumAddedCpus] = Local_Uid
>>>   32                              Local_NumAddedCpus++
>>>   33                              Local_HasEvent = One
>>>   34                          }
>>>   35                          ElseIf ((\_SB.PCI0.PRES.CRMV == One))
>>>   36                          {
>>>   37                              CTFY (Local_Uid, 0x03)
>>>   38                              \_SB.PCI0.PRES.CRMV = One
>>>   39                              Local_HasEvent = One
>>>   40                          }
>>>   41
>>>   42                          Local_Uid++
>>>   43                      }
>>>   44
>>>   45                      If ((Local_NumAddedCpus > Zero))
>>>   46                      {
>>>   47                          \_SB.PCI0.SMI0.SMIC = 0x04
>>>   48                      }
>>>   49
>>>   50                      Local_CpuIdx = Zero
>>>   51                      While ((Local_CpuIdx < Local_NumAddedCpus))
>>>   52                      {
>>>   53                          Local_Uid = DerefOf (CNEW [Local_CpuIdx])
>>>   54                          CTFY (Local_Uid, One)
>>>   55                          Debug = Local_Uid
>>>   56                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>>   57                          \_SB.PCI0.PRES.CINS = One
>>>   58                          Local_CpuIdx++
>>>   59                      }
>>>   60                  }
>>>   61
>>>   62                  Release (\_SB.PCI0.PRES.CPLK)
>>>   63              }
>>>
>>> When we take the Break on line 25, then:
>>>
>>> (a) on line 25, the following equality holds:
>>>
>>>   Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1
>>>
>>> (b) on line 60, the following equality holds:
>>>
>>>   Local_Uid == CNEW[Local_NumAddedCpus - 1]
>>>
>>> This means that, when we write Local_Uid to CSEL on line 15 again, then:
>>>
>>> - we effectively re-investigate the last-cached CPU (with selector value
>>>   CNEW[Local_NumAddedCpus - 1])
>>>
>>> - rather than resuming the scanning right after it -- that is, with
>>>   selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
>>>   line 42.
>>>
>>> My question is: is this "rewind" intentional?
>>>
>>> Now, I don't see a functionality problem with this, as on line 57, we
>>> clear the pending insert event for the last-cached CPU, so when we
>>> re-check it, the "get pending" command will simply seek past it.
>>>
>>> But I'd like to understand if this is *precisely* your intent, or if
>>> it's an oversight and it just ends up working OK.
>> it's the latter (it should not have any adverse effects), so I didn't care
>> much about restarting from the last processed CPU.
>>
>> how about moving
>>
>>   22                          If ((Local_NumAddedCpus == 0xFF))
>>   23                          {
>>   24                              Local_HasJob = One
>>   25                              Break
>>   26                          }
>>
>> right after
>>   40                          }
>>   41
>>   42                          Local_Uid++
>>
>> instead of adding an extra 'if' at the end of the outer loop?
> 
> That only seems to save a CSEL write on line 15, during the first
> iteration of the outer loop. And we would still re-select the last
> selector from CNEW, in the second iteration of the outer loop.
> 
> But, again, there's no bug; I just wanted to understand your intent.
> 
> Can you please squash the following patch:
> 
>> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
>> index 12839720018e..8dd4d8ebbf55 100644
>> --- a/hw/acpi/cpu.c
>> +++ b/hw/acpi/cpu.c
>> @@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>>                      aml_append(while_ctx, aml_increment(cpu_idx));
>>                  }
>>                  aml_append(while_ctx2, while_ctx);
>> +                /*
>> +                 * If another batch is needed, then it will resume scanning
>> +                 * exactly at -- and not after -- the last CPU that's currently
>> +                 * in CPU_ADDED_LIST. In other words, the last CPU in
>> +                 * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
>> +                 * just cleared the insert event for *all* CPUs in
>> +                 * CPU_ADDED_LIST, including the last one. So the scan will
>> +                 * simply seek past it.
>> +                 */
>>              }
>>              aml_append(method, while_ctx2);
>>              aml_append(method, aml_release(ctrl_lock));
> 
> With that:
> 
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> 
> I'll also follow up with test results for this patch (meaning a lowered
> "max_cpus_per_pass").

Tested-by: Laszlo Ersek <lersek@redhat.com>




* [PATCH v5 9/10] fixup! x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
  2020-09-07 15:17   ` Laszlo Ersek
@ 2020-09-08 14:24   ` Igor Mammedov
  2020-09-08 18:09     ` Laszlo Ersek
  1 sibling, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-08 14:24 UTC (permalink / raw)
  To: qemu-devel; +Cc: lersek, mst

Add a comment explaining why the while_ctx2 loop restarts from the last processed CPU.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
---
 hw/acpi/cpu.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 1283972001..8dd4d8ebbf 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
                     aml_append(while_ctx, aml_increment(cpu_idx));
                 }
                 aml_append(while_ctx2, while_ctx);
+                /*
+                 * If another batch is needed, then it will resume scanning
+                 * exactly at -- and not after -- the last CPU that's currently
+                 * in CPU_ADDED_LIST. In other words, the last CPU in
+                 * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
+                 * just cleared the insert event for *all* CPUs in
+                 * CPU_ADDED_LIST, including the last one. So the scan will
+                 * simply seek past it.
+                 */
             }
             aml_append(method, while_ctx2);
             aml_append(method, aml_release(ctrl_lock));
-- 
2.27.0




* Re: [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (9 preceding siblings ...)
  2020-09-07 11:23 ` [PATCH v5 10/10] tests: acpi: update acpi blobs with new AML Igor Mammedov
@ 2020-09-08 14:29 ` Igor Mammedov
  2020-09-08 18:08   ` Laszlo Ersek
  2020-09-21 11:46 ` Igor Mammedov
  11 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-08 14:29 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young, mst

On Mon,  7 Sep 2020 07:23:38 -0400
Igor Mammedov <imammedo@redhat.com> wrote:

> v5:
>   - fix hotplug on Windows when there is more than 256 possible CPUs
>     (Windows isn't able to handle VarPackage over 255 elements
>      so process CPUs in batches)
>   - fix off-by-one in package length (Laszlo)
>   - fix not selecting CPU before clearing insert event (Laszlo)
>   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero) (Laszlo)
>   - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
>     in smaller chunks (Laszlo)
>   - fix comment to match spec (Laszlo)
>   - reorder aml_lor() and aml_land() in header (Laszlo)
> v4:
>   - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
> v3:
>   - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
>     so apply that before this patch
> v2:
>   - AML: clean is_inserted flag only after SMI callback
>   - make x-smi-cpu-hotunplug false by default
>   - massage error hint on not supported unplug
> v1:
>   - fix typos and some phrases (Laszlo)
>   - add unplug check (Laszlo)
>   - redo AML scan logic to avoid race when adding multiple CPUs
> 
> CPU hotplug with Secure Boot was not really supported and the firmware wasn't aware
> of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
> a locked SMI handler RAM area to make sure that the guest OS wasn't able to inject
> its own SMI handler, and OVMF added initial CPU hotplug support.
> 
> This series is the QEMU part of that support, which lets OVMF tell QEMU that
> CPU hotplug with SMI broadcast enabled is supported, so that QEMU would be able
> to prevent hotplug in case it's not supported and trigger an SMI on hotplug when
> it's necessary.

Michael,

Can you merge this along with Laszlo's
  [PATCH 00/10] edk2: adopt the edk2-stable202008 release
via the PCI tree, preferably in the same pull request?


> 
> Igor Mammedov (10):
>   x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
>   x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
>     in use
>   x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
>   acpi: add aml_land() and aml_break() primitives
>   tests: acpi: mark to be changed tables in
>     bios-tables-test-allowed-diff
>   x86: ich9: expose "smi_negotiated_features" as a QOM property
>   x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
>   x86: acpi: introduce the PCI0.SMI0 ACPI device
>   x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
>   tests: acpi: update acpi blobs with new AML
> 
>  include/hw/acpi/aml-build.h       |   2 +
>  include/hw/acpi/cpu.h             |   1 +
>  include/hw/i386/ich9.h            |   4 +
>  hw/acpi/aml-build.c               |  16 +++
>  hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
>  hw/acpi/ich9.c                    |  24 ++++-
>  hw/i386/acpi-build.c              |  35 ++++++-
>  hw/i386/pc.c                      |  15 ++-
>  hw/isa/lpc_ich9.c                 |  16 +++
>  tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
>  tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
>  tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
>  tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
>  tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
>  tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
>  tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
>  tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
>  tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
>  tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
>  tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
>  tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
>  tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
>  tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
>  tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
>  tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
>  tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
>  tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
>  27 files changed, 239 insertions(+), 30 deletions(-)
> 




* Re: [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
  2020-09-08 14:29 ` [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
@ 2020-09-08 18:08   ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-08 18:08 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel
  Cc: boris.ostrovsky, Philippe Mathieu-Daudé, aaron.young, mst

On 09/08/20 16:29, Igor Mammedov wrote:
> On Mon,  7 Sep 2020 07:23:38 -0400
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
>> v5:
>>   - fix hotplug on Windows when there is more than 256 possible CPUs
>>     (Windows isn't able to handle VarPackage over 255 elements
>>      so process CPUs in batches)
>>   - fix off-by-one in package length (Laszlo)
>>   - fix not selecting CPU before clearing insert event (Laszlo)
>>   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero) (Laszlo)
>>   - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
>>     in smaller chunks (Laszlo)
>>   - fix comment to match spec (Laszlo)
>>   - reorder aml_lor() and aml_land() in header (Laszlo)
>> v4:
>>   - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
>> v3:
>>   - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
>>     so apply that before this patch
>> v2:
>>   - AML: clean is_inserted flag only after SMI callback
>>   - make x-smi-cpu-hotunplug false by default
>>   - massage error hint on not supported unplug
>> v1:
>>   - fix typos and some phrases (Laszlo)
>>   - add unplug check (Laszlo)
>>   - redo AML scan logic to avoid race when adding multiple CPUs
>>
>> CPU hotplug with Secure Boot was not really supported and the firmware wasn't aware
>> of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
>> a locked SMI handler RAM area to make sure that the guest OS wasn't able to inject
>> its own SMI handler, and OVMF added initial CPU hotplug support.
>>
>> This series is the QEMU part of that support, which lets OVMF tell QEMU that
>> CPU hotplug with SMI broadcast enabled is supported, so that QEMU would be able
>> to prevent hotplug in case it's not supported and trigger an SMI on hotplug when
>> it's necessary.
> 
> Michael,
> 
> Can you merge this along with Laszlo's
>   [PATCH 00/10] edk2: adopt the edk2-stable202008 release
> via the PCI tree, preferably in the same pull request?

That would be very kind, thank you -- however, please give some time to
Phil for reviewing the (rest of the) edk2 update series.

Cheers!
Laszlo

> 
> 
>>
>> Igor Mammedov (10):
>>   x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
>>   x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
>>     in use
>>   x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
>>   acpi: add aml_land() and aml_break() primitives
>>   tests: acpi: mark to be changed tables in
>>     bios-tables-test-allowed-diff
>>   x86: ich9: expose "smi_negotiated_features" as a QOM property
>>   x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
>>   x86: acpi: introduce the PCI0.SMI0 ACPI device
>>   x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
>>   tests: acpi: update acpi blobs with new AML
>>
>>  include/hw/acpi/aml-build.h       |   2 +
>>  include/hw/acpi/cpu.h             |   1 +
>>  include/hw/i386/ich9.h            |   4 +
>>  hw/acpi/aml-build.c               |  16 +++
>>  hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
>>  hw/acpi/ich9.c                    |  24 ++++-
>>  hw/i386/acpi-build.c              |  35 ++++++-
>>  hw/i386/pc.c                      |  15 ++-
>>  hw/isa/lpc_ich9.c                 |  16 +++
>>  tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
>>  tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
>>  tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
>>  tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
>>  tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
>>  tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
>>  tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
>>  tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
>>  tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
>>  tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
>>  tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
>>  tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
>>  tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
>>  tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
>>  tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
>>  tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
>>  tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
>>  tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
>>  27 files changed, 239 insertions(+), 30 deletions(-)
>>
> 




* Re: [PATCH v5 9/10] fixup! x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
  2020-09-08 14:24   ` [PATCH v5 9/10] fixup! " Igor Mammedov
@ 2020-09-08 18:09     ` Laszlo Ersek
  0 siblings, 0 replies; 25+ messages in thread
From: Laszlo Ersek @ 2020-09-08 18:09 UTC (permalink / raw)
  To: Igor Mammedov, qemu-devel; +Cc: mst

On 09/08/20 16:24, Igor Mammedov wrote:
> Add a comment explaining why while_ctx2 restarts from the last processed CPU.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> Tested-by: Laszlo Ersek <lersek@redhat.com>
> ---
>  hw/acpi/cpu.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 1283972001..8dd4d8ebbf 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>                      aml_append(while_ctx, aml_increment(cpu_idx));
>                  }
>                  aml_append(while_ctx2, while_ctx);
> +                /*
> +                 * If another batch is needed, then it will resume scanning
> +                 * exactly at -- and not after -- the last CPU that's currently
> +                 * in CPU_ADDED_LIST. In other words, the last CPU in
> +                 * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
> +                 * just cleared the insert event for *all* CPUs in
> +                 * CPU_ADDED_LIST, including the last one. So the scan will
> +                 * simply seek past it.
> +                 */
>              }
>              aml_append(method, while_ctx2);
>              aml_append(method, aml_release(ctrl_lock));
> 

Thank you!
Laszlo
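
The batched rescan behavior that the quoted comment describes can be sketched in plain C (a hypothetical mirror of the AML logic only; BATCH_MAX, insert_event[] and notified[] are illustrative names, not QEMU state -- Windows cannot evaluate a VarPackage with more than 255 elements, hence the batching):

```c
#include <stdbool.h>

#define BATCH_MAX 255   /* Windows VarPackage element limit */
#define NUM_CPUS  600

bool insert_event[NUM_CPUS]; /* per-CPU "insert" event, set on hotplug */
int  notified[NUM_CPUS];     /* how many Notify events each CPU got */

void scan_cpus(void)
{
    int idx = 0;

    while (idx < NUM_CPUS) {
        int batch[BATCH_MAX];
        int batch_len = 0;
        int i;

        /* Collect up to BATCH_MAX CPUs with a pending insert event. */
        while (idx < NUM_CPUS && batch_len < BATCH_MAX) {
            if (insert_event[idx]) {
                batch[batch_len++] = idx;
            }
            idx++;
        }

        /* Clear the insert event for *all* CPUs in the batch, then
         * notify the OS about each of them. */
        for (i = 0; i < batch_len; i++) {
            insert_event[batch[i]] = false;
            notified[batch[i]]++;
        }

        /* If another batch is needed, resume exactly at -- not after --
         * the last CPU of the batch; re-checking it is harmless because
         * its insert event was just cleared. */
        if (batch_len == BATCH_MAX) {
            idx = batch[batch_len - 1];
        }
    }
}
```

A caller would set insert_event[] for hotplugged CPUs and invoke scan_cpus(); each CPU is notified exactly once even though full batches restart the scan at their last element.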



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
  2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
                   ` (10 preceding siblings ...)
  2020-09-08 14:29 ` [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
@ 2020-09-21 11:46 ` Igor Mammedov
  2020-09-21 12:34   ` Michael S. Tsirkin
  11 siblings, 1 reply; 25+ messages in thread
From: Igor Mammedov @ 2020-09-21 11:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: boris.ostrovsky, lersek, aaron.young, mst

On Mon,  7 Sep 2020 07:23:38 -0400
Igor Mammedov <imammedo@redhat.com> wrote:

> v5:
>   - fix hotplug on Windows when there are more than 256 possible CPUs
>     (Windows isn't able to handle a VarPackage with over 255 elements,
>      so process CPUs in batches)
>   - fix off-by-one in package length (Laszlo)
>   - fix not selecting CPU before clearing insert event (Laszlo)
>   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero)) (Laszlo)
>   - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
>     into smaller chunks (Laszlo)
>   - fix comment to match spec (Laszlo)
>   - reorder aml_lor() and aml_land() in header (Laszlo)
> v4:
>   - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
> v3:
>   - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
>     so apply that before this patch
> v2:
>   - AML: clean is_inserted flag only after SMI callback
>   - make x-smi-cpu-hotunplug false by default
>   - massage error hint on not supported unplug
> v1:
>   - fix typos and some phrases (Laszlo)
>   - add unplug check (Laszlo)
>   - redo AML scan logic to avoid race when adding multiple CPUs

Michael,

I just saw your pull request, which missed this series.
Are there any plans to queue it for the next pull request?



> CPU hotplug with Secure Boot was not really supported and firmware wasn't aware
> of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
> a locked SMI handler RAM area to make sure that the guest OS wasn't able to
> inject its own SMI handler, and OVMF added initial CPU hotplug support.
> 
> This series is the QEMU part of that support: it lets OVMF tell QEMU that
> CPU hotplug with SMI broadcast enabled is supported, so that QEMU can
> prevent hotplug in case it's not supported and trigger an SMI on hotplug
> when necessary.
> 
> Igor Mammedov (10):
>   x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
>   x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
>     in use
>   x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
>   acpi: add aml_land() and aml_break() primitives
>   tests: acpi: mark to be changed tables in
>     bios-tables-test-allowed-diff
>   x86: ich9: expose "smi_negotiated_features" as a QOM property
>   x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
>   x86: acpi: introduce the PCI0.SMI0 ACPI device
>   x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
>   tests: acpi: update acpi blobs with new AML
> 
>  include/hw/acpi/aml-build.h       |   2 +
>  include/hw/acpi/cpu.h             |   1 +
>  include/hw/i386/ich9.h            |   4 +
>  hw/acpi/aml-build.c               |  16 +++
>  hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
>  hw/acpi/ich9.c                    |  24 ++++-
>  hw/i386/acpi-build.c              |  35 ++++++-
>  hw/i386/pc.c                      |  15 ++-
>  hw/isa/lpc_ich9.c                 |  16 +++
>  tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
>  tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
>  tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
>  tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
>  tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
>  tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
>  tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
>  tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
>  tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
>  tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
>  tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
>  tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
>  tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
>  tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
>  tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
>  tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
>  tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
>  tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
>  27 files changed, 239 insertions(+), 30 deletions(-)
> 
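
The negotiation flow described in the quoted cover letter can be sketched as follows (a hypothetical illustration only; the bit values and function names are illustrative, not QEMU's actual ICH9 SMI feature-negotiation interface):

```c
#include <stdbool.h>

#define SMI_F_BROADCAST   (1u << 0)
#define SMI_F_CPU_HOTPLUG (1u << 1)

static unsigned negotiated_features; /* written once by firmware, then locked */
static bool smi_pending;

/* Firmware (OVMF) negotiates its supported features once at boot. */
void firmware_negotiate(unsigned features)
{
    negotiated_features = features;
}

/* QEMU side: refuse the hotplug request when broadcast SMI is in use
 * but firmware did not opt in to 'CPU hotplug with SMI'; otherwise
 * raise an SMI so firmware can handle the new CPU first. */
bool cpu_hotplug_request(void)
{
    if ((negotiated_features & SMI_F_BROADCAST) &&
        !(negotiated_features & SMI_F_CPU_HOTPLUG)) {
        return false; /* hotplug could crash the guest: reject it */
    }
    if (negotiated_features & SMI_F_CPU_HOTPLUG) {
        smi_pending = true; /* firmware will initialize the new CPU in SMM */
    }
    return true;
}
```

With only SMI_F_BROADCAST negotiated the request is rejected; once SMI_F_CPU_HOTPLUG is also negotiated, hotplug succeeds and an SMI is queued.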



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
  2020-09-21 11:46 ` Igor Mammedov
@ 2020-09-21 12:34   ` Michael S. Tsirkin
  2020-09-23  9:48     ` Igor Mammedov
  0 siblings, 1 reply; 25+ messages in thread
From: Michael S. Tsirkin @ 2020-09-21 12:34 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: boris.ostrovsky, lersek, qemu-devel, aaron.young

On Mon, Sep 21, 2020 at 01:46:01PM +0200, Igor Mammedov wrote:
> On Mon,  7 Sep 2020 07:23:38 -0400
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
> > v5:
> >   - fix hotplug on Windows when there are more than 256 possible CPUs
> >     (Windows isn't able to handle a VarPackage with over 255 elements,
> >      so process CPUs in batches)
> >   - fix off-by-one in package length (Laszlo)
> >   - fix not selecting CPU before clearing insert event (Laszlo)
> >   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero)) (Laszlo)
> >   - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
> >     into smaller chunks (Laszlo)
> >   - fix comment to match spec (Laszlo)
> >   - reorder aml_lor() and aml_land() in header (Laszlo)
> > v4:
> >   - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
> > v3:
> >   - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
> >     so apply that before this patch
> > v2:
> >   - AML: clean is_inserted flag only after SMI callback
> >   - make x-smi-cpu-hotunplug false by default
> >   - massage error hint on not supported unplug
> > v1:
> >   - fix typos and some phrases (Laszlo)
> >   - add unplug check (Laszlo)
> >   - redo AML scan logic to avoid race when adding multiple CPUs
> 
> Michael,
> 
> I just saw your pull request, which missed this series.
> Are there any plans to queue it for the next pull request?

Oh.
You didn't Cc me on most patches, so I assumed this was targeting some other tree.
Sorry.
Will review and queue, thanks.

> 
> 
> > CPU hotplug with Secure Boot was not really supported and firmware wasn't aware
> > of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
> > a locked SMI handler RAM area to make sure that the guest OS wasn't able to
> > inject its own SMI handler, and OVMF added initial CPU hotplug support.
> > 
> > This series is the QEMU part of that support: it lets OVMF tell QEMU that
> > CPU hotplug with SMI broadcast enabled is supported, so that QEMU can
> > prevent hotplug in case it's not supported and trigger an SMI on hotplug
> > when necessary.
> > 
> > Igor Mammedov (10):
> >   x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
> >   x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
> >     in use
> >   x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
> >   acpi: add aml_land() and aml_break() primitives
> >   tests: acpi: mark to be changed tables in
> >     bios-tables-test-allowed-diff
> >   x86: ich9: expose "smi_negotiated_features" as a QOM property
> >   x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
> >   x86: acpi: introduce the PCI0.SMI0 ACPI device
> >   x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
> >   tests: acpi: update acpi blobs with new AML
> > 
> >  include/hw/acpi/aml-build.h       |   2 +
> >  include/hw/acpi/cpu.h             |   1 +
> >  include/hw/i386/ich9.h            |   4 +
> >  hw/acpi/aml-build.c               |  16 +++
> >  hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
> >  hw/acpi/ich9.c                    |  24 ++++-
> >  hw/i386/acpi-build.c              |  35 ++++++-
> >  hw/i386/pc.c                      |  15 ++-
> >  hw/isa/lpc_ich9.c                 |  16 +++
> >  tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
> >  tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
> >  tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
> >  tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
> >  tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
> >  tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
> >  tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
> >  tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
> >  tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
> >  tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
> >  tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
> >  tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
> >  tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
> >  tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
> >  tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
> >  tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
> >  tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
> >  tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
> >  27 files changed, 239 insertions(+), 30 deletions(-)
> > 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v5 00/10] x86: fix cpu hotplug with secure boot
  2020-09-21 12:34   ` Michael S. Tsirkin
@ 2020-09-23  9:48     ` Igor Mammedov
  0 siblings, 0 replies; 25+ messages in thread
From: Igor Mammedov @ 2020-09-23  9:48 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: boris.ostrovsky, lersek, qemu-devel, aaron.young

On Mon, 21 Sep 2020 08:34:31 -0400
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Sep 21, 2020 at 01:46:01PM +0200, Igor Mammedov wrote:
> > On Mon,  7 Sep 2020 07:23:38 -0400
> > Igor Mammedov <imammedo@redhat.com> wrote:
> >   
> > > v5:
> > >   - fix hotplug on Windows when there are more than 256 possible CPUs
> > >     (Windows isn't able to handle a VarPackage with over 255 elements,
> > >      so process CPUs in batches)
> > >   - fix off-by-one in package length (Laszlo)
> > >   - fix not selecting CPU before clearing insert event (Laszlo)
> > >   - use aml_lgreater() instead of aml_lnot(aml_equal(num_added_cpus, zero)) (Laszlo)
> > >   - split 'x68: acpi: trigger SMI before sending hotplug Notify event to OSPM'
> > >     into smaller chunks (Laszlo)
> > >   - fix comment to match spec (Laszlo)
> > >   - reorder aml_lor() and aml_land() in header (Laszlo)
> > > v4:
> > >   - fix 5.2 machine types so they won't apply pc_compat_5_1 (Laszlo)
> > > v3:
> > >   - rebase on top of "[PATCH v2] hw: add compat machines for 5.2"
> > >     so apply that before this patch
> > > v2:
> > >   - AML: clean is_inserted flag only after SMI callback
> > >   - make x-smi-cpu-hotunplug false by default
> > >   - massage error hint on not supported unplug
> > > v1:
> > >   - fix typos and some phrases (Laszlo)
> > >   - add unplug check (Laszlo)
> > >   - redo AML scan logic to avoid race when adding multiple CPUs  
> > 
> > Michael,
> > 
> > I just saw your pull request, which missed this series.
> > Are there any plans to queue it for the next pull request?  
> 
> Oh.
> You didn't Cc me on most patches, so I assumed this was targeting some other tree.
> Sorry.
> Will review and queue, thanks.
Thanks,

The series doesn't apply anymore;
I've rebased it, fixing conflicts, and resent it as v6.

> 
> > 
> >   
> > > CPU hotplug with Secure Boot was not really supported and firmware wasn't aware
> > > of hotplugged CPUs (which might lead to guest crashes). During 4.2 we introduced
> > > a locked SMI handler RAM area to make sure that the guest OS wasn't able to
> > > inject its own SMI handler, and OVMF added initial CPU hotplug support.
> > > 
> > > This series is the QEMU part of that support: it lets OVMF tell QEMU that
> > > CPU hotplug with SMI broadcast enabled is supported, so that QEMU can
> > > prevent hotplug in case it's not supported and trigger an SMI on hotplug
> > > when necessary.
> > > 
> > > Igor Mammedov (10):
> > >   x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features
> > >   x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is
> > >     in use
> > >   x86: cpuhp: refuse cpu hot-unplug request earlier if not supported
> > >   acpi: add aml_land() and aml_break() primitives
> > >   tests: acpi: mark to be changed tables in
> > >     bios-tables-test-allowed-diff
> > >   x86: ich9: expose "smi_negotiated_features" as a QOM property
> > >   x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp
> > >   x86: acpi: introduce the PCI0.SMI0 ACPI device
> > >   x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
> > >   tests: acpi: update acpi blobs with new AML
> > > 
> > >  include/hw/acpi/aml-build.h       |   2 +
> > >  include/hw/acpi/cpu.h             |   1 +
> > >  include/hw/i386/ich9.h            |   4 +
> > >  hw/acpi/aml-build.c               |  16 +++
> > >  hw/acpi/cpu.c                     | 156 ++++++++++++++++++++++++------
> > >  hw/acpi/ich9.c                    |  24 ++++-
> > >  hw/i386/acpi-build.c              |  35 ++++++-
> > >  hw/i386/pc.c                      |  15 ++-
> > >  hw/isa/lpc_ich9.c                 |  16 +++
> > >  tests/data/acpi/pc/DSDT           | Bin 4934 -> 5060 bytes
> > >  tests/data/acpi/pc/DSDT.acpihmat  | Bin 6258 -> 6385 bytes
> > >  tests/data/acpi/pc/DSDT.bridge    | Bin 6793 -> 6919 bytes
> > >  tests/data/acpi/pc/DSDT.cphp      | Bin 5397 -> 5524 bytes
> > >  tests/data/acpi/pc/DSDT.dimmpxm   | Bin 6587 -> 6714 bytes
> > >  tests/data/acpi/pc/DSDT.ipmikcs   | Bin 5006 -> 5132 bytes
> > >  tests/data/acpi/pc/DSDT.memhp     | Bin 6293 -> 6419 bytes
> > >  tests/data/acpi/pc/DSDT.numamem   | Bin 4940 -> 5066 bytes
> > >  tests/data/acpi/q35/DSDT          | Bin 7678 -> 7804 bytes
> > >  tests/data/acpi/q35/DSDT.acpihmat | Bin 9002 -> 9129 bytes
> > >  tests/data/acpi/q35/DSDT.bridge   | Bin 7695 -> 7821 bytes
> > >  tests/data/acpi/q35/DSDT.cphp     | Bin 8141 -> 8268 bytes
> > >  tests/data/acpi/q35/DSDT.dimmpxm  | Bin 9331 -> 9458 bytes
> > >  tests/data/acpi/q35/DSDT.ipmibt   | Bin 7753 -> 7879 bytes
> > >  tests/data/acpi/q35/DSDT.memhp    | Bin 9037 -> 9163 bytes
> > >  tests/data/acpi/q35/DSDT.mmio64   | Bin 8808 -> 8934 bytes
> > >  tests/data/acpi/q35/DSDT.numamem  | Bin 7684 -> 7810 bytes
> > >  tests/data/acpi/q35/DSDT.tis      | Bin 8283 -> 8409 bytes
> > >  27 files changed, 239 insertions(+), 30 deletions(-)
> > >   
> 
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2020-09-23  9:54 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 02/10] x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is in use Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 03/10] x86: cpuhp: refuse cpu hot-unplug request earlier if not supported Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 04/10] acpi: add aml_land() and aml_break() primitives Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 05/10] tests: acpi: mark to be changed tables in bios-tables-test-allowed-diff Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
2020-09-07 15:17   ` Laszlo Ersek
2020-09-08  7:39     ` Igor Mammedov
2020-09-08  9:35       ` Laszlo Ersek
2020-09-08 11:00         ` Laszlo Ersek
2020-09-08 14:24   ` [PATCH v5 9/10] fixup! " Igor Mammedov
2020-09-08 18:09     ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 10/10] tests: acpi: update acpi blobs with new AML Igor Mammedov
2020-09-08 14:29 ` [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
2020-09-08 18:08   ` Laszlo Ersek
2020-09-21 11:46 ` Igor Mammedov
2020-09-21 12:34   ` Michael S. Tsirkin
2020-09-23  9:48     ` Igor Mammedov
