All of lore.kernel.org
 help / color / mirror / Atom feed
* [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2
@ 2018-01-16 15:51 Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 1/4] tpm: implement virtual memory device for TPM PPI Stefan Berger
                   ` (3 more replies)
  0 siblings, 4 replies; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, stefanb, kevin, marcandre.lureau, lersek

The following patches implement the TPM Physical Presence Interface that
allows a user to set a command via ACPI (sysfs entry in Linux) that, upon
the next reboot, the firmware (BIOS, UEFI) looks for an acts upon by sending
sequences of commands to the TPM 1.2 or 2.

My first goal is to get the virtual memory device accepted since it extends
the TPM TIS (and TPM CRB) interface's internal data structure with another
256 bytes that would have to be written out when suspending the VM. So this
patch would have to come before we support VM with TPM suspend and resume.
The device supports 256 bytes at address 0xffff0000. This is the address that
EDK2 uses and that I have patches for for SeaBIOS support.

Regards,
   Stefan


Stefan Berger (4):
  tpm: implement virtual memory device for TPM PPI
  acpi: build QEMU table for PPI virtual memory device
  acpi: implement aml_lless_equal
  acpi: build TPM Physical Presence interface

 hw/acpi/aml-build.c         |  11 ++
 hw/i386/acpi-build.c        | 292 ++++++++++++++++++++++++++++++++++++++++++++
 hw/tpm/Makefile.objs        |   2 +-
 hw/tpm/tpm_ppi.c            |  67 ++++++++++
 hw/tpm/tpm_ppi.h            |  26 ++++
 hw/tpm/tpm_tis.c            |   5 +
 include/hw/acpi/acpi-defs.h |  10 ++
 include/hw/acpi/aml-build.h |   1 +
 include/hw/acpi/tpm.h       |  37 ++++++
 9 files changed, 450 insertions(+), 1 deletion(-)
 create mode 100644 hw/tpm/tpm_ppi.c
 create mode 100644 hw/tpm/tpm_ppi.h

-- 
2.5.5

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH v2 1/4] tpm: implement virtual memory device for TPM PPI
  2018-01-16 15:51 [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2 Stefan Berger
@ 2018-01-16 15:51 ` Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device Stefan Berger
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, stefanb, kevin, marcandre.lureau, lersek

Implement a virtual memory device for the TPM Physical Presence interface.
The memory is located at 0xfffef000 and used by ACPI to send messages to the
firmware (BIOS) and by the firmware to provide parameters for each one of
the supported codes.

This device should be used by all TPM interfaces on x86 and can be added
by calling tpm_ppi_init_io().

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>

---

v1->v2:
 - moved to byte access since an infrequently used device;
   this simplifies code
 - increase size of device to 0x400
 - move device to 0xfffef000 since SeaBIOS has some code at 0xffff0000:
   'On the emulators, the bios at 0xf0000 is also at 0xffff0000'
---
 hw/tpm/Makefile.objs  |  2 +-
 hw/tpm/tpm_ppi.c      | 67 +++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/tpm/tpm_ppi.h      | 26 ++++++++++++++++++++
 hw/tpm/tpm_tis.c      |  5 ++++
 include/hw/acpi/tpm.h |  6 +++++
 5 files changed, 105 insertions(+), 1 deletion(-)
 create mode 100644 hw/tpm/tpm_ppi.c
 create mode 100644 hw/tpm/tpm_ppi.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 7a93b24..3eb0558 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,4 +1,4 @@
-common-obj-y += tpm_util.o
+common-obj-y += tpm_util.o tpm_ppi.o
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
 common-obj-$(CONFIG_TPM_EMULATOR) += tpm_emulator.o
diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
new file mode 100644
index 0000000..98e8318
--- /dev/null
+++ b/hw/tpm/tpm_ppi.c
@@ -0,0 +1,67 @@
+/*
+ * tpm_ppi.c - TPM Physical Presence Interface
+ *
+ * Copyright (C) 2018 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger <stefanb@us.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "exec/memory.h"
+#include "exec/address-spaces.h"
+
+#include "tpm_ppi.h"
+
+#define DEBUG_PPI 0
+
+#define DPRINTF(fmt, ...) do { \
+    if (DEBUG_PPI) { \
+        printf(fmt, ## __VA_ARGS__); \
+    } \
+} while (0)
+
+static uint64_t tpm_ppi_mmio_read(void *opaque, hwaddr addr,
+                                  unsigned size)
+{
+    TPMPPI *s = opaque;
+
+    DPRINTF("tpm_ppi:  read(%04x) = %02x\n",
+            (unsigned int)addr, (unsigned int)s->ram[addr]);
+
+    return s->ram[addr];
+}
+
+static void tpm_ppi_mmio_write(void *opaque, hwaddr addr,
+                               uint64_t val, unsigned size)
+{
+    TPMPPI *s = opaque;
+
+    DPRINTF("tpm_ppi: write(%04x) = %02x\n",
+            (unsigned int)addr, (unsigned int)val);
+
+    s->ram[addr] = val;
+}
+
+static const MemoryRegionOps tpm_ppi_memory_ops = {
+    .read = tpm_ppi_mmio_read,
+    .write = tpm_ppi_mmio_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 1,
+    },
+};
+
+void tpm_ppi_init_io(TPMPPI *tpmppi, struct MemoryRegion *m, Object *obj)
+{
+    memory_region_init_io(&tpmppi->mmio, obj, &tpm_ppi_memory_ops,
+                          tpmppi, "tpm-ppi-mmio",
+                          TPM_PPI_ADDR_SIZE);
+
+    memory_region_add_subregion(m, TPM_PPI_ADDR_BASE, &tpmppi->mmio);
+}
diff --git a/hw/tpm/tpm_ppi.h b/hw/tpm/tpm_ppi.h
new file mode 100644
index 0000000..17030bd
--- /dev/null
+++ b/hw/tpm/tpm_ppi.h
@@ -0,0 +1,26 @@
+/*
+ * TPM Physical Presence Interface
+ *
+ * Copyright (C) 2018 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger    <stefanb@us.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef TPM_TPM_PPI_H
+#define TPM_TPM_PPI_H
+
+#include "hw/acpi/tpm.h"
+#include "exec/address-spaces.h"
+
+typedef struct TPMPPI {
+    MemoryRegion mmio;
+
+    uint8_t ram[TPM_PPI_ADDR_SIZE];
+} TPMPPI;
+
+void tpm_ppi_init_io(TPMPPI *tpmppi, struct MemoryRegion *m, Object *obj);
+
+#endif /* TPM_TPM_PPI_H */
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index 561384c..13e7dd3 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -31,6 +31,7 @@
 #include "sysemu/tpm_backend.h"
 #include "tpm_int.h"
 #include "tpm_util.h"
+#include "tpm_ppi.h"
 
 #define TPM_TIS_NUM_LOCALITIES      5     /* per spec */
 #define TPM_TIS_LOCALITY_SHIFT      12
@@ -80,6 +81,8 @@ typedef struct TPMState {
     TPMVersion be_tpm_version;
 
     size_t be_buffer_size;
+
+    TPMPPI ppi;
 } TPMState;
 
 #define TPM(obj) OBJECT_CHECK(TPMState, (obj), TYPE_TPM_TIS)
@@ -1041,6 +1044,8 @@ static void tpm_tis_realizefn(DeviceState *dev, Error **errp)
 
     memory_region_add_subregion(isa_address_space(ISA_DEVICE(dev)),
                                 TPM_TIS_ADDR_BASE, &s->mmio);
+
+    tpm_ppi_init_io(&s->ppi, isa_address_space(ISA_DEVICE(dev)), OBJECT(s));
 }
 
 static void tpm_tis_initfn(Object *obj)
diff --git a/include/hw/acpi/tpm.h b/include/hw/acpi/tpm.h
index 6d516c6..16227bc 100644
--- a/include/hw/acpi/tpm.h
+++ b/include/hw/acpi/tpm.h
@@ -31,4 +31,10 @@
 
 #define TPM2_START_METHOD_MMIO      6
 
+/*
+ * Physical Presence Interface
+ */
+#define TPM_PPI_ADDR_SIZE           0x400
+#define TPM_PPI_ADDR_BASE           0xfffef000
+
 #endif /* HW_ACPI_TPM_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device
  2018-01-16 15:51 [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2 Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 1/4] tpm: implement virtual memory device for TPM PPI Stefan Berger
@ 2018-01-16 15:51 ` Stefan Berger
  2018-01-16 16:35   ` Michael S. Tsirkin
  2018-01-16 20:42   ` Laszlo Ersek
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface Stefan Berger
  3 siblings, 2 replies; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, stefanb, kevin, marcandre.lureau, lersek

To avoid having to hard code the base address of the PPI virtual memory
device we introduce a QEMU ACPI table that holds the base address, if a
TPM 1.2 or 2 is used. This table gives us flexibility to move the base
address later on.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
---
 hw/i386/acpi-build.c        | 19 +++++++++++++++++++
 include/hw/acpi/acpi-defs.h |  8 ++++++++
 2 files changed, 27 insertions(+)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 18b939e..522d6d2 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2628,6 +2628,20 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
     return true;
 }
 
+static void build_qemu(GArray *table_data, BIOSLinker *linker,
+                       TPMVersion tpm_version)
+{
+    AcpiTableQemu *qemu = acpi_data_push(table_data, sizeof(*qemu));
+
+    if (tpm_version != TPM_VERSION_UNSPEC) {
+        qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
+        qemu->tpm_version = tpm_version;
+    }
+
+    build_header(linker, table_data,
+                 (void *)qemu, "QEMU", sizeof(*qemu), 1, "QEMU", "CONF");
+}
+
 static
 void acpi_build(AcpiBuildTables *tables, MachineState *machine)
 {
@@ -2734,6 +2748,11 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
                           &pcms->acpi_nvdimm_state, machine->ram_slots);
     }
 
+    if (misc.tpm_version != TPM_VERSION_UNSPEC) {
+        acpi_add_table(table_offsets, tables_blob);
+        build_qemu(tables_blob, tables->linker, misc.tpm_version);
+    }
+
     /* Add tables supplied by user (if any) */
     for (u = acpi_table_first(); u; u = acpi_table_next(u)) {
         unsigned len = acpi_table_len(u);
diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index 80c8099..98764c1 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -573,6 +573,14 @@ struct Acpi20TPM2 {
 } QEMU_PACKED;
 typedef struct Acpi20TPM2 Acpi20TPM2;
 
+/* QEMU - Custom QEMU table */
+struct AcpiTableQemu {
+    ACPI_TABLE_HEADER_DEF
+    uint32_t tpmppi_addr;
+    uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
+};
+typedef struct AcpiTableQemu AcpiTableQemu;
+
 /* DMAR - DMA Remapping table r2.2 */
 struct AcpiTableDmar {
     ACPI_TABLE_HEADER_DEF
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal
  2018-01-16 15:51 [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2 Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 1/4] tpm: implement virtual memory device for TPM PPI Stefan Berger
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device Stefan Berger
@ 2018-01-16 15:51 ` Stefan Berger
  2018-01-16 16:13   ` Michael S. Tsirkin
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface Stefan Berger
  3 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, stefanb, kevin, marcandre.lureau, lersek

LLessEqualOp = LNotOp LGreaterOp

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/acpi/aml-build.c         | 11 +++++++++++
 include/hw/acpi/aml-build.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 36a6cc4..c475f56 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -568,6 +568,17 @@ Aml *aml_lless(Aml *arg1, Aml *arg2)
     return var;
 }
 
+/* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefLLessEqual */
+Aml *aml_lless_equal(Aml *arg1, Aml *arg2)
+{
+    /* LLessEqualOp := LNotOp LGreaterOp */
+    Aml *var = aml_opcode(0x92 /* LNotOp */);
+    build_append_byte(var->buf, 0x94 /* LGreaterOp */);
+    aml_append(var, arg1);
+    aml_append(var, arg2);
+    return var;
+}
+
 /* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefAdd */
 Aml *aml_add(Aml *arg1, Aml *arg2, Aml *dst)
 {
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 88d0738..c4398cc 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -267,6 +267,7 @@ Aml *aml_lor(Aml *arg1, Aml *arg2);
 Aml *aml_shiftleft(Aml *arg1, Aml *count);
 Aml *aml_shiftright(Aml *arg1, Aml *count, Aml *dst);
 Aml *aml_lless(Aml *arg1, Aml *arg2);
+Aml *aml_lless_equal(Aml *arg1, Aml *arg2);
 Aml *aml_add(Aml *arg1, Aml *arg2, Aml *dst);
 Aml *aml_subtract(Aml *arg1, Aml *arg2, Aml *dst);
 Aml *aml_increment(Aml *arg);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-01-16 15:51 [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2 Stefan Berger
                   ` (2 preceding siblings ...)
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal Stefan Berger
@ 2018-01-16 15:51 ` Stefan Berger
  2018-02-09 20:19   ` Stefan Berger
  3 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, stefanb, kevin, marcandre.lureau, lersek

The TPM Physical Presence interface consists of an ACPI part, a shared
memory part, and code in the firmware. Users can send messages to the
firmware by writing a code into the shared memory through invoking the
ACPI code. When a reboot happens, the firmware looks for the code and
acts on it by sending sequences of commands to the TPM.

This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
assume that SMIs are necessary to use. It uses a similar datastructure for
the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
of it. I extended the shared memory data structure with an array of 256
bytes, one for each code that could be implemented. The array contains
flags describing the individual codes. This decouples the ACPI implementation
from the firmware implementation.

The underlying TCG specification is accessible from the following page.

https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/

This patch implements version 1.30.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>

---
 v1->v2:
   - get rid of FAIL variable; function 5 was using it and always
     returns 0; the value is related to the ACPI function call not
     a possible failure of the TPM function call.
   - extend shared memory data structure with per-opcode entries
     holding flags and use those flags to determine what to return
     to caller
   - implement interface version 1.3
---
 hw/i386/acpi-build.c        | 273 ++++++++++++++++++++++++++++++++++++++++++++
 include/hw/acpi/acpi-defs.h |   2 +
 include/hw/acpi/tpm.h       |  31 +++++
 3 files changed, 306 insertions(+)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 522d6d2..f8c2d01 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -42,6 +42,7 @@
 #include "hw/acpi/memory_hotplug.h"
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
+#include "hw/tpm/tpm_ppi.h"
 #include "hw/acpi/vmgenid.h"
 #include "sysemu/tpm_backend.h"
 #include "hw/timer/mc146818rtc_regs.h"
@@ -1860,6 +1861,276 @@ static Aml *build_q35_osc_method(void)
 }
 
 static void
+build_tpm_ppi(Aml *dev, TPMVersion tpm_version)
+{
+    Aml *method, *field, *ifctx, *ifctx2, *ifctx3, *pak;
+
+    aml_append(dev,
+               aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
+                                    aml_int(TPM_PPI_ADDR_BASE),
+                                    TPM_PPI_STRUCT_SIZE));
+
+    field = aml_field("TPPI", AML_ANY_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("PPIN",
+               sizeof(uint8_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("PPIP",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("PPRP",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("PPRQ",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("PPRM",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("LPPR",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(field, aml_reserved_field(
+               sizeof(uint32_t) * BITS_PER_BYTE /* FRET */ +
+               sizeof(uint8_t) * BITS_PER_BYTE /* MCIN */ +
+               sizeof(uint32_t) * BITS_PER_BYTE * 4 /* MCIP .. UCRQ */ +
+               sizeof(uint8_t) * BITS_PER_BYTE * 214));
+    aml_append(field, aml_named_field("FUNC",
+               sizeof(uint8_t) * BITS_PER_BYTE * 256));
+    aml_append(dev, field);
+
+    method = aml_method("_DSM", 4, AML_SERIALIZED);
+    {
+        uint8_t zerobyte[1] = { 0 };
+
+        ifctx = aml_if(
+                  aml_equal(aml_arg(0),
+                            aml_touuid("3DDDFAA6-361B-4EB4-A424-8D10089D1653"))
+                );
+        {
+            aml_append(ifctx,
+                       aml_store(aml_to_integer(aml_arg(2)), aml_local(0)));
+
+            /* standard DSM query function */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(0)));
+            {
+                uint8_t byte_list[2] = { 0xff, 0x01 };
+                aml_append(ifctx2, aml_return(aml_buffer(2, byte_list)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* interface version: 1.3 */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(1)));
+            {
+                aml_append(ifctx2, aml_return(aml_string("1.3")));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* submit TPM operation */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(2)));
+            {
+                /* get opcode */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_arg(3),
+                                                           aml_int(0))),
+                                     aml_local(0)));
+                /* get opcode flags */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
+                                                           aml_local(0))),
+                                     aml_local(1)));
+                ifctx3 = aml_if(
+                              aml_equal(
+                                  aml_and(aml_local(1),
+                                          aml_int(TPM_PPI_FUNC_IMPLEMENTED),
+                                          NULL),
+                                  aml_int(0)
+                              )
+                         );
+                {
+                    /* 1: not implemented */
+                    aml_append(ifctx3, aml_return(aml_int(1)));
+                }
+                aml_append(ifctx2, ifctx3);
+                aml_append(ifctx2, aml_store(aml_local(0), aml_name("PPRQ")));
+                aml_append(ifctx2, aml_store(aml_int(0), aml_name("PPRM")));
+                /* 0: success */
+                aml_append(ifctx2, aml_return(aml_int(0)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* get pending TPM operation */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(3)));
+            {
+                /* revision to integer */
+                aml_append(ifctx2,
+                           aml_store(
+                             aml_to_integer(aml_arg(1)),
+                             aml_local(1)));
+                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
+                {
+                    pak = aml_package(2);
+                    aml_append(pak, aml_int(0));
+                    aml_append(pak, aml_name("PPRQ"));
+                    aml_append(ifctx3, aml_return(pak));
+                }
+                aml_append(ifctx2, ifctx3);
+
+                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
+                {
+                    pak = aml_package(3);
+                    aml_append(pak, aml_int(0));
+                    aml_append(pak, aml_name("PPRQ"));
+                    aml_append(pak, aml_name("PPRM"));
+                    aml_append(ifctx3, aml_return(pak));
+                }
+                aml_append(ifctx2, ifctx3);
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* get platform-specific action to transition to pre-OS env. */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(4)));
+            {
+                /* get opcode */
+                aml_append(ifctx2,
+                           aml_store(aml_name("PPRQ"),
+                                     aml_local(0)));
+                /* get opcode flags */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
+                                                           aml_local(0))),
+                                     aml_local(1)));
+                /* return action flags */
+                aml_append(ifctx2,
+                           aml_return(
+                               aml_shiftright(
+                                   aml_and(aml_local(1),
+                                           aml_int(TPM_PPI_FUNC_ACTION_MASK),
+                                           NULL),
+                                   aml_int(1),
+                                   NULL)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* get TPM operation response */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(5)));
+            {
+                pak = aml_package(3);
+
+                aml_append(pak, aml_int(0));
+                aml_append(pak, aml_name("LPPR"));
+                aml_append(pak, aml_name("PPRP"));
+
+                aml_append(ifctx2, aml_return(pak));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* submit preferred user language */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(6)));
+            {
+                /* 3 = not implemented */
+                aml_append(ifctx2, aml_return(aml_int(3)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* submit TPM operation v2 */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(7)));
+            {
+                /* get opcode */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_arg(3),
+                                                           aml_int(0))),
+                                     aml_local(0)));
+                /* get opcode flags */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
+                                                           aml_local(0))),
+                                     aml_local(1)));
+                ifctx3 = aml_if(
+                              aml_equal(
+                                  aml_and(aml_local(1),
+                                          aml_int(TPM_PPI_FUNC_MASK),
+                                          NULL),
+                                  aml_int(TPM_PPI_FUNC_NOT_IMPLEMENTED)
+                              )
+                         );
+                {
+                    /* 1: not implemented */
+                    aml_append(ifctx3, aml_return(aml_int(1)));
+                }
+                aml_append(ifctx2, ifctx3);
+
+                ifctx3 = aml_if(
+                              aml_equal(
+                                  aml_and(
+                                      aml_local(1),
+                                      aml_int(TPM_PPI_FUNC_MASK),
+                                      NULL),
+                                  aml_int(TPM_PPI_FUNC_BLOCKED)
+                              )
+                         );
+                {
+                    /* 3: blocked by firmware */
+                    aml_append(ifctx3, aml_return(aml_int(3)));
+                }
+                aml_append(ifctx2, ifctx3);
+
+                /* revision to integer */
+                aml_append(ifctx2,
+                           aml_store(
+                             aml_to_integer(aml_arg(1)),
+                             aml_local(1)));
+
+                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
+                {
+                    /* revision 1 */
+                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
+                    aml_append(ifctx3, aml_store(aml_int(0), aml_name("PPRM")));
+                }
+                aml_append(ifctx2, ifctx3);
+
+                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
+                {
+                    /* revision 2 */
+                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
+                    aml_append(ifctx3, aml_store(
+                                         aml_derefof(
+                                           aml_index(aml_arg(3),
+                                                     aml_int(1))),
+                                         aml_name("PPRM")));
+                }
+                aml_append(ifctx2, ifctx3);
+                /* 0: success */
+                aml_append(ifctx2, aml_return(aml_int(0)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            /* get user confirmation status for operation */
+            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(8)));
+            {
+                /* get opcode */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_arg(3),
+                                                           aml_int(0))),
+                                     aml_local(0)));
+                /* get opcode flags */
+                aml_append(ifctx2,
+                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
+                                                           aml_local(0))),
+                                     aml_local(1)));
+                /* return confirmation status code */
+                aml_append(ifctx2,
+                           aml_return(
+                               aml_shiftright(
+                                   aml_and(aml_local(1),
+                                           aml_int(TPM_PPI_FUNC_MASK),
+                                           NULL),
+                                   aml_int(3),
+                                   NULL)));
+            }
+            aml_append(ifctx, ifctx2);
+
+            aml_append(ifctx, aml_return(aml_buffer(1, zerobyte)));
+        }
+        aml_append(method, ifctx);
+    }
+    aml_append(dev, method);
+}
+
+static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
            Range *pci_hole, Range *pci_hole64, MachineState *machine)
@@ -2218,6 +2489,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
                  */
                 /* aml_append(crs, aml_irq_no_flags(TPM_TIS_IRQ)); */
                 aml_append(dev, aml_name_decl("_CRS", crs));
+                build_tpm_ppi(dev, misc->tpm_version);
                 aml_append(scope, dev);
             }
 
@@ -2636,6 +2908,7 @@ static void build_qemu(GArray *table_data, BIOSLinker *linker,
     if (tpm_version != TPM_VERSION_UNSPEC) {
         qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
         qemu->tpm_version = tpm_version;
+        qemu->tpmppi_version = TPM_PPI_VERSION_1_30;
     }
 
     build_header(linker, table_data,
diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index 98764c1..a182194 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -578,6 +578,8 @@ struct AcpiTableQemu {
     ACPI_TABLE_HEADER_DEF
     uint32_t tpmppi_addr;
     uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
+    uint8_t tpmppi_version;
+#define TPM_PPI_VERSION_1_30  1
 };
 typedef struct AcpiTableQemu AcpiTableQemu;
 
diff --git a/include/hw/acpi/tpm.h b/include/hw/acpi/tpm.h
index 16227bc..130a039 100644
--- a/include/hw/acpi/tpm.h
+++ b/include/hw/acpi/tpm.h
@@ -37,4 +37,35 @@
 #define TPM_PPI_ADDR_SIZE           0x400
 #define TPM_PPI_ADDR_BASE           0xfffef000
 
+struct tpm_ppi {
+    uint8_t ppin;            /*  0: set by BIOS */
+    uint32_t ppip;           /*  1: set by ACPI; not used */
+    uint32_t pprp;           /*  5: response from TPM; set by BIOS */
+    uint32_t pprq;           /*  9: opcode; set by ACPI */
+    uint32_t pprm;           /* 13: parameter for opcode; set by ACPI */
+    uint32_t lppr;           /* 17: last opcode; set by BIOS */
+    uint32_t fret;           /* 21: set by ACPI; not used */
+    uint8_t res1;            /* 25: reserved */
+    uint32_t res2[4];        /* 26: reserved */
+    uint8_t  res3[214];      /* 42: reserved */
+    uint8_t  func[256];      /* 256: per TPM function implementation flags;
+                                     set by BIOS */
+/* indication whether function is implemented; bit 0 */
+#define TPM_PPI_FUNC_IMPLEMENTED       (1 << 0)
+/* actions OS should take to transition to the pre-OS env.; bits 1, 2 */
+#define TPM_PPI_FUNC_ACTION_SHUTDOWN   (1 << 1)
+#define TPM_PPI_FUNC_ACTION_REBOOT     (2 << 1)
+#define TPM_PPI_FUNC_ACTION_VENDOR     (3 << 1)
+#define TPM_PPI_FUNC_ACTION_MASK       (3 << 1)
+/* whether function is blocked by BIOS settings; bits 3,4,5 */
+#define TPM_PPI_FUNC_NOT_IMPLEMENTED     (0 << 3)
+#define TPM_PPI_FUNC_BIOS_ONLY           (1 << 3)
+#define TPM_PPI_FUNC_BLOCKED             (2 << 3)
+#define TPM_PPI_FUNC_ALLOWED_USR_REQ     (3 << 3)
+#define TPM_PPI_FUNC_ALLOWED_USR_NOT_REQ (4 << 3)
+#define TPM_PPI_FUNC_MASK                (7 << 3)
+} QEMU_PACKED;
+
+#define TPM_PPI_STRUCT_SIZE  sizeof(struct tpm_ppi)
+
 #endif /* HW_ACPI_TPM_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal Stefan Berger
@ 2018-01-16 16:13   ` Michael S. Tsirkin
  0 siblings, 0 replies; 35+ messages in thread
From: Michael S. Tsirkin @ 2018-01-16 16:13 UTC (permalink / raw)
  To: Stefan Berger; +Cc: qemu-devel, kevin, marcandre.lureau, lersek

On Tue, Jan 16, 2018 at 10:51:39AM -0500, Stefan Berger wrote:
> LLessEqualOp = LNotOp LGreaterOp
> 
> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  hw/acpi/aml-build.c         | 11 +++++++++++
>  include/hw/acpi/aml-build.h |  1 +
>  2 files changed, 12 insertions(+)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 36a6cc4..c475f56 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -568,6 +568,17 @@ Aml *aml_lless(Aml *arg1, Aml *arg2)
>      return var;
>  }
>  
> +/* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefLLessEqual */
> +Aml *aml_lless_equal(Aml *arg1, Aml *arg2)
> +{
> +    /* LLessEqualOp := LNotOp LGreaterOp */
> +    Aml *var = aml_opcode(0x92 /* LNotOp */);
> +    build_append_byte(var->buf, 0x94 /* LGreaterOp */);
> +    aml_append(var, arg1);
> +    aml_append(var, arg2);
> +    return var;
> +}
> +
>  /* ACPI 1.0b: 16.2.5.4 Type 2 Opcodes Encoding: DefAdd */
>  Aml *aml_add(Aml *arg1, Aml *arg2, Aml *dst)
>  {
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index 88d0738..c4398cc 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -267,6 +267,7 @@ Aml *aml_lor(Aml *arg1, Aml *arg2);
>  Aml *aml_shiftleft(Aml *arg1, Aml *count);
>  Aml *aml_shiftright(Aml *arg1, Aml *count, Aml *dst);
>  Aml *aml_lless(Aml *arg1, Aml *arg2);
> +Aml *aml_lless_equal(Aml *arg1, Aml *arg2);
>  Aml *aml_add(Aml *arg1, Aml *arg2, Aml *dst);
>  Aml *aml_subtract(Aml *arg1, Aml *arg2, Aml *dst);
>  Aml *aml_increment(Aml *arg);
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device Stefan Berger
@ 2018-01-16 16:35   ` Michael S. Tsirkin
  2018-01-16 20:42   ` Laszlo Ersek
  1 sibling, 0 replies; 35+ messages in thread
From: Michael S. Tsirkin @ 2018-01-16 16:35 UTC (permalink / raw)
  To: Stefan Berger; +Cc: qemu-devel, kevin, marcandre.lureau, lersek

On Tue, Jan 16, 2018 at 10:51:38AM -0500, Stefan Berger wrote:
> To avoid having to hard code the base address of the PPI virtual memory
> device we introduce a QEMU ACPI table that holds the base address, if a
> TPM 1.2 or 2 is used. This table gives us flexibility to move the base
> address later on.
> 
> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> ---
>  hw/i386/acpi-build.c        | 19 +++++++++++++++++++
>  include/hw/acpi/acpi-defs.h |  8 ++++++++
>  2 files changed, 27 insertions(+)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 18b939e..522d6d2 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -2628,6 +2628,20 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>      return true;
>  }
>  
> +static void build_qemu(GArray *table_data, BIOSLinker *linker,
> +                       TPMVersion tpm_version)
> +{
> +    AcpiTableQemu *qemu = acpi_data_push(table_data, sizeof(*qemu));
> +
> +    if (tpm_version != TPM_VERSION_UNSPEC) {
> +        qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
> +        qemu->tpm_version = tpm_version;
> +    }
> +
> +    build_header(linker, table_data,
> +                 (void *)qemu, "QEMU", sizeof(*qemu), 1, "QEMU", "CONF");
> +}
> +
>  static
>  void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>  {
> @@ -2734,6 +2748,11 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>                            &pcms->acpi_nvdimm_state, machine->ram_slots);
>      }
>  
> +    if (misc.tpm_version != TPM_VERSION_UNSPEC) {
> +        acpi_add_table(table_offsets, tables_blob);
> +        build_qemu(tables_blob, tables->linker, misc.tpm_version);
> +    }
> +
>      /* Add tables supplied by user (if any) */
>      for (u = acpi_table_first(); u; u = acpi_table_next(u)) {
>          unsigned len = acpi_table_len(u);
> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
> index 80c8099..98764c1 100644
> --- a/include/hw/acpi/acpi-defs.h
> +++ b/include/hw/acpi/acpi-defs.h
> @@ -573,6 +573,14 @@ struct Acpi20TPM2 {
>  } QEMU_PACKED;
>  typedef struct Acpi20TPM2 Acpi20TPM2;
>  
> +/* QEMU - Custom QEMU table */
> +struct AcpiTableQemu {
> +    ACPI_TABLE_HEADER_DEF

Since we already have an 8 byte QEMU table due to patching of MCFG
(which we should drop eventually I think) I think it's a good idea to
reserve the first 8 bytes here after the header.


> +    uint32_t tpmppi_addr;
> +    uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */

There are 3 bytes of padding here. Pls make them explicit
as a reserved field.

> +};
> +typedef struct AcpiTableQemu AcpiTableQemu;
> +

>  /* DMAR - DMA Remapping table r2.2 */
>  struct AcpiTableDmar {
>      ACPI_TABLE_HEADER_DEF
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device Stefan Berger
  2018-01-16 16:35   ` Michael S. Tsirkin
@ 2018-01-16 20:42   ` Laszlo Ersek
  2018-01-16 21:20     ` Stefan Berger
  1 sibling, 1 reply; 35+ messages in thread
From: Laszlo Ersek @ 2018-01-16 20:42 UTC (permalink / raw)
  To: Stefan Berger, qemu-devel; +Cc: mst, kevin, marcandre.lureau

On 01/16/18 16:51, Stefan Berger wrote:
> To avoid having to hard code the base address of the PPI virtual memory
> device we introduce a QEMU ACPI table that holds the base address, if a
> TPM 1.2 or 2 is used. This table gives us flexibility to move the base
> address later on.
> 
> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> ---
>  hw/i386/acpi-build.c        | 19 +++++++++++++++++++
>  include/hw/acpi/acpi-defs.h |  8 ++++++++
>  2 files changed, 27 insertions(+)

I don't understand how the guest OS is supposed to consume the QEMU
table. The AML code in patch #4 does not seem to consume the QEMU table,
for locating the operation region. I'm not saying that it *should*
consume the QEMU table, only that I'm currently not seeing a use for the
QEMU table.

How is the QEMU table useful? What breaks if we drop it?

(Sorry if we've been through this; then I must have lost context.)

Thanks,
Laszlo

> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 18b939e..522d6d2 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -2628,6 +2628,20 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>      return true;
>  }
>  
> +static void build_qemu(GArray *table_data, BIOSLinker *linker,
> +                       TPMVersion tpm_version)
> +{
> +    AcpiTableQemu *qemu = acpi_data_push(table_data, sizeof(*qemu));
> +
> +    if (tpm_version != TPM_VERSION_UNSPEC) {
> +        qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
> +        qemu->tpm_version = tpm_version;
> +    }
> +
> +    build_header(linker, table_data,
> +                 (void *)qemu, "QEMU", sizeof(*qemu), 1, "QEMU", "CONF");
> +}
> +
>  static
>  void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>  {
> @@ -2734,6 +2748,11 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>                            &pcms->acpi_nvdimm_state, machine->ram_slots);
>      }
>  
> +    if (misc.tpm_version != TPM_VERSION_UNSPEC) {
> +        acpi_add_table(table_offsets, tables_blob);
> +        build_qemu(tables_blob, tables->linker, misc.tpm_version);
> +    }
> +
>      /* Add tables supplied by user (if any) */
>      for (u = acpi_table_first(); u; u = acpi_table_next(u)) {
>          unsigned len = acpi_table_len(u);
> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
> index 80c8099..98764c1 100644
> --- a/include/hw/acpi/acpi-defs.h
> +++ b/include/hw/acpi/acpi-defs.h
> @@ -573,6 +573,14 @@ struct Acpi20TPM2 {
>  } QEMU_PACKED;
>  typedef struct Acpi20TPM2 Acpi20TPM2;
>  
> +/* QEMU - Custom QEMU table */
> +struct AcpiTableQemu {
> +    ACPI_TABLE_HEADER_DEF
> +    uint32_t tpmppi_addr;
> +    uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
> +};
> +typedef struct AcpiTableQemu AcpiTableQemu;
> +
>  /* DMAR - DMA Remapping table r2.2 */
>  struct AcpiTableDmar {
>      ACPI_TABLE_HEADER_DEF
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device
  2018-01-16 20:42   ` Laszlo Ersek
@ 2018-01-16 21:20     ` Stefan Berger
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Berger @ 2018-01-16 21:20 UTC (permalink / raw)
  To: Laszlo Ersek, qemu-devel; +Cc: mst, kevin, marcandre.lureau

On 01/16/2018 03:42 PM, Laszlo Ersek wrote:
> On 01/16/18 16:51, Stefan Berger wrote:
>> To avoid having to hard code the base address of the PPI virtual memory
>> device we introduce a QEMU ACPI table that holds the base address, if a
>> TPM 1.2 or 2 is used. This table gives us flexibility to move the base
>> address later on.
>>
>> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
>> ---
>>   hw/i386/acpi-build.c        | 19 +++++++++++++++++++
>>   include/hw/acpi/acpi-defs.h |  8 ++++++++
>>   2 files changed, 27 insertions(+)
> I don't understand how the guest OS is supposed to consume the QEMU
> table. The AML code in patch #4 does not seem to consume the QEMU table,
> for locating the operation region. I'm not saying that it *should*
> consume the QEMU table, only that I'm currently not seeing a use for the
> QEMU table.
>
> How is the QEMU table useful? What breaks if we drop it?

The QEMU table would only be consumed by SeaBIOS. Following the 
discussion on the SeaBIOS ML, we'll move this into a new fw_cfg file.

     Stefan

>
> (Sorry if we've been through this; then I must have lost context.)
>
> Thanks,
> Laszlo
>
>> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
>> index 18b939e..522d6d2 100644
>> --- a/hw/i386/acpi-build.c
>> +++ b/hw/i386/acpi-build.c
>> @@ -2628,6 +2628,20 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>>       return true;
>>   }
>>   
>> +static void build_qemu(GArray *table_data, BIOSLinker *linker,
>> +                       TPMVersion tpm_version)
>> +{
>> +    AcpiTableQemu *qemu = acpi_data_push(table_data, sizeof(*qemu));
>> +
>> +    if (tpm_version != TPM_VERSION_UNSPEC) {
>> +        qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
>> +        qemu->tpm_version = tpm_version;
>> +    }
>> +
>> +    build_header(linker, table_data,
>> +                 (void *)qemu, "QEMU", sizeof(*qemu), 1, "QEMU", "CONF");
>> +}
>> +
>>   static
>>   void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>>   {
>> @@ -2734,6 +2748,11 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>>                             &pcms->acpi_nvdimm_state, machine->ram_slots);
>>       }
>>   
>> +    if (misc.tpm_version != TPM_VERSION_UNSPEC) {
>> +        acpi_add_table(table_offsets, tables_blob);
>> +        build_qemu(tables_blob, tables->linker, misc.tpm_version);
>> +    }
>> +
>>       /* Add tables supplied by user (if any) */
>>       for (u = acpi_table_first(); u; u = acpi_table_next(u)) {
>>           unsigned len = acpi_table_len(u);
>> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
>> index 80c8099..98764c1 100644
>> --- a/include/hw/acpi/acpi-defs.h
>> +++ b/include/hw/acpi/acpi-defs.h
>> @@ -573,6 +573,14 @@ struct Acpi20TPM2 {
>>   } QEMU_PACKED;
>>   typedef struct Acpi20TPM2 Acpi20TPM2;
>>   
>> +/* QEMU - Custom QEMU table */
>> +struct AcpiTableQemu {
>> +    ACPI_TABLE_HEADER_DEF
>> +    uint32_t tpmppi_addr;
>> +    uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
>> +};
>> +typedef struct AcpiTableQemu AcpiTableQemu;
>> +
>>   /* DMAR - DMA Remapping table r2.2 */
>>   struct AcpiTableDmar {
>>       ACPI_TABLE_HEADER_DEF
>>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface Stefan Berger
@ 2018-02-09 20:19   ` Stefan Berger
  2018-02-12 14:27     ` Igor Mammedov
                       ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Stefan Berger @ 2018-02-09 20:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: marcandre.lureau, kevin, lersek, mst

On 01/16/2018 10:51 AM, Stefan Berger wrote:
> The TPM Physical Presence interface consists of an ACPI part, a shared
> memory part, and code in the firmware. Users can send messages to the
> firmware by writing a code into the shared memory through invoking the
> ACPI code. When a reboot happens, the firmware looks for the code and
> acts on it by sending sequences of commands to the TPM.
>
> This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
> assume that SMIs are necessary to use. It uses a similar datastructure for
> the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
> of it. I extended the shared memory data structure with an array of 256
> bytes, one for each code that could be implemented. The array contains
> flags describing the individual codes. This decouples the ACPI implementation
> from the firmware implementation.
>
> The underlying TCG specification is accessible from the following page.
>
> https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
>
> This patch implements version 1.30.

I have played around with this patch and some modifications to EDK2. 
Though for EDK2 the question is whether to try to circumvent their 
current implementation that uses SMM or use SMM. With this patch so far 
I circumvent it, which is maybe not a good idea.

The facts for EDK2's PPI:

- from within the OS a PPI code is submitted to ACPI and ACPI enters SMM 
via an SMI and the PPI code is written into a UEFI variable. For this 
ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS 
(via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at 
that address. Once the machine is rebooted, UEFI reads the variable and 
finds the PPI code and reacts to it.


The facts for SeaBIOS:
- we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to 
this address. So we would have to use the PPI memory device's memory 
region, which is currently at 0xFED4 5000. SeaBIOS doesn't have 
persistent memory like NVRAM where it stores varaibles. So to pass the 
PPI code here the OS would invoke ACPI, which in turn would write the 
PPI code into the PPI memory directly. Upon reboot SeaBIOS would find 
the PPI code in the PPI memory device and react to it.

The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes 
are used by the OperationRegion() in this patch series. The rest was 
thought of for future extensions.

To allow both firmwares to use PPI, we would need to be able to have the 
OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and 
0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve 
this would be to have the firmwares choose their operation region base 
address by writing into the PPI memory device at offset 0x200 (for 
example). A '1' there would mean that we want the OperationRegion() at 
0xFFFF 0000, and a '2' would mean at the base address of the PPI device 
(0xFED4 5000). This could be achieved by declaring a 2nd 
OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we 
declare that 1st OperationRegion()'s address based on findings from 
there. Further, a flags variable at offset 0x201 could indicate whether 
an SMI is needed to enter SMM or not. Obviously, the ACPI code would 
become more complex to be able to support both firmwares at the same time.

This is a lot of details but I rather post this description before 
posting more patches. To make these changes and get it to work with at 
least SeaBIOS is probably fairly easy. But is this acceptable?

    Stefan

>
> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
>
> ---
>   v1->v2:
>     - get rid of FAIL variable; function 5 was using it and always
>       returns 0; the value is related to the ACPI function call not
>       a possible failure of the TPM function call.
>     - extend shared memory data structure with per-opcode entries
>       holding flags and use those flags to determine what to return
>       to caller
>     - implement interface version 1.3
> ---
>   hw/i386/acpi-build.c        | 273 ++++++++++++++++++++++++++++++++++++++++++++
>   include/hw/acpi/acpi-defs.h |   2 +
>   include/hw/acpi/tpm.h       |  31 +++++
>   3 files changed, 306 insertions(+)
>
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 522d6d2..f8c2d01 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -42,6 +42,7 @@
>   #include "hw/acpi/memory_hotplug.h"
>   #include "sysemu/tpm.h"
>   #include "hw/acpi/tpm.h"
> +#include "hw/tpm/tpm_ppi.h"
>   #include "hw/acpi/vmgenid.h"
>   #include "sysemu/tpm_backend.h"
>   #include "hw/timer/mc146818rtc_regs.h"
> @@ -1860,6 +1861,276 @@ static Aml *build_q35_osc_method(void)
>   }
>
>   static void
> +build_tpm_ppi(Aml *dev, TPMVersion tpm_version)
> +{
> +    Aml *method, *field, *ifctx, *ifctx2, *ifctx3, *pak;
> +
> +    aml_append(dev,
> +               aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
> +                                    aml_int(TPM_PPI_ADDR_BASE),
> +                                    TPM_PPI_STRUCT_SIZE));
> +
> +    field = aml_field("TPPI", AML_ANY_ACC, AML_NOLOCK, AML_PRESERVE);
> +    aml_append(field, aml_named_field("PPIN",
> +               sizeof(uint8_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("PPIP",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("PPRP",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("PPRQ",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("PPRM",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_named_field("LPPR",
> +               sizeof(uint32_t) * BITS_PER_BYTE));
> +    aml_append(field, aml_reserved_field(
> +               sizeof(uint32_t) * BITS_PER_BYTE /* FRET */ +
> +               sizeof(uint8_t) * BITS_PER_BYTE /* MCIN */ +
> +               sizeof(uint32_t) * BITS_PER_BYTE * 4 /* MCIP .. UCRQ */ +
> +               sizeof(uint8_t) * BITS_PER_BYTE * 214));
> +    aml_append(field, aml_named_field("FUNC",
> +               sizeof(uint8_t) * BITS_PER_BYTE * 256));
> +    aml_append(dev, field);
> +
> +    method = aml_method("_DSM", 4, AML_SERIALIZED);
> +    {
> +        uint8_t zerobyte[1] = { 0 };
> +
> +        ifctx = aml_if(
> +                  aml_equal(aml_arg(0),
> +                            aml_touuid("3DDDFAA6-361B-4EB4-A424-8D10089D1653"))
> +                );
> +        {
> +            aml_append(ifctx,
> +                       aml_store(aml_to_integer(aml_arg(2)), aml_local(0)));
> +
> +            /* standard DSM query function */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(0)));
> +            {
> +                uint8_t byte_list[2] = { 0xff, 0x01 };
> +                aml_append(ifctx2, aml_return(aml_buffer(2, byte_list)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* interface version: 1.3 */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(1)));
> +            {
> +                aml_append(ifctx2, aml_return(aml_string("1.3")));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* submit TPM operation */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(2)));
> +            {
> +                /* get opcode */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> +                                                           aml_int(0))),
> +                                     aml_local(0)));
> +                /* get opcode flags */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> +                                                           aml_local(0))),
> +                                     aml_local(1)));
> +                ifctx3 = aml_if(
> +                              aml_equal(
> +                                  aml_and(aml_local(1),
> +                                          aml_int(TPM_PPI_FUNC_IMPLEMENTED),
> +                                          NULL),
> +                                  aml_int(0)
> +                              )
> +                         );
> +                {
> +                    /* 1: not implemented */
> +                    aml_append(ifctx3, aml_return(aml_int(1)));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +                aml_append(ifctx2, aml_store(aml_local(0), aml_name("PPRQ")));
> +                aml_append(ifctx2, aml_store(aml_int(0), aml_name("PPRM")));
> +                /* 0: success */
> +                aml_append(ifctx2, aml_return(aml_int(0)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* get pending TPM operation */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(3)));
> +            {
> +                /* revision to integer */
> +                aml_append(ifctx2,
> +                           aml_store(
> +                             aml_to_integer(aml_arg(1)),
> +                             aml_local(1)));
> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
> +                {
> +                    pak = aml_package(2);
> +                    aml_append(pak, aml_int(0));
> +                    aml_append(pak, aml_name("PPRQ"));
> +                    aml_append(ifctx3, aml_return(pak));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +
> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
> +                {
> +                    pak = aml_package(3);
> +                    aml_append(pak, aml_int(0));
> +                    aml_append(pak, aml_name("PPRQ"));
> +                    aml_append(pak, aml_name("PPRM"));
> +                    aml_append(ifctx3, aml_return(pak));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* get platform-specific action to transition to pre-OS env. */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(4)));
> +            {
> +                /* get opcode */
> +                aml_append(ifctx2,
> +                           aml_store(aml_name("PPRQ"),
> +                                     aml_local(0)));
> +                /* get opcode flags */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> +                                                           aml_local(0))),
> +                                     aml_local(1)));
> +                /* return action flags */
> +                aml_append(ifctx2,
> +                           aml_return(
> +                               aml_shiftright(
> +                                   aml_and(aml_local(1),
> +                                           aml_int(TPM_PPI_FUNC_ACTION_MASK),
> +                                           NULL),
> +                                   aml_int(1),
> +                                   NULL)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* get TPM operation response */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(5)));
> +            {
> +                pak = aml_package(3);
> +
> +                aml_append(pak, aml_int(0));
> +                aml_append(pak, aml_name("LPPR"));
> +                aml_append(pak, aml_name("PPRP"));
> +
> +                aml_append(ifctx2, aml_return(pak));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* submit preferred user language */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(6)));
> +            {
> +                /* 3 = not implemented */
> +                aml_append(ifctx2, aml_return(aml_int(3)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* submit TPM operation v2 */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(7)));
> +            {
> +                /* get opcode */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> +                                                           aml_int(0))),
> +                                     aml_local(0)));
> +                /* get opcode flags */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> +                                                           aml_local(0))),
> +                                     aml_local(1)));
> +                ifctx3 = aml_if(
> +                              aml_equal(
> +                                  aml_and(aml_local(1),
> +                                          aml_int(TPM_PPI_FUNC_MASK),
> +                                          NULL),
> +                                  aml_int(TPM_PPI_FUNC_NOT_IMPLEMENTED)
> +                              )
> +                         );
> +                {
> +                    /* 1: not implemented */
> +                    aml_append(ifctx3, aml_return(aml_int(1)));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +
> +                ifctx3 = aml_if(
> +                              aml_equal(
> +                                  aml_and(
> +                                      aml_local(1),
> +                                      aml_int(TPM_PPI_FUNC_MASK),
> +                                      NULL),
> +                                  aml_int(TPM_PPI_FUNC_BLOCKED)
> +                              )
> +                         );
> +                {
> +                    /* 3: blocked by firmware */
> +                    aml_append(ifctx3, aml_return(aml_int(3)));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +
> +                /* revision to integer */
> +                aml_append(ifctx2,
> +                           aml_store(
> +                             aml_to_integer(aml_arg(1)),
> +                             aml_local(1)));
> +
> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
> +                {
> +                    /* revision 1 */
> +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
> +                    aml_append(ifctx3, aml_store(aml_int(0), aml_name("PPRM")));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +
> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
> +                {
> +                    /* revision 2 */
> +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
> +                    aml_append(ifctx3, aml_store(
> +                                         aml_derefof(
> +                                           aml_index(aml_arg(3),
> +                                                     aml_int(1))),
> +                                         aml_name("PPRM")));
> +                }
> +                aml_append(ifctx2, ifctx3);
> +                /* 0: success */
> +                aml_append(ifctx2, aml_return(aml_int(0)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            /* get user confirmation status for operation */
> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(8)));
> +            {
> +                /* get opcode */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> +                                                           aml_int(0))),
> +                                     aml_local(0)));
> +                /* get opcode flags */
> +                aml_append(ifctx2,
> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> +                                                           aml_local(0))),
> +                                     aml_local(1)));
> +                /* return confirmation status code */
> +                aml_append(ifctx2,
> +                           aml_return(
> +                               aml_shiftright(
> +                                   aml_and(aml_local(1),
> +                                           aml_int(TPM_PPI_FUNC_MASK),
> +                                           NULL),
> +                                   aml_int(3),
> +                                   NULL)));
> +            }
> +            aml_append(ifctx, ifctx2);
> +
> +            aml_append(ifctx, aml_return(aml_buffer(1, zerobyte)));
> +        }
> +        aml_append(method, ifctx);
> +    }
> +    aml_append(dev, method);
> +}
> +
> +static void
>   build_dsdt(GArray *table_data, BIOSLinker *linker,
>              AcpiPmInfo *pm, AcpiMiscInfo *misc,
>              Range *pci_hole, Range *pci_hole64, MachineState *machine)
> @@ -2218,6 +2489,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>                    */
>                   /* aml_append(crs, aml_irq_no_flags(TPM_TIS_IRQ)); */
>                   aml_append(dev, aml_name_decl("_CRS", crs));
> +                build_tpm_ppi(dev, misc->tpm_version);
>                   aml_append(scope, dev);
>               }
>
> @@ -2636,6 +2908,7 @@ static void build_qemu(GArray *table_data, BIOSLinker *linker,
>       if (tpm_version != TPM_VERSION_UNSPEC) {
>           qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
>           qemu->tpm_version = tpm_version;
> +        qemu->tpmppi_version = TPM_PPI_VERSION_1_30;
>       }
>
>       build_header(linker, table_data,
> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
> index 98764c1..a182194 100644
> --- a/include/hw/acpi/acpi-defs.h
> +++ b/include/hw/acpi/acpi-defs.h
> @@ -578,6 +578,8 @@ struct AcpiTableQemu {
>       ACPI_TABLE_HEADER_DEF
>       uint32_t tpmppi_addr;
>       uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
> +    uint8_t tpmppi_version;
> +#define TPM_PPI_VERSION_1_30  1
>   };
>   typedef struct AcpiTableQemu AcpiTableQemu;
>
> diff --git a/include/hw/acpi/tpm.h b/include/hw/acpi/tpm.h
> index 16227bc..130a039 100644
> --- a/include/hw/acpi/tpm.h
> +++ b/include/hw/acpi/tpm.h
> @@ -37,4 +37,35 @@
>   #define TPM_PPI_ADDR_SIZE           0x400
>   #define TPM_PPI_ADDR_BASE           0xfffef000
>
> +struct tpm_ppi {
> +    uint8_t ppin;            /*  0: set by BIOS */
> +    uint32_t ppip;           /*  1: set by ACPI; not used */
> +    uint32_t pprp;           /*  5: response from TPM; set by BIOS */
> +    uint32_t pprq;           /*  9: opcode; set by ACPI */
> +    uint32_t pprm;           /* 13: parameter for opcode; set by ACPI */
> +    uint32_t lppr;           /* 17: last opcode; set by BIOS */
> +    uint32_t fret;           /* 21: set by ACPI; not used */
> +    uint8_t res1;            /* 25: reserved */
> +    uint32_t res2[4];        /* 26: reserved */
> +    uint8_t  res3[214];      /* 42: reserved */
> +    uint8_t  func[256];      /* 256: per TPM function implementation flags;
> +                                     set by BIOS */
> +/* indication whether function is implemented; bit 0 */
> +#define TPM_PPI_FUNC_IMPLEMENTED       (1 << 0)
> +/* actions OS should take to transition to the pre-OS env.; bits 1, 2 */
> +#define TPM_PPI_FUNC_ACTION_SHUTDOWN   (1 << 1)
> +#define TPM_PPI_FUNC_ACTION_REBOOT     (2 << 1)
> +#define TPM_PPI_FUNC_ACTION_VENDOR     (3 << 1)
> +#define TPM_PPI_FUNC_ACTION_MASK       (3 << 1)
> +/* whether function is blocked by BIOS settings; bits 3,4,5 */
> +#define TPM_PPI_FUNC_NOT_IMPLEMENTED     (0 << 3)
> +#define TPM_PPI_FUNC_BIOS_ONLY           (1 << 3)
> +#define TPM_PPI_FUNC_BLOCKED             (2 << 3)
> +#define TPM_PPI_FUNC_ALLOWED_USR_REQ     (3 << 3)
> +#define TPM_PPI_FUNC_ALLOWED_USR_NOT_REQ (4 << 3)
> +#define TPM_PPI_FUNC_MASK                (7 << 3)
> +} QEMU_PACKED;
> +
> +#define TPM_PPI_STRUCT_SIZE  sizeof(struct tpm_ppi)
> +
>   #endif /* HW_ACPI_TPM_H */

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-09 20:19   ` Stefan Berger
@ 2018-02-12 14:27     ` Igor Mammedov
  2018-02-12 16:44       ` Stefan Berger
  2018-02-12 19:45     ` Kevin O'Connor
  2018-02-12 20:46     ` Kevin O'Connor
  2 siblings, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-12 14:27 UTC (permalink / raw)
  To: Stefan Berger; +Cc: qemu-devel, marcandre.lureau, kevin, lersek, mst

On Fri, 9 Feb 2018 15:19:31 -0500
Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:

> On 01/16/2018 10:51 AM, Stefan Berger wrote:
> > The TPM Physical Presence interface consists of an ACPI part, a shared
> > memory part, and code in the firmware. Users can send messages to the
> > firmware by writing a code into the shared memory through invoking the
> > ACPI code. When a reboot happens, the firmware looks for the code and
> > acts on it by sending sequences of commands to the TPM.
> >
> > This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
> > assume that SMIs are necessary to use. It uses a similar datastructure for
> > the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
> > of it. I extended the shared memory data structure with an array of 256
> > bytes, one for each code that could be implemented. The array contains
> > flags describing the individual codes. This decouples the ACPI implementation
> > from the firmware implementation.
> >
> > The underlying TCG specification is accessible from the following page.
> >
> > https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
> >
> > This patch implements version 1.30.  
> 
> I have played around with this patch and some modifications to EDK2. 
> Though for EDK2 the question is whether to try to circumvent their 
> current implementation that uses SMM or use SMM. With this patch so far 
> I circumvent it, which is maybe not a good idea.
> 
> The facts for EDK2's PPI:
> 
> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM 
> via an SMI and the PPI code is written into a UEFI variable. For this 
> ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS 
> (via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at 
> that address. Once the machine is rebooted, UEFI reads the variable and 
> finds the PPI code and reacts to it.
> 
> 
> The facts for SeaBIOS:
> - we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to 
> this address. So we would have to use the PPI memory device's memory 
> region, which is currently at 0xFED4 5000. SeaBIOS doesn't have 
> persistent memory like NVRAM where it stores varaibles. So to pass the 
> PPI code here the OS would invoke ACPI, which in turn would write the 
> PPI code into the PPI memory directly. Upon reboot SeaBIOS would find 
> the PPI code in the PPI memory device and react to it.
> 
> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes 
> are used by the OperationRegion() in this patch series. The rest was 
> thought of for future extensions.
> 
> To allow both firmwares to use PPI, we would need to be able to have the 
> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and 
> 0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve 
> this would be to have the firmwares choose their operation region base 
> address by writing into the PPI memory device at offset 0x200 (for 
> example). A '1' there would mean that we want the OperationRegion() at 
> 0xFFFF 0000, and a '2' would mean at the base address of the PPI device 
> (0xFED4 5000). This could be achieved by declaring a 2nd 
> OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we 
> declare that 1st OperationRegion()'s address based on findings from 
> there. Further, a flags variable at offset 0x201 could indicate whether 
> an SMI is needed to enter SMM or not. Obviously, the ACPI code would 
> become more complex to be able to support both firmwares at the same time.
> This is a lot of details but I rather post this description before 
> posting more patches. To make these changes and get it to work with at 
> least SeaBIOS is probably fairly easy. But is this acceptable?

You could use hw/acpi/vmgenid.c as example,

use following command to instruct firmware to write address back to QEMU:

    bios_linker_loader_write_pointer(linker,                                     
        TMP_PPI_ADDR_FW_CFG_FILE, 0, sizeof(uint64_t),                           
        TPM_PPI_MEM_FW_CFG_FILE, TPM_PPI_MEM_OFFSET); 

then when address is written into fwcfg, a callback in QEMU would be called due to

    /* Create a read-write fw_cfg file for Address */                            
    fw_cfg_add_file_callback(s, TPM_PPI_ADDR_FW_CFG_FILE, ...);  

when CB is called you could map persistent overlay memory region
(PPI memory device) at address written by firmware.

As for OperationRegion, you can instruct firmware to patch address
in AML as well, using bios_linker_loader_add_pointer() linker command.
what I'd suggest is to use dedicated TPM SSDT table and
at its start put a DWORD/QWORD variable where address would be patched in
and then use that variable as argument to OperationRegion

 ssdt = init_aml_allocator();
 ...
 addr_offset = table_data->len + build_append_named_dword(ssdt->buf, "PPIA");
 ...
 ... aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
                      aml_name("PPIA"), TPM_PPI_STRUCT_SIZE)
 ...
 bios_linker_loader_add_pointer(linker,                                       
       ACPI_BUILD_TABLE_FILE, addr_offset, sizeof(uint32_t),                    
       TPM_PPI_MEM_FW_CFG_FILE, 0);

This way both UEFI and Seabios would use the same implementation and
work the same way.

aml_operation_region("TPPI",..., aml_name("PPIA"), ...)
might work but it's not per spec, so it would be better to add
an API similar to build_append_named_dword() but only for
OperationRegion() which would return its offset within the table.
And skip on intermediate dword variable.


Wrt migration, you'd need to migrate address where PPI memory device
is mapped and its memory region.

As for giving up control to from OS (APCI) to firmware that would be
target depended, one could use writing to SMM port to trigger SMI
for example and something else in case of ARM or x86 SMM-less design.

>     Stefan
> 
> >
> > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> >
> > ---
> >   v1->v2:
> >     - get rid of FAIL variable; function 5 was using it and always
> >       returns 0; the value is related to the ACPI function call not
> >       a possible failure of the TPM function call.
> >     - extend shared memory data structure with per-opcode entries
> >       holding flags and use those flags to determine what to return
> >       to caller
> >     - implement interface version 1.3
> > ---
> >   hw/i386/acpi-build.c        | 273 ++++++++++++++++++++++++++++++++++++++++++++
> >   include/hw/acpi/acpi-defs.h |   2 +
> >   include/hw/acpi/tpm.h       |  31 +++++
> >   3 files changed, 306 insertions(+)
> >
> > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> > index 522d6d2..f8c2d01 100644
> > --- a/hw/i386/acpi-build.c
> > +++ b/hw/i386/acpi-build.c
> > @@ -42,6 +42,7 @@
> >   #include "hw/acpi/memory_hotplug.h"
> >   #include "sysemu/tpm.h"
> >   #include "hw/acpi/tpm.h"
> > +#include "hw/tpm/tpm_ppi.h"
> >   #include "hw/acpi/vmgenid.h"
> >   #include "sysemu/tpm_backend.h"
> >   #include "hw/timer/mc146818rtc_regs.h"
> > @@ -1860,6 +1861,276 @@ static Aml *build_q35_osc_method(void)
> >   }
> >
> >   static void
> > +build_tpm_ppi(Aml *dev, TPMVersion tpm_version)
> > +{
> > +    Aml *method, *field, *ifctx, *ifctx2, *ifctx3, *pak;
> > +
> > +    aml_append(dev,
> > +               aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
> > +                                    aml_int(TPM_PPI_ADDR_BASE),
> > +                                    TPM_PPI_STRUCT_SIZE));
> > +
> > +    field = aml_field("TPPI", AML_ANY_ACC, AML_NOLOCK, AML_PRESERVE);
> > +    aml_append(field, aml_named_field("PPIN",
> > +               sizeof(uint8_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field("PPIP",
> > +               sizeof(uint32_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field("PPRP",
> > +               sizeof(uint32_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field("PPRQ",
> > +               sizeof(uint32_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field("PPRM",
> > +               sizeof(uint32_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_named_field("LPPR",
> > +               sizeof(uint32_t) * BITS_PER_BYTE));
> > +    aml_append(field, aml_reserved_field(
> > +               sizeof(uint32_t) * BITS_PER_BYTE /* FRET */ +
> > +               sizeof(uint8_t) * BITS_PER_BYTE /* MCIN */ +
> > +               sizeof(uint32_t) * BITS_PER_BYTE * 4 /* MCIP .. UCRQ */ +
> > +               sizeof(uint8_t) * BITS_PER_BYTE * 214));
> > +    aml_append(field, aml_named_field("FUNC",
> > +               sizeof(uint8_t) * BITS_PER_BYTE * 256));
> > +    aml_append(dev, field);
> > +
> > +    method = aml_method("_DSM", 4, AML_SERIALIZED);
> > +    {
> > +        uint8_t zerobyte[1] = { 0 };
> > +
> > +        ifctx = aml_if(
> > +                  aml_equal(aml_arg(0),
> > +                            aml_touuid("3DDDFAA6-361B-4EB4-A424-8D10089D1653"))
> > +                );
> > +        {
> > +            aml_append(ifctx,
> > +                       aml_store(aml_to_integer(aml_arg(2)), aml_local(0)));
> > +
> > +            /* standard DSM query function */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(0)));
> > +            {
> > +                uint8_t byte_list[2] = { 0xff, 0x01 };
> > +                aml_append(ifctx2, aml_return(aml_buffer(2, byte_list)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* interface version: 1.3 */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(1)));
> > +            {
> > +                aml_append(ifctx2, aml_return(aml_string("1.3")));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* submit TPM operation */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(2)));
> > +            {
> > +                /* get opcode */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> > +                                                           aml_int(0))),
> > +                                     aml_local(0)));
> > +                /* get opcode flags */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> > +                                                           aml_local(0))),
> > +                                     aml_local(1)));
> > +                ifctx3 = aml_if(
> > +                              aml_equal(
> > +                                  aml_and(aml_local(1),
> > +                                          aml_int(TPM_PPI_FUNC_IMPLEMENTED),
> > +                                          NULL),
> > +                                  aml_int(0)
> > +                              )
> > +                         );
> > +                {
> > +                    /* 1: not implemented */
> > +                    aml_append(ifctx3, aml_return(aml_int(1)));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +                aml_append(ifctx2, aml_store(aml_local(0), aml_name("PPRQ")));
> > +                aml_append(ifctx2, aml_store(aml_int(0), aml_name("PPRM")));
> > +                /* 0: success */
> > +                aml_append(ifctx2, aml_return(aml_int(0)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* get pending TPM operation */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(3)));
> > +            {
> > +                /* revision to integer */
> > +                aml_append(ifctx2,
> > +                           aml_store(
> > +                             aml_to_integer(aml_arg(1)),
> > +                             aml_local(1)));
> > +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
> > +                {
> > +                    pak = aml_package(2);
> > +                    aml_append(pak, aml_int(0));
> > +                    aml_append(pak, aml_name("PPRQ"));
> > +                    aml_append(ifctx3, aml_return(pak));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +
> > +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
> > +                {
> > +                    pak = aml_package(3);
> > +                    aml_append(pak, aml_int(0));
> > +                    aml_append(pak, aml_name("PPRQ"));
> > +                    aml_append(pak, aml_name("PPRM"));
> > +                    aml_append(ifctx3, aml_return(pak));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* get platform-specific action to transition to pre-OS env. */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(4)));
> > +            {
> > +                /* get opcode */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_name("PPRQ"),
> > +                                     aml_local(0)));
> > +                /* get opcode flags */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> > +                                                           aml_local(0))),
> > +                                     aml_local(1)));
> > +                /* return action flags */
> > +                aml_append(ifctx2,
> > +                           aml_return(
> > +                               aml_shiftright(
> > +                                   aml_and(aml_local(1),
> > +                                           aml_int(TPM_PPI_FUNC_ACTION_MASK),
> > +                                           NULL),
> > +                                   aml_int(1),
> > +                                   NULL)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* get TPM operation response */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(5)));
> > +            {
> > +                pak = aml_package(3);
> > +
> > +                aml_append(pak, aml_int(0));
> > +                aml_append(pak, aml_name("LPPR"));
> > +                aml_append(pak, aml_name("PPRP"));
> > +
> > +                aml_append(ifctx2, aml_return(pak));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* submit preferred user language */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(6)));
> > +            {
> > +                /* 3 = not implemented */
> > +                aml_append(ifctx2, aml_return(aml_int(3)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* submit TPM operation v2 */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(7)));
> > +            {
> > +                /* get opcode */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> > +                                                           aml_int(0))),
> > +                                     aml_local(0)));
> > +                /* get opcode flags */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> > +                                                           aml_local(0))),
> > +                                     aml_local(1)));
> > +                ifctx3 = aml_if(
> > +                              aml_equal(
> > +                                  aml_and(aml_local(1),
> > +                                          aml_int(TPM_PPI_FUNC_MASK),
> > +                                          NULL),
> > +                                  aml_int(TPM_PPI_FUNC_NOT_IMPLEMENTED)
> > +                              )
> > +                         );
> > +                {
> > +                    /* 1: not implemented */
> > +                    aml_append(ifctx3, aml_return(aml_int(1)));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +
> > +                ifctx3 = aml_if(
> > +                              aml_equal(
> > +                                  aml_and(
> > +                                      aml_local(1),
> > +                                      aml_int(TPM_PPI_FUNC_MASK),
> > +                                      NULL),
> > +                                  aml_int(TPM_PPI_FUNC_BLOCKED)
> > +                              )
> > +                         );
> > +                {
> > +                    /* 3: blocked by firmware */
> > +                    aml_append(ifctx3, aml_return(aml_int(3)));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +
> > +                /* revision to integer */
> > +                aml_append(ifctx2,
> > +                           aml_store(
> > +                             aml_to_integer(aml_arg(1)),
> > +                             aml_local(1)));
> > +
> > +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
> > +                {
> > +                    /* revision 1 */
> > +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
> > +                    aml_append(ifctx3, aml_store(aml_int(0), aml_name("PPRM")));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +
> > +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
> > +                {
> > +                    /* revision 2 */
> > +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
> > +                    aml_append(ifctx3, aml_store(
> > +                                         aml_derefof(
> > +                                           aml_index(aml_arg(3),
> > +                                                     aml_int(1))),
> > +                                         aml_name("PPRM")));
> > +                }
> > +                aml_append(ifctx2, ifctx3);
> > +                /* 0: success */
> > +                aml_append(ifctx2, aml_return(aml_int(0)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            /* get user confirmation status for operation */
> > +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(8)));
> > +            {
> > +                /* get opcode */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_arg(3),
> > +                                                           aml_int(0))),
> > +                                     aml_local(0)));
> > +                /* get opcode flags */
> > +                aml_append(ifctx2,
> > +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
> > +                                                           aml_local(0))),
> > +                                     aml_local(1)));
> > +                /* return confirmation status code */
> > +                aml_append(ifctx2,
> > +                           aml_return(
> > +                               aml_shiftright(
> > +                                   aml_and(aml_local(1),
> > +                                           aml_int(TPM_PPI_FUNC_MASK),
> > +                                           NULL),
> > +                                   aml_int(3),
> > +                                   NULL)));
> > +            }
> > +            aml_append(ifctx, ifctx2);
> > +
> > +            aml_append(ifctx, aml_return(aml_buffer(1, zerobyte)));
> > +        }
> > +        aml_append(method, ifctx);
> > +    }
> > +    aml_append(dev, method);
> > +}
> > +
> > +static void
> >   build_dsdt(GArray *table_data, BIOSLinker *linker,
> >              AcpiPmInfo *pm, AcpiMiscInfo *misc,
> >              Range *pci_hole, Range *pci_hole64, MachineState *machine)
> > @@ -2218,6 +2489,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> >                    */
> >                   /* aml_append(crs, aml_irq_no_flags(TPM_TIS_IRQ)); */
> >                   aml_append(dev, aml_name_decl("_CRS", crs));
> > +                build_tpm_ppi(dev, misc->tpm_version);
> >                   aml_append(scope, dev);
> >               }
> >
> > @@ -2636,6 +2908,7 @@ static void build_qemu(GArray *table_data, BIOSLinker *linker,
> >       if (tpm_version != TPM_VERSION_UNSPEC) {
> >           qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
> >           qemu->tpm_version = tpm_version;
> > +        qemu->tpmppi_version = TPM_PPI_VERSION_1_30;
> >       }
> >
> >       build_header(linker, table_data,
> > diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
> > index 98764c1..a182194 100644
> > --- a/include/hw/acpi/acpi-defs.h
> > +++ b/include/hw/acpi/acpi-defs.h
> > @@ -578,6 +578,8 @@ struct AcpiTableQemu {
> >       ACPI_TABLE_HEADER_DEF
> >       uint32_t tpmppi_addr;
> >       uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
> > +    uint8_t tpmppi_version;
> > +#define TPM_PPI_VERSION_1_30  1
> >   };
> >   typedef struct AcpiTableQemu AcpiTableQemu;
> >
> > diff --git a/include/hw/acpi/tpm.h b/include/hw/acpi/tpm.h
> > index 16227bc..130a039 100644
> > --- a/include/hw/acpi/tpm.h
> > +++ b/include/hw/acpi/tpm.h
> > @@ -37,4 +37,35 @@
> >   #define TPM_PPI_ADDR_SIZE           0x400
> >   #define TPM_PPI_ADDR_BASE           0xfffef000
> >
> > +struct tpm_ppi {
> > +    uint8_t ppin;            /*  0: set by BIOS */
> > +    uint32_t ppip;           /*  1: set by ACPI; not used */
> > +    uint32_t pprp;           /*  5: response from TPM; set by BIOS */
> > +    uint32_t pprq;           /*  9: opcode; set by ACPI */
> > +    uint32_t pprm;           /* 13: parameter for opcode; set by ACPI */
> > +    uint32_t lppr;           /* 17: last opcode; set by BIOS */
> > +    uint32_t fret;           /* 21: set by ACPI; not used */
> > +    uint8_t res1;            /* 25: reserved */
> > +    uint32_t res2[4];        /* 26: reserved */
> > +    uint8_t  res3[214];      /* 42: reserved */
> > +    uint8_t  func[256];      /* 256: per TPM function implementation flags;
> > +                                     set by BIOS */
> > +/* indication whether function is implemented; bit 0 */
> > +#define TPM_PPI_FUNC_IMPLEMENTED       (1 << 0)
> > +/* actions OS should take to transition to the pre-OS env.; bits 1, 2 */
> > +#define TPM_PPI_FUNC_ACTION_SHUTDOWN   (1 << 1)
> > +#define TPM_PPI_FUNC_ACTION_REBOOT     (2 << 1)
> > +#define TPM_PPI_FUNC_ACTION_VENDOR     (3 << 1)
> > +#define TPM_PPI_FUNC_ACTION_MASK       (3 << 1)
> > +/* whether function is blocked by BIOS settings; bits 3,4,5 */
> > +#define TPM_PPI_FUNC_NOT_IMPLEMENTED     (0 << 3)
> > +#define TPM_PPI_FUNC_BIOS_ONLY           (1 << 3)
> > +#define TPM_PPI_FUNC_BLOCKED             (2 << 3)
> > +#define TPM_PPI_FUNC_ALLOWED_USR_REQ     (3 << 3)
> > +#define TPM_PPI_FUNC_ALLOWED_USR_NOT_REQ (4 << 3)
> > +#define TPM_PPI_FUNC_MASK                (7 << 3)
> > +} QEMU_PACKED;
> > +
> > +#define TPM_PPI_STRUCT_SIZE  sizeof(struct tpm_ppi)
> > +
> >   #endif /* HW_ACPI_TPM_H */  
> 
> 
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 14:27     ` Igor Mammedov
@ 2018-02-12 16:44       ` Stefan Berger
  2018-02-12 17:52         ` Igor Mammedov
  0 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-02-12 16:44 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: qemu-devel, marcandre.lureau, kevin, lersek, mst

On 02/12/2018 09:27 AM, Igor Mammedov wrote:
> On Fri, 9 Feb 2018 15:19:31 -0500
> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
>
>> On 01/16/2018 10:51 AM, Stefan Berger wrote:
>>> The TPM Physical Presence interface consists of an ACPI part, a shared
>>> memory part, and code in the firmware. Users can send messages to the
>>> firmware by writing a code into the shared memory through invoking the
>>> ACPI code. When a reboot happens, the firmware looks for the code and
>>> acts on it by sending sequences of commands to the TPM.
>>>
>>> This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
>>> assume that SMIs are necessary to use. It uses a similar datastructure for
>>> the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
>>> of it. I extended the shared memory data structure with an array of 256
>>> bytes, one for each code that could be implemented. The array contains
>>> flags describing the individual codes. This decouples the ACPI implementation
>>> from the firmware implementation.
>>>
>>> The underlying TCG specification is accessible from the following page.
>>>
>>> https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
>>>
>>> This patch implements version 1.30.
>> I have played around with this patch and some modifications to EDK2.
>> Though for EDK2 the question is whether to try to circumvent their
>> current implementation that uses SMM or use SMM. With this patch so far
>> I circumvent it, which is maybe not a good idea.
>>
>> The facts for EDK2's PPI:
>>
>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM
>> via an SMI and the PPI code is written into a UEFI variable. For this
>> ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS
>> (via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at
>> that address. Once the machine is rebooted, UEFI reads the variable and
>> finds the PPI code and reacts to it.
>>
>>
>> The facts for SeaBIOS:
>> - we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to
>> this address. So we would have to use the PPI memory device's memory
>> region, which is currently at 0xFED4 5000. SeaBIOS doesn't have
>> persistent memory like NVRAM where it stores varaibles. So to pass the
>> PPI code here the OS would invoke ACPI, which in turn would write the
>> PPI code into the PPI memory directly. Upon reboot SeaBIOS would find
>> the PPI code in the PPI memory device and react to it.
>>
>> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes
>> are used by the OperationRegion() in this patch series. The rest was
>> thought of for future extensions.
>>
>> To allow both firmwares to use PPI, we would need to be able to have the
>> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and
>> 0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve
>> this would be to have the firmwares choose their operation region base
>> address by writing into the PPI memory device at offset 0x200 (for
>> example). A '1' there would mean that we want the OperationRegion() at
>> 0xFFFF 0000, and a '2' would mean at the base address of the PPI device
>> (0xFED4 5000). This could be achieved by declaring a 2nd
>> OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we
>> declare that 1st OperationRegion()'s address based on findings from
>> there. Further, a flags variable at offset 0x201 could indicate whether
>> an SMI is needed to enter SMM or not. Obviously, the ACPI code would
>> become more complex to be able to support both firmwares at the same time.
>> This is a lot of details but I rather post this description before
>> posting more patches. To make these changes and get it to work with at
>> least SeaBIOS is probably fairly easy. But is this acceptable?
> You could use hw/acpi/vmgenid.c as example,
>
> use following command to instruct firmware to write address back to QEMU:
>
>      bios_linker_loader_write_pointer(linker,
>          TMP_PPI_ADDR_FW_CFG_FILE, 0, sizeof(uint64_t),
>          TPM_PPI_MEM_FW_CFG_FILE, TPM_PPI_MEM_OFFSET);
>
> then when address is written into fwcfg, a callback in QEMU would be called due to
>
>      /* Create a read-write fw_cfg file for Address */
>      fw_cfg_add_file_callback(s, TPM_PPI_ADDR_FW_CFG_FILE, ...);
>
> when CB is called you could map persistent overlay memory region
> (PPI memory device) at address written by firmware.


> As for OperationRegion, you can instruct firmware to patch address
> in AML as well, using bios_linker_loader_add_pointer() linker command.
> what I'd suggest is to use dedicated TPM SSDT table and
> at its start put a DWORD/QWORD variable where address would be patched in
> and then use that variable as argument to OperationRegion
>
>   ssdt = init_aml_allocator();
>   ...
>   addr_offset = table_data->len + build_append_named_dword(ssdt->buf, "PPIA");
>   ...
>   ... aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
>                        aml_name("PPIA"), TPM_PPI_STRUCT_SIZE)
>   ...
>   bios_linker_loader_add_pointer(linker,
>         ACPI_BUILD_TABLE_FILE, addr_offset, sizeof(uint32_t),
>         TPM_PPI_MEM_FW_CFG_FILE, 0);
>
> This way both UEFI and Seabios would use the same implementation and
> work the same way.

Thanks for this sample code. Though it only applies to how they write 
the base address for the OperationRegion() and not the behaviour of the 
code when it comes to SMI versus non-SMI.

>
> aml_operation_region("TPPI",..., aml_name("PPIA"), ...)
> might work but it's not per spec, so it would be better to add
> an API similar to build_append_named_dword() but only for
> OperationRegion() which would return its offset within the table.
> And skip on intermediate dword variable.
>
>
> Wrt migration, you'd need to migrate address where PPI memory device
> is mapped and its memory region.

So today the PPI memory device is located at some address A. If we 
decided that we need to move it to address B due to a collision with 
another device, then are we going to be able to update the ACPI 
OperationRegion base address post-migration to move it from address A to 
address B? Or should we refuse the migration ? Keeping it at address A 
seems wrong due to the collision.

>
> As for giving up control to from OS (APCI) to firmware that would be
> target depended, one could use writing to SMM port to trigger SMI
> for example and something else in case of ARM or x86 SMM-less design.

Though to support these different cases, we either need to be able to 
generate different ACPI code or make it configurable via some sort of 
register. If we cannot get rid of these flag to let the firmware for 
example choose between non-SMI and SMI, then do we need to use the above 
shown callback mechanisms to avoid a 2nd field that lets one choose the 
base address of the PPI memory device?

>
>>      Stefan
>>
>>> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
>>>
>>> ---
>>>    v1->v2:
>>>      - get rid of FAIL variable; function 5 was using it and always
>>>        returns 0; the value is related to the ACPI function call not
>>>        a possible failure of the TPM function call.
>>>      - extend shared memory data structure with per-opcode entries
>>>        holding flags and use those flags to determine what to return
>>>        to caller
>>>      - implement interface version 1.3
>>> ---
>>>    hw/i386/acpi-build.c        | 273 ++++++++++++++++++++++++++++++++++++++++++++
>>>    include/hw/acpi/acpi-defs.h |   2 +
>>>    include/hw/acpi/tpm.h       |  31 +++++
>>>    3 files changed, 306 insertions(+)
>>>
>>> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
>>> index 522d6d2..f8c2d01 100644
>>> --- a/hw/i386/acpi-build.c
>>> +++ b/hw/i386/acpi-build.c
>>> @@ -42,6 +42,7 @@
>>>    #include "hw/acpi/memory_hotplug.h"
>>>    #include "sysemu/tpm.h"
>>>    #include "hw/acpi/tpm.h"
>>> +#include "hw/tpm/tpm_ppi.h"
>>>    #include "hw/acpi/vmgenid.h"
>>>    #include "sysemu/tpm_backend.h"
>>>    #include "hw/timer/mc146818rtc_regs.h"
>>> @@ -1860,6 +1861,276 @@ static Aml *build_q35_osc_method(void)
>>>    }
>>>
>>>    static void
>>> +build_tpm_ppi(Aml *dev, TPMVersion tpm_version)
>>> +{
>>> +    Aml *method, *field, *ifctx, *ifctx2, *ifctx3, *pak;
>>> +
>>> +    aml_append(dev,
>>> +               aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
>>> +                                    aml_int(TPM_PPI_ADDR_BASE),
>>> +                                    TPM_PPI_STRUCT_SIZE));
>>> +
>>> +    field = aml_field("TPPI", AML_ANY_ACC, AML_NOLOCK, AML_PRESERVE);
>>> +    aml_append(field, aml_named_field("PPIN",
>>> +               sizeof(uint8_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_named_field("PPIP",
>>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_named_field("PPRP",
>>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_named_field("PPRQ",
>>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_named_field("PPRM",
>>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_named_field("LPPR",
>>> +               sizeof(uint32_t) * BITS_PER_BYTE));
>>> +    aml_append(field, aml_reserved_field(
>>> +               sizeof(uint32_t) * BITS_PER_BYTE /* FRET */ +
>>> +               sizeof(uint8_t) * BITS_PER_BYTE /* MCIN */ +
>>> +               sizeof(uint32_t) * BITS_PER_BYTE * 4 /* MCIP .. UCRQ */ +
>>> +               sizeof(uint8_t) * BITS_PER_BYTE * 214));
>>> +    aml_append(field, aml_named_field("FUNC",
>>> +               sizeof(uint8_t) * BITS_PER_BYTE * 256));
>>> +    aml_append(dev, field);
>>> +
>>> +    method = aml_method("_DSM", 4, AML_SERIALIZED);
>>> +    {
>>> +        uint8_t zerobyte[1] = { 0 };
>>> +
>>> +        ifctx = aml_if(
>>> +                  aml_equal(aml_arg(0),
>>> +                            aml_touuid("3DDDFAA6-361B-4EB4-A424-8D10089D1653"))
>>> +                );
>>> +        {
>>> +            aml_append(ifctx,
>>> +                       aml_store(aml_to_integer(aml_arg(2)), aml_local(0)));
>>> +
>>> +            /* standard DSM query function */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(0)));
>>> +            {
>>> +                uint8_t byte_list[2] = { 0xff, 0x01 };
>>> +                aml_append(ifctx2, aml_return(aml_buffer(2, byte_list)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* interface version: 1.3 */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(1)));
>>> +            {
>>> +                aml_append(ifctx2, aml_return(aml_string("1.3")));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* submit TPM operation */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(2)));
>>> +            {
>>> +                /* get opcode */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
>>> +                                                           aml_int(0))),
>>> +                                     aml_local(0)));
>>> +                /* get opcode flags */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
>>> +                                                           aml_local(0))),
>>> +                                     aml_local(1)));
>>> +                ifctx3 = aml_if(
>>> +                              aml_equal(
>>> +                                  aml_and(aml_local(1),
>>> +                                          aml_int(TPM_PPI_FUNC_IMPLEMENTED),
>>> +                                          NULL),
>>> +                                  aml_int(0)
>>> +                              )
>>> +                         );
>>> +                {
>>> +                    /* 1: not implemented */
>>> +                    aml_append(ifctx3, aml_return(aml_int(1)));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +                aml_append(ifctx2, aml_store(aml_local(0), aml_name("PPRQ")));
>>> +                aml_append(ifctx2, aml_store(aml_int(0), aml_name("PPRM")));
>>> +                /* 0: success */
>>> +                aml_append(ifctx2, aml_return(aml_int(0)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* get pending TPM operation */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(3)));
>>> +            {
>>> +                /* revision to integer */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(
>>> +                             aml_to_integer(aml_arg(1)),
>>> +                             aml_local(1)));
>>> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
>>> +                {
>>> +                    pak = aml_package(2);
>>> +                    aml_append(pak, aml_int(0));
>>> +                    aml_append(pak, aml_name("PPRQ"));
>>> +                    aml_append(ifctx3, aml_return(pak));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +
>>> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
>>> +                {
>>> +                    pak = aml_package(3);
>>> +                    aml_append(pak, aml_int(0));
>>> +                    aml_append(pak, aml_name("PPRQ"));
>>> +                    aml_append(pak, aml_name("PPRM"));
>>> +                    aml_append(ifctx3, aml_return(pak));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* get platform-specific action to transition to pre-OS env. */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(4)));
>>> +            {
>>> +                /* get opcode */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_name("PPRQ"),
>>> +                                     aml_local(0)));
>>> +                /* get opcode flags */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
>>> +                                                           aml_local(0))),
>>> +                                     aml_local(1)));
>>> +                /* return action flags */
>>> +                aml_append(ifctx2,
>>> +                           aml_return(
>>> +                               aml_shiftright(
>>> +                                   aml_and(aml_local(1),
>>> +                                           aml_int(TPM_PPI_FUNC_ACTION_MASK),
>>> +                                           NULL),
>>> +                                   aml_int(1),
>>> +                                   NULL)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* get TPM operation response */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(5)));
>>> +            {
>>> +                pak = aml_package(3);
>>> +
>>> +                aml_append(pak, aml_int(0));
>>> +                aml_append(pak, aml_name("LPPR"));
>>> +                aml_append(pak, aml_name("PPRP"));
>>> +
>>> +                aml_append(ifctx2, aml_return(pak));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* submit preferred user language */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(6)));
>>> +            {
>>> +                /* 3 = not implemented */
>>> +                aml_append(ifctx2, aml_return(aml_int(3)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* submit TPM operation v2 */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(7)));
>>> +            {
>>> +                /* get opcode */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
>>> +                                                           aml_int(0))),
>>> +                                     aml_local(0)));
>>> +                /* get opcode flags */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
>>> +                                                           aml_local(0))),
>>> +                                     aml_local(1)));
>>> +                ifctx3 = aml_if(
>>> +                              aml_equal(
>>> +                                  aml_and(aml_local(1),
>>> +                                          aml_int(TPM_PPI_FUNC_MASK),
>>> +                                          NULL),
>>> +                                  aml_int(TPM_PPI_FUNC_NOT_IMPLEMENTED)
>>> +                              )
>>> +                         );
>>> +                {
>>> +                    /* 1: not implemented */
>>> +                    aml_append(ifctx3, aml_return(aml_int(1)));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +
>>> +                ifctx3 = aml_if(
>>> +                              aml_equal(
>>> +                                  aml_and(
>>> +                                      aml_local(1),
>>> +                                      aml_int(TPM_PPI_FUNC_MASK),
>>> +                                      NULL),
>>> +                                  aml_int(TPM_PPI_FUNC_BLOCKED)
>>> +                              )
>>> +                         );
>>> +                {
>>> +                    /* 3: blocked by firmware */
>>> +                    aml_append(ifctx3, aml_return(aml_int(3)));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +
>>> +                /* revision to integer */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(
>>> +                             aml_to_integer(aml_arg(1)),
>>> +                             aml_local(1)));
>>> +
>>> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(1)));
>>> +                {
>>> +                    /* revision 1 */
>>> +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
>>> +                    aml_append(ifctx3, aml_store(aml_int(0), aml_name("PPRM")));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +
>>> +                ifctx3 = aml_if(aml_equal(aml_local(1), aml_int(2)));
>>> +                {
>>> +                    /* revision 2 */
>>> +                    aml_append(ifctx3, aml_store(aml_local(0), aml_name("PPRQ")));
>>> +                    aml_append(ifctx3, aml_store(
>>> +                                         aml_derefof(
>>> +                                           aml_index(aml_arg(3),
>>> +                                                     aml_int(1))),
>>> +                                         aml_name("PPRM")));
>>> +                }
>>> +                aml_append(ifctx2, ifctx3);
>>> +                /* 0: success */
>>> +                aml_append(ifctx2, aml_return(aml_int(0)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            /* get user confirmation status for operation */
>>> +            ifctx2 = aml_if(aml_equal(aml_local(0), aml_int(8)));
>>> +            {
>>> +                /* get opcode */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_arg(3),
>>> +                                                           aml_int(0))),
>>> +                                     aml_local(0)));
>>> +                /* get opcode flags */
>>> +                aml_append(ifctx2,
>>> +                           aml_store(aml_derefof(aml_index(aml_name("FUNC"),
>>> +                                                           aml_local(0))),
>>> +                                     aml_local(1)));
>>> +                /* return confirmation status code */
>>> +                aml_append(ifctx2,
>>> +                           aml_return(
>>> +                               aml_shiftright(
>>> +                                   aml_and(aml_local(1),
>>> +                                           aml_int(TPM_PPI_FUNC_MASK),
>>> +                                           NULL),
>>> +                                   aml_int(3),
>>> +                                   NULL)));
>>> +            }
>>> +            aml_append(ifctx, ifctx2);
>>> +
>>> +            aml_append(ifctx, aml_return(aml_buffer(1, zerobyte)));
>>> +        }
>>> +        aml_append(method, ifctx);
>>> +    }
>>> +    aml_append(dev, method);
>>> +}
>>> +
>>> +static void
>>>    build_dsdt(GArray *table_data, BIOSLinker *linker,
>>>               AcpiPmInfo *pm, AcpiMiscInfo *misc,
>>>               Range *pci_hole, Range *pci_hole64, MachineState *machine)
>>> @@ -2218,6 +2489,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>>>                     */
>>>                    /* aml_append(crs, aml_irq_no_flags(TPM_TIS_IRQ)); */
>>>                    aml_append(dev, aml_name_decl("_CRS", crs));
>>> +                build_tpm_ppi(dev, misc->tpm_version);
>>>                    aml_append(scope, dev);
>>>                }
>>>
>>> @@ -2636,6 +2908,7 @@ static void build_qemu(GArray *table_data, BIOSLinker *linker,
>>>        if (tpm_version != TPM_VERSION_UNSPEC) {
>>>            qemu->tpmppi_addr = TPM_PPI_ADDR_BASE;
>>>            qemu->tpm_version = tpm_version;
>>> +        qemu->tpmppi_version = TPM_PPI_VERSION_1_30;
>>>        }
>>>
>>>        build_header(linker, table_data,
>>> diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
>>> index 98764c1..a182194 100644
>>> --- a/include/hw/acpi/acpi-defs.h
>>> +++ b/include/hw/acpi/acpi-defs.h
>>> @@ -578,6 +578,8 @@ struct AcpiTableQemu {
>>>        ACPI_TABLE_HEADER_DEF
>>>        uint32_t tpmppi_addr;
>>>        uint8_t tpm_version; /* 1 = 1.2, 2 = 2 */
>>> +    uint8_t tpmppi_version;
>>> +#define TPM_PPI_VERSION_1_30  1
>>>    };
>>>    typedef struct AcpiTableQemu AcpiTableQemu;
>>>
>>> diff --git a/include/hw/acpi/tpm.h b/include/hw/acpi/tpm.h
>>> index 16227bc..130a039 100644
>>> --- a/include/hw/acpi/tpm.h
>>> +++ b/include/hw/acpi/tpm.h
>>> @@ -37,4 +37,35 @@
>>>    #define TPM_PPI_ADDR_SIZE           0x400
>>>    #define TPM_PPI_ADDR_BASE           0xfffef000
>>>
>>> +struct tpm_ppi {
>>> +    uint8_t ppin;            /*  0: set by BIOS */
>>> +    uint32_t ppip;           /*  1: set by ACPI; not used */
>>> +    uint32_t pprp;           /*  5: response from TPM; set by BIOS */
>>> +    uint32_t pprq;           /*  9: opcode; set by ACPI */
>>> +    uint32_t pprm;           /* 13: parameter for opcode; set by ACPI */
>>> +    uint32_t lppr;           /* 17: last opcode; set by BIOS */
>>> +    uint32_t fret;           /* 21: set by ACPI; not used */
>>> +    uint8_t res1;            /* 25: reserved */
>>> +    uint32_t res2[4];        /* 26: reserved */
>>> +    uint8_t  res3[214];      /* 42: reserved */
>>> +    uint8_t  func[256];      /* 256: per TPM function implementation flags;
>>> +                                     set by BIOS */
>>> +/* indication whether function is implemented; bit 0 */
>>> +#define TPM_PPI_FUNC_IMPLEMENTED       (1 << 0)
>>> +/* actions OS should take to transition to the pre-OS env.; bits 1, 2 */
>>> +#define TPM_PPI_FUNC_ACTION_SHUTDOWN   (1 << 1)
>>> +#define TPM_PPI_FUNC_ACTION_REBOOT     (2 << 1)
>>> +#define TPM_PPI_FUNC_ACTION_VENDOR     (3 << 1)
>>> +#define TPM_PPI_FUNC_ACTION_MASK       (3 << 1)
>>> +/* whether function is blocked by BIOS settings; bits 3,4,5 */
>>> +#define TPM_PPI_FUNC_NOT_IMPLEMENTED     (0 << 3)
>>> +#define TPM_PPI_FUNC_BIOS_ONLY           (1 << 3)
>>> +#define TPM_PPI_FUNC_BLOCKED             (2 << 3)
>>> +#define TPM_PPI_FUNC_ALLOWED_USR_REQ     (3 << 3)
>>> +#define TPM_PPI_FUNC_ALLOWED_USR_NOT_REQ (4 << 3)
>>> +#define TPM_PPI_FUNC_MASK                (7 << 3)
>>> +} QEMU_PACKED;
>>> +
>>> +#define TPM_PPI_STRUCT_SIZE  sizeof(struct tpm_ppi)
>>> +
>>>    #endif /* HW_ACPI_TPM_H */
>>
>>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 16:44       ` Stefan Berger
@ 2018-02-12 17:52         ` Igor Mammedov
  2018-02-12 18:45           ` Stefan Berger
  0 siblings, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-12 17:52 UTC (permalink / raw)
  To: Stefan Berger; +Cc: marcandre.lureau, kevin, lersek, qemu-devel, mst

On Mon, 12 Feb 2018 11:44:16 -0500
Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:

> On 02/12/2018 09:27 AM, Igor Mammedov wrote:
> > On Fri, 9 Feb 2018 15:19:31 -0500
> > Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> >  
> >> On 01/16/2018 10:51 AM, Stefan Berger wrote:  
> >>> The TPM Physical Presence interface consists of an ACPI part, a shared
> >>> memory part, and code in the firmware. Users can send messages to the
> >>> firmware by writing a code into the shared memory through invoking the
> >>> ACPI code. When a reboot happens, the firmware looks for the code and
> >>> acts on it by sending sequences of commands to the TPM.
> >>>
> >>> This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
> >>> assume that SMIs are necessary to use. It uses a similar datastructure for
> >>> the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
> >>> of it. I extended the shared memory data structure with an array of 256
> >>> bytes, one for each code that could be implemented. The array contains
> >>> flags describing the individual codes. This decouples the ACPI implementation
> >>> from the firmware implementation.
> >>>
> >>> The underlying TCG specification is accessible from the following page.
> >>>
> >>> https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
> >>>
> >>> This patch implements version 1.30.  
> >> I have played around with this patch and some modifications to EDK2.
> >> Though for EDK2 the question is whether to try to circumvent their
> >> current implementation that uses SMM or use SMM. With this patch so far
> >> I circumvent it, which is maybe not a good idea.
> >>
> >> The facts for EDK2's PPI:
> >>
> >> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM
> >> via an SMI and the PPI code is written into a UEFI variable. For this
> >> ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS
> >> (via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at
> >> that address. Once the machine is rebooted, UEFI reads the variable and
> >> finds the PPI code and reacts to it.
> >>
> >>
> >> The facts for SeaBIOS:
> >> - we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to
> >> this address. So we would have to use the PPI memory device's memory
> >> region, which is currently at 0xFED4 5000. SeaBIOS doesn't have
> >> persistent memory like NVRAM where it stores varaibles. So to pass the
> >> PPI code here the OS would invoke ACPI, which in turn would write the
> >> PPI code into the PPI memory directly. Upon reboot SeaBIOS would find
> >> the PPI code in the PPI memory device and react to it.
> >>
> >> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes
> >> are used by the OperationRegion() in this patch series. The rest was
> >> thought of for future extensions.
> >>
> >> To allow both firmwares to use PPI, we would need to be able to have the
> >> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and
> >> 0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve
> >> this would be to have the firmwares choose their operation region base
> >> address by writing into the PPI memory device at offset 0x200 (for
> >> example). A '1' there would mean that we want the OperationRegion() at
> >> 0xFFFF 0000, and a '2' would mean at the base address of the PPI device
> >> (0xFED4 5000). This could be achieved by declaring a 2nd
> >> OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we
> >> declare that 1st OperationRegion()'s address based on findings from
> >> there. Further, a flags variable at offset 0x201 could indicate whether
> >> an SMI is needed to enter SMM or not. Obviously, the ACPI code would
> >> become more complex to be able to support both firmwares at the same time.
> >> This is a lot of details but I rather post this description before
> >> posting more patches. To make these changes and get it to work with at
> >> least SeaBIOS is probably fairly easy. But is this acceptable?  
> > You could use hw/acpi/vmgenid.c as example,
> >
> > use following command to instruct firmware to write address back to QEMU:
> >
> >      bios_linker_loader_write_pointer(linker,
> >          TMP_PPI_ADDR_FW_CFG_FILE, 0, sizeof(uint64_t),
> >          TPM_PPI_MEM_FW_CFG_FILE, TPM_PPI_MEM_OFFSET);
> >
> > then when address is written into fwcfg, a callback in QEMU would be called due to
> >
> >      /* Create a read-write fw_cfg file for Address */
> >      fw_cfg_add_file_callback(s, TPM_PPI_ADDR_FW_CFG_FILE, ...);
> >
> > when CB is called you could map persistent overlay memory region
> > (PPI memory device) at address written by firmware.  
> 
> 
> > As for OperationRegion, you can instruct firmware to patch address
> > in AML as well, using bios_linker_loader_add_pointer() linker command.
> > what I'd suggest is to use dedicated TPM SSDT table and
> > at its start put a DWORD/QWORD variable where address would be patched in
> > and then use that variable as argument to OperationRegion
> >
> >   ssdt = init_aml_allocator();
> >   ...
> >   addr_offset = table_data->len + build_append_named_dword(ssdt->buf, "PPIA");
> >   ...
> >   ... aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
> >                        aml_name("PPIA"), TPM_PPI_STRUCT_SIZE)
> >   ...
> >   bios_linker_loader_add_pointer(linker,
> >         ACPI_BUILD_TABLE_FILE, addr_offset, sizeof(uint32_t),
> >         TPM_PPI_MEM_FW_CFG_FILE, 0);
> >
> > This way both UEFI and Seabios would use the same implementation and
> > work the same way.  
> 
> Thanks for this sample code. Though it only applies to how they write 
> the base address for the OperationRegion() and not the behaviour of the 
> code when it comes to SMI versus non-SMI.
From what I've gathered from discussions around the topic
is that untrusted guest code writes request into PPI memory
and then FW on reboot should act on it somehow (i.e. ask
user a terminal id changes are ok).

So I'd say, SMI versus non-SMI is a separate issue and we shouldn't
mix it with how firmware allocates PPI memory.

> > aml_operation_region("TPPI",..., aml_name("PPIA"), ...)
> > might work but it's not per spec, so it would be better to add
> > an API similar to build_append_named_dword() but only for
> > OperationRegion() which would return its offset within the table.
> > And skip on intermediate dword variable.
> >
> >
> > Wrt migration, you'd need to migrate address where PPI memory device
> > is mapped and its memory region.  
> 
> So today the PPI memory device is located at some address A. If we 
> decided that we need to move it to address B due to a collision with 
> another device, then are we going to be able to update the ACPI 
> OperationRegion base address post-migration to move it from address A to 
> address B? Or should we refuse the migration ? Keeping it at address A 
> seems wrong due to the collision.
Rule of the thumb is that HW on src and dst must be the same,
which implies the same address.
Also note that firmware on src is in RAM and it's also
migrated including address it's allocated/reported to QEMU on src.
So above mismatch nor collision isn't possible.

Though, more important point is that QEMU doesn't have to
give a damn (besides migrating this values from src to dst and
remapping overlay PPI memory region at that address (PCI code does
similar thing for migrated BARs)) about where in RAM this address
is allocated (it's upto firmware and should eliminate cross
version QEMU migration issues).
On new boot dst could get a new firmware which might allocate
region somewhere else and it would still work if it's migrated
back to the old host.

> > As for giving up control to from OS (APCI) to firmware that would be
> > target depended, one could use writing to SMM port to trigger SMI
> > for example and something else in case of ARM or x86 SMM-less design.  
> 
> Though to support these different cases, we either need to be able to 
> generate different ACPI code or make it configurable via some sort of 
> register. If we cannot get rid of these flag to let the firmware for 
> example choose between non-SMI and SMI, then do we need to use the above 
> shown callback mechanisms to avoid a 2nd field that lets one choose the 
> base address of the PPI memory device?
I'm sorry this is long discussion across several threads and
I've lost track of SMI vs SMI-less issue.
Could you point it out to me or describe here again what's the
problem and why we would need one vs another.

> >>      Stefan
[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 17:52         ` Igor Mammedov
@ 2018-02-12 18:45           ` Stefan Berger
  2018-02-13 12:50             ` Igor Mammedov
  0 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-02-12 18:45 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: marcandre.lureau, kevin, lersek, qemu-devel, mst

On 02/12/2018 12:52 PM, Igor Mammedov wrote:
> On Mon, 12 Feb 2018 11:44:16 -0500
> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
>
>> On 02/12/2018 09:27 AM, Igor Mammedov wrote:
>>> On Fri, 9 Feb 2018 15:19:31 -0500
>>> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
>>>   
>>>> On 01/16/2018 10:51 AM, Stefan Berger wrote:
>>>>> The TPM Physical Presence interface consists of an ACPI part, a shared
>>>>> memory part, and code in the firmware. Users can send messages to the
>>>>> firmware by writing a code into the shared memory through invoking the
>>>>> ACPI code. When a reboot happens, the firmware looks for the code and
>>>>> acts on it by sending sequences of commands to the TPM.
>>>>>
>>>>> This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
>>>>> assume that SMIs are necessary to use. It uses a similar datastructure for
>>>>> the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
>>>>> of it. I extended the shared memory data structure with an array of 256
>>>>> bytes, one for each code that could be implemented. The array contains
>>>>> flags describing the individual codes. This decouples the ACPI implementation
>>>>> from the firmware implementation.
>>>>>
>>>>> The underlying TCG specification is accessible from the following page.
>>>>>
>>>>> https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
>>>>>
>>>>> This patch implements version 1.30.
>>>> I have played around with this patch and some modifications to EDK2.
>>>> Though for EDK2 the question is whether to try to circumvent their
>>>> current implementation that uses SMM or use SMM. With this patch so far
>>>> I circumvent it, which is maybe not a good idea.
>>>>
>>>> The facts for EDK2's PPI:
>>>>
>>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM
>>>> via an SMI and the PPI code is written into a UEFI variable. For this
>>>> ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS
>>>> (via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at
>>>> that address. Once the machine is rebooted, UEFI reads the variable and
>>>> finds the PPI code and reacts to it.
>>>>
>>>>
>>>> The facts for SeaBIOS:
>>>> - we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to
>>>> this address. So we would have to use the PPI memory device's memory
>>>> region, which is currently at 0xFED4 5000. SeaBIOS doesn't have
>>>> persistent memory like NVRAM where it stores varaibles. So to pass the
>>>> PPI code here the OS would invoke ACPI, which in turn would write the
>>>> PPI code into the PPI memory directly. Upon reboot SeaBIOS would find
>>>> the PPI code in the PPI memory device and react to it.
>>>>
>>>> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes
>>>> are used by the OperationRegion() in this patch series. The rest was
>>>> thought of for future extensions.
>>>>
>>>> To allow both firmwares to use PPI, we would need to be able to have the
>>>> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and
>>>> 0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve
>>>> this would be to have the firmwares choose their operation region base
>>>> address by writing into the PPI memory device at offset 0x200 (for
>>>> example). A '1' there would mean that we want the OperationRegion() at
>>>> 0xFFFF 0000, and a '2' would mean at the base address of the PPI device
>>>> (0xFED4 5000). This could be achieved by declaring a 2nd
>>>> OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we
>>>> declare that 1st OperationRegion()'s address based on findings from
>>>> there. Further, a flags variable at offset 0x201 could indicate whether
>>>> an SMI is needed to enter SMM or not. Obviously, the ACPI code would
>>>> become more complex to be able to support both firmwares at the same time.
>>>> This is a lot of details but I rather post this description before
>>>> posting more patches. To make these changes and get it to work with at
>>>> least SeaBIOS is probably fairly easy. But is this acceptable?
>>> You could use hw/acpi/vmgenid.c as example,
>>>
>>> use following command to instruct firmware to write address back to QEMU:
>>>
>>>       bios_linker_loader_write_pointer(linker,
>>>           TMP_PPI_ADDR_FW_CFG_FILE, 0, sizeof(uint64_t),
>>>           TPM_PPI_MEM_FW_CFG_FILE, TPM_PPI_MEM_OFFSET);
>>>
>>> then when address is written into fwcfg, a callback in QEMU would be called due to
>>>
>>>       /* Create a read-write fw_cfg file for Address */
>>>       fw_cfg_add_file_callback(s, TPM_PPI_ADDR_FW_CFG_FILE, ...);
>>>
>>> when CB is called you could map persistent overlay memory region
>>> (PPI memory device) at address written by firmware.
>>
>>> As for OperationRegion, you can instruct firmware to patch address
>>> in AML as well, using bios_linker_loader_add_pointer() linker command.
>>> what I'd suggest is to use dedicated TPM SSDT table and
>>> at its start put a DWORD/QWORD variable where address would be patched in
>>> and then use that variable as argument to OperationRegion
>>>
>>>    ssdt = init_aml_allocator();
>>>    ...
>>>    addr_offset = table_data->len + build_append_named_dword(ssdt->buf, "PPIA");
>>>    ...
>>>    ... aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
>>>                         aml_name("PPIA"), TPM_PPI_STRUCT_SIZE)
>>>    ...
>>>    bios_linker_loader_add_pointer(linker,
>>>          ACPI_BUILD_TABLE_FILE, addr_offset, sizeof(uint32_t),
>>>          TPM_PPI_MEM_FW_CFG_FILE, 0);
>>>
>>> This way both UEFI and Seabios would use the same implementation and
>>> work the same way.
>> Thanks for this sample code. Though it only applies to how they write
>> the base address for the OperationRegion() and not the behaviour of the
>> code when it comes to SMI versus non-SMI.
>  From what I've gathered from discussions around the topic
> is that untrusted guest code writes request into PPI memory
> and then FW on reboot should act on it somehow (i.e. ask
> user a terminal id changes are ok).
>
> So I'd say, SMI versus non-SMI is a separate issue and we shouldn't
> mix it with how firmware allocates PPI memory.
>
>>> aml_operation_region("TPPI",..., aml_name("PPIA"), ...)
>>> might work but it's not per spec, so it would be better to add
>>> an API similar to build_append_named_dword() but only for
>>> OperationRegion() which would return its offset within the table.
>>> And skip on intermediate dword variable.
>>>
>>>
>>> Wrt migration, you'd need to migrate address where PPI memory device
>>> is mapped and its memory region.
>> So today the PPI memory device is located at some address A. If we
>> decided that we need to move it to address B due to a collision with
>> another device, then are we going to be able to update the ACPI
>> OperationRegion base address post-migration to move it from address A to
>> address B? Or should we refuse the migration ? Keeping it at address A
>> seems wrong due to the collision.
> Rule of the thumb is that HW on src and dst must be the same,
> which implies the same address.

Following this we don't need to migrate the address, do we ? My current 
implementation doesn't and uses a constant to set the subregion.

+void tpm_ppi_init_io(TPMPPI *tpmppi, struct MemoryRegion *m, Object *obj)
+{
+    memory_region_init_io(&tpmppi->mmio, obj, &tpm_ppi_memory_ops,
+                          tpmppi, "tpm-ppi-mmio",
+                          TPM_PPI_ADDR_SIZE);
+
+    memory_region_add_subregion(m, TPM_PPI_ADDR_BASE, &tpmppi->mmio);
+}


Here TPM_PPI_ADDR_BASE is set to 0xfed4 5000. Is that wrong and would 
have to be dynamic?I must admit I haven't spent much thought on EDK2's 
0xffff 0000 area and how that would be handled. It may very well have to 
be yet another region because in the current PPI memory is more data 
stored than what EDK2 uses in the 0xffff 0000 region.

> Also note that firmware on src is in RAM and it's also
> migrated including address it's allocated/reported to QEMU on src.
> So above mismatch nor collision isn't possible.

> Though, more important point is that QEMU doesn't have to
> give a damn (besides migrating this values from src to dst and
> remapping overlay PPI memory region at that address (PCI code does
> similar thing for migrated BARs)) about where in RAM this address
> is allocated (it's upto firmware and should eliminate cross
> version QEMU migration issues).
> On new boot dst could get a new firmware which might allocate
> region somewhere else and it would still work if it's migrated
> back to the old host.

That it can get a region somewhere else would depend on 'what'? Would 
the firmware decide the address of the PPI device? So the call 
'memory_region_add_subregion(m, TPM_PPI_ADDR_BASE, &tpmppi->mmio)' above 
would have to be called by that callback ?

I don't understand the part why 'it would still work if it's migrated 
back to the old host.'  Maybe there's more flexibility with moving 
addresses between migrations than what I currently know how to 
implement. My understanding is that the base address of the PPI device 
cannot change and both source and destination QEMU need to have it at 
the same address. As I point out above, I am working with a hard coded 
address that may have to be changed (=edited in the .h file) if we ever 
were to detect a collision with another device.

>
>>> As for giving up control to from OS (APCI) to firmware that would be
>>> target depended, one could use writing to SMM port to trigger SMI
>>> for example and something else in case of ARM or x86 SMM-less design.
>> Though to support these different cases, we either need to be able to
>> generate different ACPI code or make it configurable via some sort of
>> register. If we cannot get rid of these flag to let the firmware for
>> example choose between non-SMI and SMI, then do we need to use the above
>> shown callback mechanisms to avoid a 2nd field that lets one choose the
>> base address of the PPI memory device?
> I'm sorry this is long discussion across several threads and
> I've lost track of SMI vs SMI-less issue.
> Could you point it out to me or describe here again what's the
> problem and why we would need one vs another.

See the explanation above. With SeaBIOS we wouldn't need SMM since we 
wouldn't write the data in anything else than PPI. EDK2 uses SMM to 
write the data into UEFI variables and ACPI would transition to it via 
that interrupt.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-09 20:19   ` Stefan Berger
  2018-02-12 14:27     ` Igor Mammedov
@ 2018-02-12 19:45     ` Kevin O'Connor
  2018-02-12 20:17       ` Stefan Berger
  2018-02-12 20:46     ` Kevin O'Connor
  2 siblings, 1 reply; 35+ messages in thread
From: Kevin O'Connor @ 2018-02-12 19:45 UTC (permalink / raw)
  To: Stefan Berger; +Cc: qemu-devel, marcandre.lureau, lersek, mst

On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:
> I have played around with this patch and some modifications to EDK2. Though
> for EDK2 the question is whether to try to circumvent their current
> implementation that uses SMM or use SMM. With this patch so far I circumvent
> it, which is maybe not a good idea.
> 
> The facts for EDK2's PPI:
> 
> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
> SMM. This is declared in ACPI with an OperationRegion() at that address.
> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
> and reacts to it.

I'm a bit confused by this.  The top 1M of the first 4G of ram is
generally mapped to the flash device on real machines.  Indeed, this
is part of the mechanism used to boot an X86 machine - it starts
execution from flash at 0xfffffff0.  This is true even on modern
machines.

So, it seems strange that UEFI is pushing a code through a memory
device at 0xffff0000.  I can't see how that would be portable.  Are
you sure the memory write to 0xffff0000 is not just a trigger to
invoke the SMI?

It sounds as if the ultimate TPM interface that must be supported is
the ACPI DSDT methods.  Since you're crafting the DSDT table yourself,
why does there need to be two different backends - why can't the same
mechanism be used for both SeaBIOS and UEFI?  Is this because you are
looking to reuse TPM code already in UEFI that requires a specific
interface?

-Kevin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 19:45     ` Kevin O'Connor
@ 2018-02-12 20:17       ` Stefan Berger
  2018-02-13 12:57         ` Igor Mammedov
  0 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-02-12 20:17 UTC (permalink / raw)
  To: Kevin O'Connor; +Cc: qemu-devel, marcandre.lureau, lersek, mst

On 02/12/2018 02:45 PM, Kevin O'Connor wrote:
> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:
>> I have played around with this patch and some modifications to EDK2. Though
>> for EDK2 the question is whether to try to circumvent their current
>> implementation that uses SMM or use SMM. With this patch so far I circumvent
>> it, which is maybe not a good idea.
>>
>> The facts for EDK2's PPI:
>>
>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
>> SMM. This is declared in ACPI with an OperationRegion() at that address.
>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
>> and reacts to it.
> I'm a bit confused by this.  The top 1M of the first 4G of ram is
> generally mapped to the flash device on real machines.  Indeed, this
> is part of the mechanism used to boot an X86 machine - it starts
> execution from flash at 0xfffffff0.  This is true even on modern
> machines.
>
> So, it seems strange that UEFI is pushing a code through a memory
> device at 0xffff0000.  I can't see how that would be portable.  Are
> you sure the memory write to 0xffff0000 is not just a trigger to
> invoke the SMI?

I base this on the code here:

https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81

OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)

>
> It sounds as if the ultimate TPM interface that must be supported is
> the ACPI DSDT methods.  Since you're crafting the DSDT table yourself,
> why does there need to be two different backends - why can't the same
> mechanism be used for both SeaBIOS and UEFI?  Is this because you are
> looking to reuse TPM code already in UEFI that requires a specific
> interface?

UEFI uses SMM to write the PPI code into a UEFI variable that it then 
presumably stores back to NVRAM. With SeaBIOS I would go the path of 
having the ACPI code write the code in the OperationRegion() and leave 
it at that, so not invoke SMM. EDK2 also reads the result of the 
operation from a register that SMM uses to pass a return value through. 
We wouldn't need that for SeaBIOS.

    Stefan

>
> -Kevin
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-09 20:19   ` Stefan Berger
  2018-02-12 14:27     ` Igor Mammedov
  2018-02-12 19:45     ` Kevin O'Connor
@ 2018-02-12 20:46     ` Kevin O'Connor
  2018-02-12 20:49       ` Stefan Berger
  2 siblings, 1 reply; 35+ messages in thread
From: Kevin O'Connor @ 2018-02-12 20:46 UTC (permalink / raw)
  To: Stefan Berger; +Cc: qemu-devel, marcandre.lureau, lersek, mst

On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:
> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes are
> used by the OperationRegion() in this patch series. The rest was thought of
> for future extensions.
>
> To allow both firmwares to use PPI, we would need to be able to have the
> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and 0xFED4
> 5000 for SeaBIOS, per choice of the firmware. One way to achieve this would
> be to have the firmwares choose their operation region base address by
> writing into the PPI memory device at offset 0x200 (for example). A '1'
> there would mean that we want the OperationRegion() at 0xFFFF 0000, and a
> '2' would mean at the base address of the PPI device (0xFED4 5000). This
> could be achieved by declaring a 2nd OperationRegion() in the ACPI code that
> is located at 0xFED4 5200 and we declare that 1st OperationRegion()'s
> address based on findings from there. Further, a flags variable at offset
> 0x201 could indicate whether an SMI is needed to enter SMM or not.
> Obviously, the ACPI code would become more complex to be able to support
> both firmwares at the same time.
>
> This is a lot of details but I rather post this description before posting
> more patches. To make these changes and get it to work with at least SeaBIOS
> is probably fairly easy. But is this acceptable?

I'm not sure I fully understand the goals of the PPI interface.
Here's what I understand so far:

The TPM specs define some actions that are considered privileged.  An
example of this would be disabling the TPM itself.  In order to
prevent an attacker from performing these actions without
authorization, the TPM specs define a mechanism to assert "physical
presence" before the privileged action can be done.  They do this by
having the firmware present a menu during early boot that permits
these privileged operations, and then the firmware locks the TPM chip
so the actions can no longer be done by any software that runs after
the firmware.  Thus "physical presence" is asserted by demonstrating
one has console access to the machine during early boot.

The PPI spec implements a work around for this - presumably some found
the enforcement mechanism too onerous.  It allows the OS to provide a
request code to the firmware, and on the next boot the firmware will
take the requested action before it locks the chip.  Thus allowing the
OS to indirectly perform the privileged action even after the chip has
been locked.  Thus, the PPI system seems to be an "elaborate hack" to
allow users to circumvent the physical presence mechanism (if they
choose to).

Here's what I understand the proposed implementation involves:

1 - in addition to emulating the TPM device itself, QEMU will also
    introduce a virtual memory device with 0x400 bytes.

2 - on first boot the firmware (seabios and uefi) will populate the
    memory region created in step 1.  In particular it will fill an
    array with the list of request codes it supports.  (Each request
    is an 8bit value, the array has 256 entries.)

3 - QEMU will produce AML code implementing the standard PPI ACPI
    interface.  This AML code will take the request, find the table
    produced in step 1, compare it to the list of accepted requests
    produced in step 2, and then place the 8bit request in another
    qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).

4 - the OS will signal a reboot, qemu will do its normal reboot logic,
    and the firmware will be run again.

5 - the firmware will extract the code written in stage 3, and if the
    tpm device has been configured to accept PPI codes from the OS, it
    will invoke the requested action.

Did I understand the above correctly?

Separately, is there someone clamoring for PPI support today?  That
is, is the goal to implement PPI to make QEMU more spec compliant, or
is there someone struggling to perform a particular task today that
this support would improve?

Thanks,
-Kevin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 20:46     ` Kevin O'Connor
@ 2018-02-12 20:49       ` Stefan Berger
  2018-02-13 16:16         ` Laszlo Ersek
  0 siblings, 1 reply; 35+ messages in thread
From: Stefan Berger @ 2018-02-12 20:49 UTC (permalink / raw)
  To: Kevin O'Connor; +Cc: qemu-devel, marcandre.lureau, lersek, mst

On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:
>> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes are
>> used by the OperationRegion() in this patch series. The rest was thought of
>> for future extensions.
>>
>> To allow both firmwares to use PPI, we would need to be able to have the
>> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and 0xFED4
>> 5000 for SeaBIOS, per choice of the firmware. One way to achieve this would
>> be to have the firmwares choose their operation region base address by
>> writing into the PPI memory device at offset 0x200 (for example). A '1'
>> there would mean that we want the OperationRegion() at 0xFFFF 0000, and a
>> '2' would mean at the base address of the PPI device (0xFED4 5000). This
>> could be achieved by declaring a 2nd OperationRegion() in the ACPI code that
>> is located at 0xFED4 5200 and we declare that 1st OperationRegion()'s
>> address based on findings from there. Further, a flags variable at offset
>> 0x201 could indicate whether an SMI is needed to enter SMM or not.
>> Obviously, the ACPI code would become more complex to be able to support
>> both firmwares at the same time.
>>
>> This is a lot of details but I rather post this description before posting
>> more patches. To make these changes and get it to work with at least SeaBIOS
>> is probably fairly easy. But is this acceptable?
> I'm not sure I fully understand the goals of the PPI interface.
> Here's what I understand so far:
>
> The TPM specs define some actions that are considered privileged.  An
> example of this would be disabling the TPM itself.  In order to
> prevent an attacker from performing these actions without
> authorization, the TPM specs define a mechanism to assert "physical
> presence" before the privileged action can be done.  They do this by
> having the firmware present a menu during early boot that permits
> these privileged operations, and then the firmware locks the TPM chip
> so the actions can no longer be done by any software that runs after
> the firmware.  Thus "physical presence" is asserted by demonstrating
> one has console access to the machine during early boot.
>
> The PPI spec implements a work around for this - presumably some found
> the enforcement mechanism too onerous.  It allows the OS to provide a
> request code to the firmware, and on the next boot the firmware will
> take the requested action before it locks the chip.  Thus allowing the
> OS to indirectly perform the privileged action even after the chip has
> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
> allow users to circumvent the physical presence mechanism (if they
> choose to).

Correct.
>
> Here's what I understand the proposed implementation involves:
>
> 1 - in addition to emulating the TPM device itself, QEMU will also
>      introduce a virtual memory device with 0x400 bytes.
Correct.
>
> 2 - on first boot the firmware (seabios and uefi) will populate the
>      memory region created in step 1.  In particular it will fill an
>      array with the list of request codes it supports.  (Each request
>      is an 8bit value, the array has 256 entries.)
Correct. Each firmware would fill out the 256 byte array depending on 
what it supports. The 8 bit values are basically flags and so on.
> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>      interface.  This AML code will take the request, find the table
>      produced in step 1, compare it to the list of accepted requests
>      produced in step 2, and then place the 8bit request in another
>      qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).

Correct.

Now EDK2 wants to store the code in a UEFI variable in NVRAM. We 
therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to 
do this.

> 4 - the OS will signal a reboot, qemu will do its normal reboot logic,
>      and the firmware will be run again.
>
> 5 - the firmware will extract the code written in stage 3, and if the
>      tpm device has been configured to accept PPI codes from the OS, it
>      will invoke the requested action.

SeaBIOS would look into memory to find the code. EDK2 will read the code 
from a UEFI variable.

> Did I understand the above correctly?
I think so. With the fine differences between SeaBIOS and EDK2 pointed out.

> Separately, is there someone clamoring for PPI support today?  That
> is, is the goal to implement PPI to make QEMU more spec compliant, or
> is there someone struggling to perform a particular task today that
> this support would improve?

We could defer the implementation of this. My main goal was to to 
support TPM migration in QEMU and the PPI device also needs to be 
migrated as part of TPM migration. So that's why I am looking at PPI now.

     Stefan
> Thanks,
> -Kevin
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 18:45           ` Stefan Berger
@ 2018-02-13 12:50             ` Igor Mammedov
  0 siblings, 0 replies; 35+ messages in thread
From: Igor Mammedov @ 2018-02-13 12:50 UTC (permalink / raw)
  To: Stefan Berger; +Cc: marcandre.lureau, kevin, lersek, qemu-devel, mst

On Mon, 12 Feb 2018 13:45:17 -0500
Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:

> On 02/12/2018 12:52 PM, Igor Mammedov wrote:
> > On Mon, 12 Feb 2018 11:44:16 -0500
> > Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> >  
> >> On 02/12/2018 09:27 AM, Igor Mammedov wrote:  
> >>> On Fri, 9 Feb 2018 15:19:31 -0500
> >>> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> >>>     
> >>>> On 01/16/2018 10:51 AM, Stefan Berger wrote:  
> >>>>> The TPM Physical Presence interface consists of an ACPI part, a shared
> >>>>> memory part, and code in the firmware. Users can send messages to the
> >>>>> firmware by writing a code into the shared memory through invoking the
> >>>>> ACPI code. When a reboot happens, the firmware looks for the code and
> >>>>> acts on it by sending sequences of commands to the TPM.
> >>>>>
> >>>>> This patch adds the ACPI code. It is similar to the one in EDK2 but doesn't
> >>>>> assume that SMIs are necessary to use. It uses a similar datastructure for
> >>>>> the shared memory as EDK2 does so that EDK2 and SeaBIOS could both make use
> >>>>> of it. I extended the shared memory data structure with an array of 256
> >>>>> bytes, one for each code that could be implemented. The array contains
> >>>>> flags describing the individual codes. This decouples the ACPI implementation
> >>>>> from the firmware implementation.
> >>>>>
> >>>>> The underlying TCG specification is accessible from the following page.
> >>>>>
> >>>>> https://trustedcomputinggroup.org/tcg-physical-presence-interface-specification/
> >>>>>
> >>>>> This patch implements version 1.30.  
> >>>> I have played around with this patch and some modifications to EDK2.
> >>>> Though for EDK2 the question is whether to try to circumvent their
> >>>> current implementation that uses SMM or use SMM. With this patch so far
> >>>> I circumvent it, which is maybe not a good idea.
> >>>>
> >>>> The facts for EDK2's PPI:
> >>>>
> >>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM
> >>>> via an SMI and the PPI code is written into a UEFI variable. For this
> >>>> ACPI uses the memory are at 0xFFFF 0000 to pass parameters from the OS
> >>>> (via ACPI) to SMM. This is declared in ACPI with an OperationRegion() at
> >>>> that address. Once the machine is rebooted, UEFI reads the variable and
> >>>> finds the PPI code and reacts to it.
> >>>>
> >>>>
> >>>> The facts for SeaBIOS:
> >>>> - we cannot use the area at 0xFFFF 0000 since SeaBIOS is also mapped to
> >>>> this address. So we would have to use the PPI memory device's memory
> >>>> region, which is currently at 0xFED4 5000. SeaBIOS doesn't have
> >>>> persistent memory like NVRAM where it stores varaibles. So to pass the
> >>>> PPI code here the OS would invoke ACPI, which in turn would write the
> >>>> PPI code into the PPI memory directly. Upon reboot SeaBIOS would find
> >>>> the PPI code in the PPI memory device and react to it.
> >>>>
> >>>> The PPI device in this patch series allocates 0x400 bytes. 0x200 bytes
> >>>> are used by the OperationRegion() in this patch series. The rest was
> >>>> thought of for future extensions.
> >>>>
> >>>> To allow both firmwares to use PPI, we would need to be able to have the
> >>>> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and
> >>>> 0xFED4 5000 for SeaBIOS, per choice of the firmware. One way to achieve
> >>>> this would be to have the firmwares choose their operation region base
> >>>> address by writing into the PPI memory device at offset 0x200 (for
> >>>> example). A '1' there would mean that we want the OperationRegion() at
> >>>> 0xFFFF 0000, and a '2' would mean at the base address of the PPI device
> >>>> (0xFED4 5000). This could be achieved by declaring a 2nd
> >>>> OperationRegion() in the ACPI code that is located at 0xFED4 5200 and we
> >>>> declare that 1st OperationRegion()'s address based on findings from
> >>>> there. Further, a flags variable at offset 0x201 could indicate whether
> >>>> an SMI is needed to enter SMM or not. Obviously, the ACPI code would
> >>>> become more complex to be able to support both firmwares at the same time.
> >>>> This is a lot of details but I rather post this description before
> >>>> posting more patches. To make these changes and get it to work with at
> >>>> least SeaBIOS is probably fairly easy. But is this acceptable?  
> >>> You could use hw/acpi/vmgenid.c as example,
> >>>
> >>> use following command to instruct firmware to write address back to QEMU:
> >>>
> >>>       bios_linker_loader_write_pointer(linker,
> >>>           TMP_PPI_ADDR_FW_CFG_FILE, 0, sizeof(uint64_t),
> >>>           TPM_PPI_MEM_FW_CFG_FILE, TPM_PPI_MEM_OFFSET);
> >>>
> >>> then when address is written into fwcfg, a callback in QEMU would be called due to
> >>>
> >>>       /* Create a read-write fw_cfg file for Address */
> >>>       fw_cfg_add_file_callback(s, TPM_PPI_ADDR_FW_CFG_FILE, ...);
> >>>
> >>> when CB is called you could map persistent overlay memory region
> >>> (PPI memory device) at address written by firmware.  
> >>  
> >>> As for OperationRegion, you can instruct firmware to patch address
> >>> in AML as well, using bios_linker_loader_add_pointer() linker command.
> >>> what I'd suggest is to use dedicated TPM SSDT table and
> >>> at its start put a DWORD/QWORD variable where address would be patched in
> >>> and then use that variable as argument to OperationRegion
> >>>
> >>>    ssdt = init_aml_allocator();
> >>>    ...
> >>>    addr_offset = table_data->len + build_append_named_dword(ssdt->buf, "PPIA");
> >>>    ...
> >>>    ... aml_operation_region("TPPI", AML_SYSTEM_MEMORY,
> >>>                         aml_name("PPIA"), TPM_PPI_STRUCT_SIZE)
> >>>    ...
> >>>    bios_linker_loader_add_pointer(linker,
> >>>          ACPI_BUILD_TABLE_FILE, addr_offset, sizeof(uint32_t),
> >>>          TPM_PPI_MEM_FW_CFG_FILE, 0);
> >>>
> >>> This way both UEFI and Seabios would use the same implementation and
> >>> work the same way.  
> >> Thanks for this sample code. Though it only applies to how they write
> >> the base address for the OperationRegion() and not the behaviour of the
> >> code when it comes to SMI versus non-SMI.  
> >  From what I've gathered from discussions around the topic
> > is that untrusted guest code writes request into PPI memory
> > and then FW on reboot should act on it somehow (i.e. ask
> > user a terminal id changes are ok).
> >
> > So I'd say, SMI versus non-SMI is a separate issue and we shouldn't
> > mix it with how firmware allocates PPI memory.
> >  
> >>> aml_operation_region("TPPI",..., aml_name("PPIA"), ...)
> >>> might work but it's not per spec, so it would be better to add
> >>> an API similar to build_append_named_dword() but only for
> >>> OperationRegion() which would return its offset within the table.
> >>> And skip on intermediate dword variable.
> >>>
> >>>
> >>> Wrt migration, you'd need to migrate address where PPI memory device
> >>> is mapped and its memory region.  
> >> So today the PPI memory device is located at some address A. If we
> >> decided that we need to move it to address B due to a collision with
> >> another device, then are we going to be able to update the ACPI
> >> OperationRegion base address post-migration to move it from address A to
> >> address B? Or should we refuse the migration ? Keeping it at address A
> >> seems wrong due to the collision.  
> > Rule of the thumb is that HW on src and dst must be the same,
> > which implies the same address.  
> 
> Following this we don't need to migrate the address, do we ? My current 
> implementation doesn't and uses a constant to set the subregion.
see below why migrating address might be necessary
 
> +void tpm_ppi_init_io(TPMPPI *tpmppi, struct MemoryRegion *m, Object *obj)
> +{
> +    memory_region_init_io(&tpmppi->mmio, obj, &tpm_ppi_memory_ops,
> +                          tpmppi, "tpm-ppi-mmio",
> +                          TPM_PPI_ADDR_SIZE);
> +
> +    memory_region_add_subregion(m, TPM_PPI_ADDR_BASE, &tpmppi->mmio);
In QEMU typically it's not acceptable for device to map itself at some
fixed address. (but there are exceptions). Usually mapping in
common address space is done by board that uses device.

> +}
> 
> 
> Here TPM_PPI_ADDR_BASE is set to 0xfed4 5000. Is that wrong and would 
> have to be dynamic?I must admit I haven't spent much thought on EDK2's 
> 0xffff 0000 area and how that would be handled. It may very well have to 
> be yet another region because in the current PPI memory is more data 
> stored than what EDK2 uses in the 0xffff 0000 region.
If you in advance take in account that address might be different
depending on target/board/FW kind or even version. Starting
with flexible allocation makes more sense, since one won't have
to redo implementation once something else beside SeaBIOS
needed to be supported. And there won't be need to introduce
negotiation mechanism on top to allow FW specify which of fixed
addresses to use. With all this building up, a simplistic
fixed address impl. becomes rather complex.

> > Also note that firmware on src is in RAM and it's also
> > migrated including address it's allocated/reported to QEMU on src.
> > So above mismatch nor collision isn't possible.  
> 
> > Though, more important point is that QEMU doesn't have to
> > give a damn (besides migrating this values from src to dst and
> > remapping overlay PPI memory region at that address (PCI code does
> > similar thing for migrated BARs)) about where in RAM this address
> > is allocated (it's upto firmware and should eliminate cross
> > version QEMU migration issues).
> > On new boot dst could get a new firmware which might allocate
> > region somewhere else and it would still work if it's migrated
> > back to the old host.  
> 
> That it can get a region somewhere else would depend on 'what'? Would 
> the firmware decide the address of the PPI device?
Yes, firmware would decide where to put it
For example EDK could write 0xffff0000 into fwcgf file
while SeaBIOS could write there something else.

If bios_linker API is used to allocate region, then that address
is taken from RAM address space that firmware reserves for ACPI
tables (i.e. it doesn't conflict with anything and won't be used
by guest OS as it's reported as reserved in E802 table).

> So the call 
> 'memory_region_add_subregion(m, TPM_PPI_ADDR_BASE, &tpmppi->mmio)' above 
> would have to be called by that callback ?
the callback, attached to file, will map PPI device memory
at written address:
  memory_region_add_subregion(m, fw_reported_ppi_addr, &tpmppi->mmio)

> I don't understand the part why 'it would still work if it's migrated 
> back to the old host.'  Maybe there's more flexibility with moving 
> addresses between migrations than what I currently know how to 
> implement. My understanding is that the base address of the PPI device 
> cannot change and both source and destination QEMU need to have it at 
> the same address. As I point out above, I am working with a hard coded 
> address that may have to be changed (=edited in the .h file) if we ever 
> were to detect a collision with another device.
And that changing 'fixed' address is the problem in case of
cross version migrations, as fixed address implies lockstep
update in both QEMU and firmware.
In case firmware with OLD constant is migrated to newer QEMU
with another address and vice verse, things will break.
Then to overcome issue you would have to migrate address as
device state anyway (but that would work only one way since
older QEMU with simple fixed address didn't have a clue about
it, plus other surrounding compat glue for supporting
old/new/other firmware).

If you go dynamic approach from the begging, you'll have
address as migrated state from the starters and QEMU won't
have any fixed address built in so there won't be any
dependency between QEMU and firmware. It won't matter
whether it's old or new QEMU is used, as firmware decides
how to program TMP/PPI device (allocates address from its
free address pool).

Considering you already have 2 different addresses depending
on firmware and there might be 3rd when ARM comes in
picture, I'm trying to persuade you to use dynamic approach
from the beginning to avoid compatibility/migration issues
in future.

Maybe instead of rewriting PPI series again, you could
just start with a simple prototype that would implement
this allocation logic in QEMU/SeaBIOS/EDK. And once that
is settled, fill it in with TPM specific logic.

PS:
I can guide you wrt usage of bios_liker API if hw/acpi/vmgenid.c
isn't enough of example.

> >>> As for giving up control to from OS (APCI) to firmware that would be
> >>> target depended, one could use writing to SMM port to trigger SMI
> >>> for example and something else in case of ARM or x86 SMM-less design.  
> >> Though to support these different cases, we either need to be able to
> >> generate different ACPI code or make it configurable via some sort of
> >> register. If we cannot get rid of these flag to let the firmware for
> >> example choose between non-SMI and SMI, then do we need to use the above
> >> shown callback mechanisms to avoid a 2nd field that lets one choose the
> >> base address of the PPI memory device?  
> > I'm sorry this is long discussion across several threads and
> > I've lost track of SMI vs SMI-less issue.
> > Could you point it out to me or describe here again what's the
> > problem and why we would need one vs another.  
> 
> See the explanation above. With SeaBIOS we wouldn't need SMM since we 
> wouldn't write the data in anything else than PPI. EDK2 uses SMM to 
> write the data into UEFI variables and ACPI would transition to it via 
> that interrupt.
I think Kevin did nice sum up of design.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 20:17       ` Stefan Berger
@ 2018-02-13 12:57         ` Igor Mammedov
  2018-02-13 13:31           ` Laszlo Ersek
  0 siblings, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-13 12:57 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Kevin O'Connor, marcandre.lureau, lersek, qemu-devel, mst

On Mon, 12 Feb 2018 15:17:21 -0500
Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:

> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:
> > On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:  
> >> I have played around with this patch and some modifications to EDK2. Though
> >> for EDK2 the question is whether to try to circumvent their current
> >> implementation that uses SMM or use SMM. With this patch so far I circumvent
> >> it, which is maybe not a good idea.
> >>
> >> The facts for EDK2's PPI:
> >>
> >> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
> >> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
> >> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
> >> SMM. This is declared in ACPI with an OperationRegion() at that address.
> >> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
> >> and reacts to it.  
> > I'm a bit confused by this.  The top 1M of the first 4G of ram is
> > generally mapped to the flash device on real machines.  Indeed, this
> > is part of the mechanism used to boot an X86 machine - it starts
> > execution from flash at 0xfffffff0.  This is true even on modern
> > machines.
> >
> > So, it seems strange that UEFI is pushing a code through a memory
> > device at 0xffff0000.  I can't see how that would be portable.  Are
> > you sure the memory write to 0xffff0000 is not just a trigger to
> > invoke the SMI?  
> 
> I base this on the code here:
> 
> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
> 
> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)
Is the goal to reuse EDK PPI impl. including ASL?

If it's so, then perhaps we only need to write address into QEMU
and let OVMF to discard PPI SSDT from QEMU.

>     Stefan
> 
> >
> > -Kevin
> >  
> 
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 12:57         ` Igor Mammedov
@ 2018-02-13 13:31           ` Laszlo Ersek
  2018-02-13 14:17             ` Igor Mammedov
  0 siblings, 1 reply; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 13:31 UTC (permalink / raw)
  To: Igor Mammedov, Stefan Berger
  Cc: Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On 02/13/18 13:57, Igor Mammedov wrote:
> On Mon, 12 Feb 2018 15:17:21 -0500
> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> 
>> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:
>>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:  
>>>> I have played around with this patch and some modifications to EDK2. Though
>>>> for EDK2 the question is whether to try to circumvent their current
>>>> implementation that uses SMM or use SMM. With this patch so far I circumvent
>>>> it, which is maybe not a good idea.
>>>>
>>>> The facts for EDK2's PPI:
>>>>
>>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
>>>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
>>>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
>>>> SMM. This is declared in ACPI with an OperationRegion() at that address.
>>>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
>>>> and reacts to it.  
>>> I'm a bit confused by this.  The top 1M of the first 4G of ram is
>>> generally mapped to the flash device on real machines.  Indeed, this
>>> is part of the mechanism used to boot an X86 machine - it starts
>>> execution from flash at 0xfffffff0.  This is true even on modern
>>> machines.
>>>
>>> So, it seems strange that UEFI is pushing a code through a memory
>>> device at 0xffff0000.  I can't see how that would be portable.  Are
>>> you sure the memory write to 0xffff0000 is not just a trigger to
>>> invoke the SMI?  
>>
>> I base this on the code here:
>>
>> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
>>
>> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)
> Is the goal to reuse EDK PPI impl. including ASL?

Right, that is one important question. Some code in edk2 is meant only
as example code (so actual firmware platforms are encouraged to copy &
customize the code, or use similar or dissimilar alternatives). Some
modules are meant for inclusion "as-is" in the platform description
files (~ makefiles). There are so many edk2 modules related to TPM
(several versions of the specs even) that it's hard to say what is meant
for what usage type. (By my earlier count, there are 19 modules.)

> If it's so, then perhaps we only need to write address into QEMU
> and let OVMF to discard PPI SSDT from QEMU.

That's something I would not like. When running on QEMU, it's OK for
some edk2 modules to install their own ACPI tables, but only if they are
"software" tables, such as the IBFT for iSCSI booting, BGRT (boot
graphics table) for the boot logo / splash screen, etc. For ACPI tables
that describe hardware (data tables) or carry out actions related to
hardware (DefinitionBlocks / AML), I much prefer QEMU to generate all
the stuff.

If there is a conflict between hardware-related tables that QEMU
generates and similar tables that pre-exist in edk2, I prefer working
with the edk2 maintainers to customize their subsystems, so that a
concrete firmware platform, such as OvmfPkg and ArmVirtPkg, can
conditionalize / exclude those preexistent ACPI tables, while benefiting
from the rest of the code. Then the ACPI linker/loader client used in
both OvmfPkg and ArmVirtPkg can remain dumb and process & expose
whatever tables QEMU generates.

We could control the AML payload for TPM and/or TPM PPI -- e.g. whether
it should raise an SMI -- via "-device tpm2,smi=on", for example. (Just
making up the syntax, but you know what I mean.)

We could go one step further, "-device tpm2,acpi=[on|off]". acpi=on
would make QEMU generate the TPM related tables (perhaps targeting only
SeaBIOS, if that makes sense), acpi=off would leave the related tables
to the firmware.

This is just speculation on my part; the point is I'd like to avoid any
more "intelligent filtering" regarding ACPI tables in OVMF. What we have
there is terrible enough already.

Thanks
Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 13:31           ` Laszlo Ersek
@ 2018-02-13 14:17             ` Igor Mammedov
  2018-02-13 15:39               ` Laszlo Ersek
  0 siblings, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-13 14:17 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On Tue, 13 Feb 2018 14:31:41 +0100
Laszlo Ersek <lersek@redhat.com> wrote:

> On 02/13/18 13:57, Igor Mammedov wrote:
> > On Mon, 12 Feb 2018 15:17:21 -0500
> > Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> >   
> >> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:  
> >>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:    
> >>>> I have played around with this patch and some modifications to EDK2. Though
> >>>> for EDK2 the question is whether to try to circumvent their current
> >>>> implementation that uses SMM or use SMM. With this patch so far I circumvent
> >>>> it, which is maybe not a good idea.
> >>>>
> >>>> The facts for EDK2's PPI:
> >>>>
> >>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
> >>>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
> >>>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
> >>>> SMM. This is declared in ACPI with an OperationRegion() at that address.
> >>>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
> >>>> and reacts to it.    
> >>> I'm a bit confused by this.  The top 1M of the first 4G of ram is
> >>> generally mapped to the flash device on real machines.  Indeed, this
> >>> is part of the mechanism used to boot an X86 machine - it starts
> >>> execution from flash at 0xfffffff0.  This is true even on modern
> >>> machines.
> >>>
> >>> So, it seems strange that UEFI is pushing a code through a memory
> >>> device at 0xffff0000.  I can't see how that would be portable.  Are
> >>> you sure the memory write to 0xffff0000 is not just a trigger to
> >>> invoke the SMI?    
> >>
> >> I base this on the code here:
> >>
> >> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
> >>
> >> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)  
> > Is the goal to reuse EDK PPI impl. including ASL?  
> 
> Right, that is one important question. Some code in edk2 is meant only
> as example code (so actual firmware platforms are encouraged to copy &
> customize the code, or use similar or dissimilar alternatives). Some
> modules are meant for inclusion "as-is" in the platform description
> files (~ makefiles). There are so many edk2 modules related to TPM
> (several versions of the specs even) that it's hard to say what is meant
> for what usage type. (By my earlier count, there are 19 modules.)
> 
> > If it's so, then perhaps we only need to write address into QEMU
> > and let OVMF to discard PPI SSDT from QEMU.  
> 
> That's something I would not like. When running on QEMU, it's OK for
> some edk2 modules to install their own ACPI tables, but only if they are
> "software" tables, such as the IBFT for iSCSI booting, BGRT (boot
> graphics table) for the boot logo / splash screen, etc. For ACPI tables
> that describe hardware (data tables) or carry out actions related to
> hardware (DefinitionBlocks / AML), I much prefer QEMU to generate all
> the stuff.
> 
> If there is a conflict between hardware-related tables that QEMU
> generates and similar tables that pre-exist in edk2, I prefer working
> with the edk2 maintainers to customize their subsystems, so that a
> concrete firmware platform, such as OvmfPkg and ArmVirtPkg, can
> conditionalize / exclude those preexistent ACPI tables, while benefiting
> from the rest of the code. Then the ACPI linker/loader client used in
> both OvmfPkg and ArmVirtPkg can remain dumb and process & expose
> whatever tables QEMU generates.
> 
> We could control the AML payload for TPM and/or TPM PPI -- e.g. whether
> it should raise an SMI -- via "-device tpm2,smi=on", for example. (Just
> making up the syntax, but you know what I mean.)
> 
> We could go one step further, "-device tpm2,acpi=[on|off]". acpi=on
> would make QEMU generate the TPM related tables (perhaps targeting only
> SeaBIOS, if that makes sense), acpi=off would leave the related tables
> to the firmware.
> 
> This is just speculation on my part; the point is I'd like to avoid any
> more "intelligent filtering" regarding ACPI tables in OVMF. What we have
> there is terrible enough already.
If we could discard EDK's generated tables, it's fine as well,
but we would need to model TPM device model to match EDK's one
(risk here is that implementation goes of sync, if it's not spec
dictated). Then SeaBIOS side could 're-implement' it using
the same set of tables from QEMU.

I wonder if we could somehow detect firmware flavor we are
running on from ASL /wrt [no]using SMI/?
 * I don't really like idea but we can probably detect
   in QEMU if firmware is OVMF or not and generate extra SMI hunk
   based on that.
 * or we could always generate SMI code and probe for some
   EDK specific SSDT code to see in AML to enable it.
   Maybe OVMF could inject such table.

> 
> Thanks
> Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 14:17             ` Igor Mammedov
@ 2018-02-13 15:39               ` Laszlo Ersek
  2018-02-13 16:19                 ` Igor Mammedov
  0 siblings, 1 reply; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 15:39 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On 02/13/18 15:17, Igor Mammedov wrote:
> On Tue, 13 Feb 2018 14:31:41 +0100
> Laszlo Ersek <lersek@redhat.com> wrote:
> 
>> On 02/13/18 13:57, Igor Mammedov wrote:
>>> On Mon, 12 Feb 2018 15:17:21 -0500
>>> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
>>>   
>>>> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:  
>>>>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:    
>>>>>> I have played around with this patch and some modifications to EDK2. Though
>>>>>> for EDK2 the question is whether to try to circumvent their current
>>>>>> implementation that uses SMM or use SMM. With this patch so far I circumvent
>>>>>> it, which is maybe not a good idea.
>>>>>>
>>>>>> The facts for EDK2's PPI:
>>>>>>
>>>>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
>>>>>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
>>>>>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
>>>>>> SMM. This is declared in ACPI with an OperationRegion() at that address.
>>>>>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
>>>>>> and reacts to it.    
>>>>> I'm a bit confused by this.  The top 1M of the first 4G of ram is
>>>>> generally mapped to the flash device on real machines.  Indeed, this
>>>>> is part of the mechanism used to boot an X86 machine - it starts
>>>>> execution from flash at 0xfffffff0.  This is true even on modern
>>>>> machines.
>>>>>
>>>>> So, it seems strange that UEFI is pushing a code through a memory
>>>>> device at 0xffff0000.  I can't see how that would be portable.  Are
>>>>> you sure the memory write to 0xffff0000 is not just a trigger to
>>>>> invoke the SMI?    
>>>>
>>>> I base this on the code here:
>>>>
>>>> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
>>>>
>>>> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)  
>>> Is the goal to reuse EDK PPI impl. including ASL?  
>>
>> Right, that is one important question. Some code in edk2 is meant only
>> as example code (so actual firmware platforms are encouraged to copy &
>> customize the code, or use similar or dissimilar alternatives). Some
>> modules are meant for inclusion "as-is" in the platform description
>> files (~ makefiles). There are so many edk2 modules related to TPM
>> (several versions of the specs even) that it's hard to say what is meant
>> for what usage type. (By my earlier count, there are 19 modules.)
>>
>>> If it's so, then perhaps we only need to write address into QEMU
>>> and let OVMF to discard PPI SSDT from QEMU.  
>>
>> That's something I would not like. When running on QEMU, it's OK for
>> some edk2 modules to install their own ACPI tables, but only if they are
>> "software" tables, such as the IBFT for iSCSI booting, BGRT (boot
>> graphics table) for the boot logo / splash screen, etc. For ACPI tables
>> that describe hardware (data tables) or carry out actions related to
>> hardware (DefinitionBlocks / AML), I much prefer QEMU to generate all
>> the stuff.
>>
>> If there is a conflict between hardware-related tables that QEMU
>> generates and similar tables that pre-exist in edk2, I prefer working
>> with the edk2 maintainers to customize their subsystems, so that a
>> concrete firmware platform, such as OvmfPkg and ArmVirtPkg, can
>> conditionalize / exclude those preexistent ACPI tables, while benefiting
>> from the rest of the code. Then the ACPI linker/loader client used in
>> both OvmfPkg and ArmVirtPkg can remain dumb and process & expose
>> whatever tables QEMU generates.
>>
>> We could control the AML payload for TPM and/or TPM PPI -- e.g. whether
>> it should raise an SMI -- via "-device tpm2,smi=on", for example. (Just
>> making up the syntax, but you know what I mean.)
>>
>> We could go one step further, "-device tpm2,acpi=[on|off]". acpi=on
>> would make QEMU generate the TPM related tables (perhaps targeting only
>> SeaBIOS, if that makes sense), acpi=off would leave the related tables
>> to the firmware.
>>
>> This is just speculation on my part; the point is I'd like to avoid any
>> more "intelligent filtering" regarding ACPI tables in OVMF. What we have
>> there is terrible enough already.
> If we could discard EDK's generated tables, it's fine as well,
> but we would need to model TPM device model to match EDK's one
> (risk here is that implementation goes of sync, if it's not spec
> dictated). Then SeaBIOS side could 're-implement' it using
> the same set of tables from QEMU.
> 
> I wonder if we could somehow detect firmware flavor we are
> running on from ASL /wrt [no]using SMI/?
>  * I don't really like idea but we can probably detect
>    in QEMU if firmware is OVMF or not and generate extra SMI hunk
>    based on that.
>  * or we could always generate SMI code and probe for some
>    EDK specific SSDT code to see in AML to enable it.
>    Maybe OVMF could inject such table.

There are already several QEMU artifacts that are specific to OVMF, but
none of them is a full match for this purpose:

* "-machine smm=on" means that SMM emulation is enabled. However, this
is automatically done for Q35, and it happens regardless of firmware. So
basing the decision on this is no good.

* One (or much more frequently, two) pflash devices usually indicate
OVMF (as opposed to SeaBIOS, which is selected with -L, or with -bios,
or by default). However, the same pflash setup is appropriate for i440fx
machine types too, which lack SMM emulation.

* The "-global driver=cfi.pflash01,property=secure,value=on" flag means
that QEMU should restrict pflash write access to guest code that runs in
SMM. This is required for making the UEFI variable store actually
secure, but a user might have any reason for *not* specifying this flag
(while keeping SMM emulation generally enabled, with "-machine smm=on").
This looks like the closest match, but still not a 100% match.

* OVMF generally negotiates "SMI broadcast", but the user can prevent
that too on the QEMU command line.

... I really think SMM is a red herring here. We've established several
times that the PPI opcodes need no protection. SMM is an implementation
detail in edk2, and it is used *only* because there is no other way for
AML code to enter (invoke) the firmware. In other words, this is simply
a "firmware calling convention" for AML code -- prepare a request buffer
and raise an SMI.

A valid alternative would be if we provided native OS drivers for a new
ACPI device (both for Windows and Linux), and used Notify() in the
TPM-related AML. Then the ACPI code could call back into the native OS
driver, and *that* driver could then use the regular interface to the
UEFI variable services (for example), without having to care about SMM
explicitly. (Note that I'm not actually suggesting this; I'd just like
to show that SMM is accidental here, not inherent.)

Thanks
Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-12 20:49       ` Stefan Berger
@ 2018-02-13 16:16         ` Laszlo Ersek
  2018-02-13 16:34           ` Igor Mammedov
  2018-02-13 19:37           ` Kevin O'Connor
  0 siblings, 2 replies; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 16:16 UTC (permalink / raw)
  To: Stefan Berger, Kevin O'Connor
  Cc: marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/12/18 21:49, Stefan Berger wrote:
> On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:
>>> The PPI device in this patch series allocates 0x400 bytes. 0x200
>>> bytes are
>>> used by the OperationRegion() in this patch series. The rest was
>>> thought of
>>> for future extensions.
>>>
>>> To allow both firmwares to use PPI, we would need to be able to have the
>>> OperationRegion() be flexible and located at 0xFFFF 0000 for EDK2 and
>>> 0xFED4
>>> 5000 for SeaBIOS, per choice of the firmware. One way to achieve this
>>> would
>>> be to have the firmwares choose their operation region base address by
>>> writing into the PPI memory device at offset 0x200 (for example). A '1'
>>> there would mean that we want the OperationRegion() at 0xFFFF 0000,
>>> and a
>>> '2' would mean at the base address of the PPI device (0xFED4 5000). This
>>> could be achieved by declaring a 2nd OperationRegion() in the ACPI
>>> code that
>>> is located at 0xFED4 5200 and we declare that 1st OperationRegion()'s
>>> address based on findings from there. Further, a flags variable at
>>> offset
>>> 0x201 could indicate whether an SMI is needed to enter SMM or not.
>>> Obviously, the ACPI code would become more complex to be able to support
>>> both firmwares at the same time.
>>>
>>> This is a lot of details but I rather post this description before
>>> posting
>>> more patches. To make these changes and get it to work with at least
>>> SeaBIOS
>>> is probably fairly easy. But is this acceptable?
>> I'm not sure I fully understand the goals of the PPI interface.
>> Here's what I understand so far:
>>
>> The TPM specs define some actions that are considered privileged.  An
>> example of this would be disabling the TPM itself.  In order to
>> prevent an attacker from performing these actions without
>> authorization, the TPM specs define a mechanism to assert "physical
>> presence" before the privileged action can be done.  They do this by
>> having the firmware present a menu during early boot that permits
>> these privileged operations, and then the firmware locks the TPM chip
>> so the actions can no longer be done by any software that runs after
>> the firmware.  Thus "physical presence" is asserted by demonstrating
>> one has console access to the machine during early boot.
>>
>> The PPI spec implements a work around for this - presumably some found
>> the enforcement mechanism too onerous.  It allows the OS to provide a
>> request code to the firmware, and on the next boot the firmware will
>> take the requested action before it locks the chip.  Thus allowing the
>> OS to indirectly perform the privileged action even after the chip has
>> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
>> allow users to circumvent the physical presence mechanism (if they
>> choose to).
> 
> Correct.
>>
>> Here's what I understand the proposed implementation involves:
>>
>> 1 - in addition to emulating the TPM device itself, QEMU will also
>>      introduce a virtual memory device with 0x400 bytes.
> Correct.
>>
>> 2 - on first boot the firmware (seabios and uefi) will populate the
>>      memory region created in step 1.  In particular it will fill an
>>      array with the list of request codes it supports.  (Each request
>>      is an 8bit value, the array has 256 entries.)
> Correct. Each firmware would fill out the 256 byte array depending on
> what it supports. The 8 bit values are basically flags and so on.
>> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>>      interface.  This AML code will take the request, find the table
>>      produced in step 1, compare it to the list of accepted requests
>>      produced in step 2, and then place the 8bit request in another
>>      qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
> 
> Correct.
> 
> Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
> therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
> do this.
> 
>> 4 - the OS will signal a reboot, qemu will do its normal reboot logic,
>>      and the firmware will be run again.
>>
>> 5 - the firmware will extract the code written in stage 3, and if the
>>      tpm device has been configured to accept PPI codes from the OS, it
>>      will invoke the requested action.
> 
> SeaBIOS would look into memory to find the code. EDK2 will read the code
> from a UEFI variable.
> 
>> Did I understand the above correctly?
> I think so. With the fine differences between SeaBIOS and EDK2 pointed out.

Here's what I suggest:

Please everyone continue working on this, according to Kevin's &
Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
for now. Generate as much as possible AML in QEMU, using the
linker/loader. If you generate data tables, please don't forget about
the "OVMF SDT header probe suppressor" padding (see vmgenid for
example). Focus on basic TPM enablement (measurements, PCRs) and PPI.
Please ignore the "memory clear request" TPM feature for now.

Also, please focus on TPM2 only.

It's obvious we are unable to target both firmwares at once. So finalize
a *sensible* hardware device model, and we'll drive it one way or
another from OVMF as a second step. "Sensible" means that it should be
dynamically detectable to the firmware, in some way; either via hardware
(MMIO) registers or fw_cfg.

Once everything is working with SeaBIOS and QEMU, and the Windows guest
is happy, I'll do my best to set time aside for the OVMF enablement. If
needed, I'll carve chunks out of the existing SecurityPkg modules and
implement the needed drivers and libraries under OvmfPkg. I fully intend
to remove both (a) the non-volatile UEFI variable storage for the PPI
opcodes, and (b) SMM, from the UEFI picture. There's no reason basic
TPM, and TPM PPI, shouldn't work under OVMF with i440fx, or (later)
under ArmVirtQemu with aarch64/"virt".

I've tried to avoid such major surgery under edk2, but I clearly see now
that there's no other way. Thus far I've lacked both commitment and time
for working on this; from now on I'm only lacking time. Please polish
*and document* the device model and get it 100% functional with SeaBIOS
and Windows; I'll do my best to take care of OVMF. Again, please focus
on measurements / PCRs and PPI, and ignore the "memory clear request"
for now. (I very much hope that Windows does not insist on the latter.)

It's possible that I'll scream for additional hw enlightement under OVMF
later; so I suggest designing in some kind of feature negotiation up-front.

Importantly, please make sure that the TPM *backend* works without using
the TPM hardware of the virtualization host; the backend should be a
full software emulator. Otherwise I won't be able to experiment with OVMF.

I don't want to ignore or reject work that's already been done for edk2
/ OVMF, but I'm still not seeing patches or discussion on edk2-devel.
Plus, as I mentioned above, I fully intend to kill the complicated
*accidental* artifacts, such as non-volatile UEFI variables for PPI
opcode storage, and the consequent SMI requirement for the AML->fw
calling convention. No such calls should be necessary in the first place.

Thanks
Laszlo

>> Separately, is there someone clamoring for PPI support today?  That
>> is, is the goal to implement PPI to make QEMU more spec compliant, or
>> is there someone struggling to perform a particular task today that
>> this support would improve?
> 
> We could defer the implementation of this. My main goal was to to
> support TPM migration in QEMU and the PPI device also needs to be
> migrated as part of TPM migration. So that's why I am looking at PPI now.
> 
>     Stefan
>> Thanks,
>> -Kevin
>>
> 
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 15:39               ` Laszlo Ersek
@ 2018-02-13 16:19                 ` Igor Mammedov
  2018-02-13 16:34                   ` Laszlo Ersek
  0 siblings, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-13 16:19 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On Tue, 13 Feb 2018 16:39:01 +0100
Laszlo Ersek <lersek@redhat.com> wrote:

> On 02/13/18 15:17, Igor Mammedov wrote:
> > On Tue, 13 Feb 2018 14:31:41 +0100
> > Laszlo Ersek <lersek@redhat.com> wrote:
> >   
> >> On 02/13/18 13:57, Igor Mammedov wrote:  
> >>> On Mon, 12 Feb 2018 15:17:21 -0500
> >>> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
> >>>     
> >>>> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:    
> >>>>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:      
> >>>>>> I have played around with this patch and some modifications to EDK2. Though
> >>>>>> for EDK2 the question is whether to try to circumvent their current
> >>>>>> implementation that uses SMM or use SMM. With this patch so far I circumvent
> >>>>>> it, which is maybe not a good idea.
> >>>>>>
> >>>>>> The facts for EDK2's PPI:
> >>>>>>
> >>>>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
> >>>>>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
> >>>>>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
> >>>>>> SMM. This is declared in ACPI with an OperationRegion() at that address.
> >>>>>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
> >>>>>> and reacts to it.      
> >>>>> I'm a bit confused by this.  The top 1M of the first 4G of ram is
> >>>>> generally mapped to the flash device on real machines.  Indeed, this
> >>>>> is part of the mechanism used to boot an X86 machine - it starts
> >>>>> execution from flash at 0xfffffff0.  This is true even on modern
> >>>>> machines.
> >>>>>
> >>>>> So, it seems strange that UEFI is pushing a code through a memory
> >>>>> device at 0xffff0000.  I can't see how that would be portable.  Are
> >>>>> you sure the memory write to 0xffff0000 is not just a trigger to
> >>>>> invoke the SMI?      
> >>>>
> >>>> I base this on the code here:
> >>>>
> >>>> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
> >>>>
> >>>> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)    
> >>> Is the goal to reuse EDK PPI impl. including ASL?    
> >>
> >> Right, that is one important question. Some code in edk2 is meant only
> >> as example code (so actual firmware platforms are encouraged to copy &
> >> customize the code, or use similar or dissimilar alternatives). Some
> >> modules are meant for inclusion "as-is" in the platform description
> >> files (~ makefiles). There are so many edk2 modules related to TPM
> >> (several versions of the specs even) that it's hard to say what is meant
> >> for what usage type. (By my earlier count, there are 19 modules.)
> >>  
> >>> If it's so, then perhaps we only need to write address into QEMU
> >>> and let OVMF to discard PPI SSDT from QEMU.    
> >>
> >> That's something I would not like. When running on QEMU, it's OK for
> >> some edk2 modules to install their own ACPI tables, but only if they are
> >> "software" tables, such as the IBFT for iSCSI booting, BGRT (boot
> >> graphics table) for the boot logo / splash screen, etc. For ACPI tables
> >> that describe hardware (data tables) or carry out actions related to
> >> hardware (DefinitionBlocks / AML), I much prefer QEMU to generate all
> >> the stuff.
> >>
> >> If there is a conflict between hardware-related tables that QEMU
> >> generates and similar tables that pre-exist in edk2, I prefer working
> >> with the edk2 maintainers to customize their subsystems, so that a
> >> concrete firmware platform, such as OvmfPkg and ArmVirtPkg, can
> >> conditionalize / exclude those preexistent ACPI tables, while benefiting
> >> from the rest of the code. Then the ACPI linker/loader client used in
> >> both OvmfPkg and ArmVirtPkg can remain dumb and process & expose
> >> whatever tables QEMU generates.
> >>
> >> We could control the AML payload for TPM and/or TPM PPI -- e.g. whether
> >> it should raise an SMI -- via "-device tpm2,smi=on", for example. (Just
> >> making up the syntax, but you know what I mean.)
> >>
> >> We could go one step further, "-device tpm2,acpi=[on|off]". acpi=on
> >> would make QEMU generate the TPM related tables (perhaps targeting only
> >> SeaBIOS, if that makes sense), acpi=off would leave the related tables
> >> to the firmware.
> >>
> >> This is just speculation on my part; the point is I'd like to avoid any
> >> more "intelligent filtering" regarding ACPI tables in OVMF. What we have
> >> there is terrible enough already.  
> > If we could discard EDK's generated tables, it's fine as well,
> > but we would need to model TPM device model to match EDK's one
> > (risk here is that implementation goes of sync, if it's not spec
> > dictated). Then SeaBIOS side could 're-implement' it using
> > the same set of tables from QEMU.
> > 
> > I wonder if we could somehow detect firmware flavor we are
> > running on from ASL /wrt [no]using SMI/?
> >  * I don't really like idea but we can probably detect
> >    in QEMU if firmware is OVMF or not and generate extra SMI hunk
> >    based on that.
> >  * or we could always generate SMI code and probe for some
> >    EDK specific SSDT code to see in AML to enable it.
> >    Maybe OVMF could inject such table.  
> 
> There are already several QEMU artifacts that are specific to OVMF, but
> none of them is a full match for this purpose:
> 
> * "-machine smm=on" means that SMM emulation is enabled. However, this
> is automatically done for Q35, and it happens regardless of firmware. So
> basing the decision on this is no good.
> 
> * One (or much more frequently, two) pflash devices usually indicate
> OVMF (as opposed to SeaBIOS, which is selected with -L, or with -bios,
> or by default). However, the same pflash setup is appropriate for i440fx
> machine types too, which lack SMM emulation.
> 
> * The "-global driver=cfi.pflash01,property=secure,value=on" flag means
> that QEMU should restrict pflash write access to guest code that runs in
> SMM. This is required for making the UEFI variable store actually
> secure, but a user might have any reason for *not* specifying this flag
> (while keeping SMM emulation generally enabled, with "-machine smm=on").
> This looks like the closest match, but still not a 100% match.
> 
> * OVMF generally negotiates "SMI broadcast", but the user can prevent
> that too on the QEMU command line.
> 
> ... I really think SMM is a red herring here. We've established several
> times that the PPI opcodes need no protection. SMM is an implementation
> detail in edk2, and it is used *only* because there is no other way for
> AML code to enter (invoke) the firmware. In other words, this is simply
> a "firmware calling convention" for AML code -- prepare a request buffer
> and raise an SMI.
> 
> A valid alternative would be if we provided native OS drivers for a new
> ACPI device (both for Windows and Linux), and used Notify() in the
> TPM-related AML. Then the ACPI code could call back into the native OS
> driver, and *that* driver could then use the regular interface to the
> UEFI variable services (for example), without having to care about SMM
> explicitly. (Note that I'm not actually suggesting this; I'd just like
> to show that SMM is accidental here, not inherent.)
Yep, relying on detecting SMM or pflash or ... is not really reliable.

What about my proposal for PPI enabled OVMF to inject an additional
SSDT table with a variable ex:
  firmware_ovmf = 1

then in QEMU generated tables we can probe for it in runtime
when OSPM executes method and trigger SMI if variable exists.


> 
> Thanks
> Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 16:19                 ` Igor Mammedov
@ 2018-02-13 16:34                   ` Laszlo Ersek
  0 siblings, 0 replies; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 16:34 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On 02/13/18 17:19, Igor Mammedov wrote:
> On Tue, 13 Feb 2018 16:39:01 +0100
> Laszlo Ersek <lersek@redhat.com> wrote:
> 
>> On 02/13/18 15:17, Igor Mammedov wrote:
>>> On Tue, 13 Feb 2018 14:31:41 +0100
>>> Laszlo Ersek <lersek@redhat.com> wrote:
>>>   
>>>> On 02/13/18 13:57, Igor Mammedov wrote:  
>>>>> On Mon, 12 Feb 2018 15:17:21 -0500
>>>>> Stefan Berger <stefanb@linux.vnet.ibm.com> wrote:
>>>>>     
>>>>>> On 02/12/2018 02:45 PM, Kevin O'Connor wrote:    
>>>>>>> On Fri, Feb 09, 2018 at 03:19:31PM -0500, Stefan Berger wrote:      
>>>>>>>> I have played around with this patch and some modifications to EDK2. Though
>>>>>>>> for EDK2 the question is whether to try to circumvent their current
>>>>>>>> implementation that uses SMM or use SMM. With this patch so far I circumvent
>>>>>>>> it, which is maybe not a good idea.
>>>>>>>>
>>>>>>>> The facts for EDK2's PPI:
>>>>>>>>
>>>>>>>> - from within the OS a PPI code is submitted to ACPI and ACPI enters SMM via
>>>>>>>> an SMI and the PPI code is written into a UEFI variable. For this ACPI uses
>>>>>>>> the memory are at 0xFFFF 0000 to pass parameters from the OS (via ACPI) to
>>>>>>>> SMM. This is declared in ACPI with an OperationRegion() at that address.
>>>>>>>> Once the machine is rebooted, UEFI reads the variable and finds the PPI code
>>>>>>>> and reacts to it.      
>>>>>>> I'm a bit confused by this.  The top 1M of the first 4G of ram is
>>>>>>> generally mapped to the flash device on real machines.  Indeed, this
>>>>>>> is part of the mechanism used to boot an X86 machine - it starts
>>>>>>> execution from flash at 0xfffffff0.  This is true even on modern
>>>>>>> machines.
>>>>>>>
>>>>>>> So, it seems strange that UEFI is pushing a code through a memory
>>>>>>> device at 0xffff0000.  I can't see how that would be portable.  Are
>>>>>>> you sure the memory write to 0xffff0000 is not just a trigger to
>>>>>>> invoke the SMI?      
>>>>>>
>>>>>> I base this on the code here:
>>>>>>
>>>>>> https://github.com/tianocore/edk2/blob/master/SecurityPkg/Tcg/Tcg2Smm/Tpm.asl#L81
>>>>>>
>>>>>> OperationRegion (TNVS, SystemMemory, 0xFFFF0000, 0xF0)    
>>>>> Is the goal to reuse EDK PPI impl. including ASL?    
>>>>
>>>> Right, that is one important question. Some code in edk2 is meant only
>>>> as example code (so actual firmware platforms are encouraged to copy &
>>>> customize the code, or use similar or dissimilar alternatives). Some
>>>> modules are meant for inclusion "as-is" in the platform description
>>>> files (~ makefiles). There are so many edk2 modules related to TPM
>>>> (several versions of the specs even) that it's hard to say what is meant
>>>> for what usage type. (By my earlier count, there are 19 modules.)
>>>>  
>>>>> If it's so, then perhaps we only need to write address into QEMU
>>>>> and let OVMF to discard PPI SSDT from QEMU.    
>>>>
>>>> That's something I would not like. When running on QEMU, it's OK for
>>>> some edk2 modules to install their own ACPI tables, but only if they are
>>>> "software" tables, such as the IBFT for iSCSI booting, BGRT (boot
>>>> graphics table) for the boot logo / splash screen, etc. For ACPI tables
>>>> that describe hardware (data tables) or carry out actions related to
>>>> hardware (DefinitionBlocks / AML), I much prefer QEMU to generate all
>>>> the stuff.
>>>>
>>>> If there is a conflict between hardware-related tables that QEMU
>>>> generates and similar tables that pre-exist in edk2, I prefer working
>>>> with the edk2 maintainers to customize their subsystems, so that a
>>>> concrete firmware platform, such as OvmfPkg and ArmVirtPkg, can
>>>> conditionalize / exclude those preexistent ACPI tables, while benefiting
>>>> from the rest of the code. Then the ACPI linker/loader client used in
>>>> both OvmfPkg and ArmVirtPkg can remain dumb and process & expose
>>>> whatever tables QEMU generates.
>>>>
>>>> We could control the AML payload for TPM and/or TPM PPI -- e.g. whether
>>>> it should raise an SMI -- via "-device tpm2,smi=on", for example. (Just
>>>> making up the syntax, but you know what I mean.)
>>>>
>>>> We could go one step further, "-device tpm2,acpi=[on|off]". acpi=on
>>>> would make QEMU generate the TPM related tables (perhaps targeting only
>>>> SeaBIOS, if that makes sense), acpi=off would leave the related tables
>>>> to the firmware.
>>>>
>>>> This is just speculation on my part; the point is I'd like to avoid any
>>>> more "intelligent filtering" regarding ACPI tables in OVMF. What we have
>>>> there is terrible enough already.  
>>> If we could discard EDK's generated tables, it's fine as well,
>>> but we would need to model TPM device model to match EDK's one
>>> (risk here is that implementation goes of sync, if it's not spec
>>> dictated). Then SeaBIOS side could 're-implement' it using
>>> the same set of tables from QEMU.
>>>
>>> I wonder if we could somehow detect firmware flavor we are
>>> running on from ASL /wrt [no]using SMI/?
>>>  * I don't really like idea but we can probably detect
>>>    in QEMU if firmware is OVMF or not and generate extra SMI hunk
>>>    based on that.
>>>  * or we could always generate SMI code and probe for some
>>>    EDK specific SSDT code to see in AML to enable it.
>>>    Maybe OVMF could inject such table.  
>>
>> There are already several QEMU artifacts that are specific to OVMF, but
>> none of them is a full match for this purpose:
>>
>> * "-machine smm=on" means that SMM emulation is enabled. However, this
>> is automatically done for Q35, and it happens regardless of firmware. So
>> basing the decision on this is no good.
>>
>> * One (or much more frequently, two) pflash devices usually indicate
>> OVMF (as opposed to SeaBIOS, which is selected with -L, or with -bios,
>> or by default). However, the same pflash setup is appropriate for i440fx
>> machine types too, which lack SMM emulation.
>>
>> * The "-global driver=cfi.pflash01,property=secure,value=on" flag means
>> that QEMU should restrict pflash write access to guest code that runs in
>> SMM. This is required for making the UEFI variable store actually
>> secure, but a user might have any reason for *not* specifying this flag
>> (while keeping SMM emulation generally enabled, with "-machine smm=on").
>> This looks like the closest match, but still not a 100% match.
>>
>> * OVMF generally negotiates "SMI broadcast", but the user can prevent
>> that too on the QEMU command line.
>>
>> ... I really think SMM is a red herring here. We've established several
>> times that the PPI opcodes need no protection. SMM is an implementation
>> detail in edk2, and it is used *only* because there is no other way for
>> AML code to enter (invoke) the firmware. In other words, this is simply
>> a "firmware calling convention" for AML code -- prepare a request buffer
>> and raise an SMI.
>>
>> A valid alternative would be if we provided native OS drivers for a new
>> ACPI device (both for Windows and Linux), and used Notify() in the
>> TPM-related AML. Then the ACPI code could call back into the native OS
>> driver, and *that* driver could then use the regular interface to the
>> UEFI variable services (for example), without having to care about SMM
>> explicitly. (Note that I'm not actually suggesting this; I'd just like
>> to show that SMM is accidental here, not inherent.)
> Yep, relying on detecting SMM or pflash or ... is not really reliable.
> 
> What about my proposal for PPI enabled OVMF to inject an additional
> SSDT table with a variable ex:
>   firmware_ovmf = 1
> 
> then in QEMU generated tables we can probe for it in runtime
> when OSPM executes method and trigger SMI if variable exists.

According to my last email, I don't consider this necessary any longer,
for this concrete case.

Speaking generally (independently of TPM), the above seems simple enough
(for advertising OVMF characteristics to QEMU-generated AML), but it
also looks like a slippery slope. It establishes a new feature
advertisement method, and soon enough, OVMF would have to expose further
variables (maybe even methods!) that QEMU-generated AML code would
interrogate. Let's not go there just yet; let's keep the generated AML
firmware-agnostic for as long as we can.

Thanks,
Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 16:16         ` Laszlo Ersek
@ 2018-02-13 16:34           ` Igor Mammedov
  2018-02-13 17:01             ` Laszlo Ersek
  2018-02-13 19:37           ` Kevin O'Connor
  1 sibling, 1 reply; 35+ messages in thread
From: Igor Mammedov @ 2018-02-13 16:34 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On Tue, 13 Feb 2018 17:16:49 +0100
Laszlo Ersek <lersek@redhat.com> wrote:

[...]
> 
> It's possible that I'll scream for additional hw enlightement under OVMF
> later; so I suggest designing in some kind of feature negotiation up-front.
We can add an extra fwcfg file for that later, when/if it's actually needed.

[...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 16:34           ` Igor Mammedov
@ 2018-02-13 17:01             ` Laszlo Ersek
  0 siblings, 0 replies; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 17:01 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: Stefan Berger, Kevin O'Connor, marcandre.lureau, qemu-devel, mst

On 02/13/18 17:34, Igor Mammedov wrote:
> On Tue, 13 Feb 2018 17:16:49 +0100
> Laszlo Ersek <lersek@redhat.com> wrote:
> 
> [...]
>>
>> It's possible that I'll scream for additional hw enlightement under OVMF
>> later; so I suggest designing in some kind of feature negotiation up-front.
> We can add an extra fwcfg file for that later, when/if it's actually needed.

Works for me.

Thanks
Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 16:16         ` Laszlo Ersek
  2018-02-13 16:34           ` Igor Mammedov
@ 2018-02-13 19:37           ` Kevin O'Connor
  2018-02-13 19:59             ` Laszlo Ersek
  1 sibling, 1 reply; 35+ messages in thread
From: Kevin O'Connor @ 2018-02-13 19:37 UTC (permalink / raw)
  To: Laszlo Ersek
  Cc: Stefan Berger, marcandre.lureau, qemu-devel, mst, Igor Mammedov

On Tue, Feb 13, 2018 at 05:16:49PM +0100, Laszlo Ersek wrote:
> On 02/12/18 21:49, Stefan Berger wrote:
> > On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
> >> I'm not sure I fully understand the goals of the PPI interface.
> >> Here's what I understand so far:
> >>
> >> The TPM specs define some actions that are considered privileged.  An
> >> example of this would be disabling the TPM itself.  In order to
> >> prevent an attacker from performing these actions without
> >> authorization, the TPM specs define a mechanism to assert "physical
> >> presence" before the privileged action can be done.  They do this by
> >> having the firmware present a menu during early boot that permits
> >> these privileged operations, and then the firmware locks the TPM chip
> >> so the actions can no longer be done by any software that runs after
> >> the firmware.  Thus "physical presence" is asserted by demonstrating
> >> one has console access to the machine during early boot.
> >>
> >> The PPI spec implements a work around for this - presumably some found
> >> the enforcement mechanism too onerous.  It allows the OS to provide a
> >> request code to the firmware, and on the next boot the firmware will
> >> take the requested action before it locks the chip.  Thus allowing the
> >> OS to indirectly perform the privileged action even after the chip has
> >> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
> >> allow users to circumvent the physical presence mechanism (if they
> >> choose to).
> > 
> > Correct.
> >>
> >> Here's what I understand the proposed implementation involves:
> >>
> >> 1 - in addition to emulating the TPM device itself, QEMU will also
> >>      introduce a virtual memory device with 0x400 bytes.
> > Correct.
> >>
> >> 2 - on first boot the firmware (seabios and uefi) will populate the
> >>      memory region created in step 1.  In particular it will fill an
> >>      array with the list of request codes it supports.  (Each request
> >>      is an 8bit value, the array has 256 entries.)
> > Correct. Each firmware would fill out the 256 byte array depending on
> > what it supports. The 8 bit values are basically flags and so on.
> >> 3 - QEMU will produce AML code implementing the standard PPI ACPI
> >>      interface.  This AML code will take the request, find the table
> >>      produced in step 1, compare it to the list of accepted requests
> >>      produced in step 2, and then place the 8bit request in another
> >>      qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
> > 
> > Correct.
> > 
> > Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
> > therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
> > do this.
> > 
> >> 4 - the OS will signal a reboot, qemu will do its normal reboot logic,
> >>      and the firmware will be run again.
> >>
> >> 5 - the firmware will extract the code written in stage 3, and if the
> >>      tpm device has been configured to accept PPI codes from the OS, it
> >>      will invoke the requested action.
> > 
> > SeaBIOS would look into memory to find the code. EDK2 will read the code
> > from a UEFI variable.
> > 
> >> Did I understand the above correctly?
> > I think so. With the fine differences between SeaBIOS and EDK2 pointed out.
> 
> Here's what I suggest:
> 
> Please everyone continue working on this, according to Kevin's &
> Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
> for now.

If this were targetted at SeaBIOS, I'd look for a simpler
QEMU/firmware interface.  Something like:

A - QEMU produces AML code implementing the standard PPI ACPI
    interface that generates a request code and stores it in the
    device memory of an existing device (eg, writable fw_cfg or an
    extension field in the existing emulated TPM device).

B - after a reboot the firmware extracts the PPI request code
    (produced in step A) and performs the requested action (if the TPM
    is configured to accept OS generated codes).

That is, skip steps 1 and 2 from the original proposal.

-Kevin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 19:37           ` Kevin O'Connor
@ 2018-02-13 19:59             ` Laszlo Ersek
  2018-02-13 20:29               ` Stefan Berger
  0 siblings, 1 reply; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 19:59 UTC (permalink / raw)
  To: Kevin O'Connor
  Cc: Stefan Berger, marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/13/18 20:37, Kevin O'Connor wrote:
> On Tue, Feb 13, 2018 at 05:16:49PM +0100, Laszlo Ersek wrote:
>> On 02/12/18 21:49, Stefan Berger wrote:
>>> On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
>>>> I'm not sure I fully understand the goals of the PPI interface.
>>>> Here's what I understand so far:
>>>>
>>>> The TPM specs define some actions that are considered privileged.  An
>>>> example of this would be disabling the TPM itself.  In order to
>>>> prevent an attacker from performing these actions without
>>>> authorization, the TPM specs define a mechanism to assert "physical
>>>> presence" before the privileged action can be done.  They do this by
>>>> having the firmware present a menu during early boot that permits
>>>> these privileged operations, and then the firmware locks the TPM chip
>>>> so the actions can no longer be done by any software that runs after
>>>> the firmware.  Thus "physical presence" is asserted by demonstrating
>>>> one has console access to the machine during early boot.
>>>>
>>>> The PPI spec implements a work around for this - presumably some found
>>>> the enforcement mechanism too onerous.  It allows the OS to provide a
>>>> request code to the firmware, and on the next boot the firmware will
>>>> take the requested action before it locks the chip.  Thus allowing the
>>>> OS to indirectly perform the privileged action even after the chip has
>>>> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
>>>> allow users to circumvent the physical presence mechanism (if they
>>>> choose to).
>>>
>>> Correct.
>>>>
>>>> Here's what I understand the proposed implementation involves:
>>>>
>>>> 1 - in addition to emulating the TPM device itself, QEMU will also
>>>>      introduce a virtual memory device with 0x400 bytes.
>>> Correct.
>>>>
>>>> 2 - on first boot the firmware (seabios and uefi) will populate the
>>>>      memory region created in step 1.  In particular it will fill an
>>>>      array with the list of request codes it supports.  (Each request
>>>>      is an 8bit value, the array has 256 entries.)
>>> Correct. Each firmware would fill out the 256 byte array depending on
>>> what it supports. The 8 bit values are basically flags and so on.
>>>> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>>>>      interface.  This AML code will take the request, find the table
>>>>      produced in step 1, compare it to the list of accepted requests
>>>>      produced in step 2, and then place the 8bit request in another
>>>>      qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
>>>
>>> Correct.
>>>
>>> Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
>>> therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
>>> do this.
>>>
>>>> 4 - the OS will signal a reboot, qemu will do its normal reboot logic,
>>>>      and the firmware will be run again.
>>>>
>>>> 5 - the firmware will extract the code written in stage 3, and if the
>>>>      tpm device has been configured to accept PPI codes from the OS, it
>>>>      will invoke the requested action.
>>>
>>> SeaBIOS would look into memory to find the code. EDK2 will read the code
>>> from a UEFI variable.
>>>
>>>> Did I understand the above correctly?
>>> I think so. With the fine differences between SeaBIOS and EDK2 pointed out.
>>
>> Here's what I suggest:
>>
>> Please everyone continue working on this, according to Kevin's &
>> Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
>> for now.
> 
> If this were targetted at SeaBIOS, I'd look for a simpler
> QEMU/firmware interface.  Something like:
> 
> A - QEMU produces AML code implementing the standard PPI ACPI
>     interface that generates a request code and stores it in the
>     device memory of an existing device (eg, writable fw_cfg or an
>     extension field in the existing emulated TPM device).
> 
> B - after a reboot the firmware extracts the PPI request code
>     (produced in step A) and performs the requested action (if the TPM
>     is configured to accept OS generated codes).
> 
> That is, skip steps 1 and 2 from the original proposal.

I think A/B can work fine, as long as
- the firmware can somehow dynamically recognize the device / "register
  block" that the request codes have to be pulled from, and
- QEMU is free to move the device or register block around, from release
  to release, without disturbing migration.

Thanks!
Laszlo

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 19:59             ` Laszlo Ersek
@ 2018-02-13 20:29               ` Stefan Berger
  2018-02-13 21:04                 ` Laszlo Ersek
  2018-02-14 18:39                 ` Kevin O'Connor
  0 siblings, 2 replies; 35+ messages in thread
From: Stefan Berger @ 2018-02-13 20:29 UTC (permalink / raw)
  To: Laszlo Ersek, Kevin O'Connor
  Cc: marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/13/2018 02:59 PM, Laszlo Ersek wrote:
> On 02/13/18 20:37, Kevin O'Connor wrote:
>> On Tue, Feb 13, 2018 at 05:16:49PM +0100, Laszlo Ersek wrote:
>>> On 02/12/18 21:49, Stefan Berger wrote:
>>>> On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
>>>>> I'm not sure I fully understand the goals of the PPI interface.
>>>>> Here's what I understand so far:
>>>>>
>>>>> The TPM specs define some actions that are considered privileged.  An
>>>>> example of this would be disabling the TPM itself.  In order to
>>>>> prevent an attacker from performing these actions without
>>>>> authorization, the TPM specs define a mechanism to assert "physical
>>>>> presence" before the privileged action can be done.  They do this by
>>>>> having the firmware present a menu during early boot that permits
>>>>> these privileged operations, and then the firmware locks the TPM chip
>>>>> so the actions can no longer be done by any software that runs after
>>>>> the firmware.  Thus "physical presence" is asserted by demonstrating
>>>>> one has console access to the machine during early boot.
>>>>>
>>>>> The PPI spec implements a work around for this - presumably some found
>>>>> the enforcement mechanism too onerous.  It allows the OS to provide a
>>>>> request code to the firmware, and on the next boot the firmware will
>>>>> take the requested action before it locks the chip.  Thus allowing the
>>>>> OS to indirectly perform the privileged action even after the chip has
>>>>> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
>>>>> allow users to circumvent the physical presence mechanism (if they
>>>>> choose to).
>>>> Correct.
>>>>> Here's what I understand the proposed implementation involves:
>>>>>
>>>>> 1 - in addition to emulating the TPM device itself, QEMU will also
>>>>>       introduce a virtual memory device with 0x400 bytes.
>>>> Correct.
>>>>> 2 - on first boot the firmware (seabios and uefi) will populate the
>>>>>       memory region created in step 1.  In particular it will fill an
>>>>>       array with the list of request codes it supports.  (Each request
>>>>>       is an 8bit value, the array has 256 entries.)
>>>> Correct. Each firmware would fill out the 256 byte array depending on
>>>> what it supports. The 8 bit values are basically flags and so on.
>>>>> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>>>>>       interface.  This AML code will take the request, find the table
>>>>>       produced in step 1, compare it to the list of accepted requests
>>>>>       produced in step 2, and then place the 8bit request in another
>>>>>       qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
>>>> Correct.
>>>>
>>>> Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
>>>> therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
>>>> do this.
>>>>
>>>>> 4 - the OS will signal a reboot, qemu will do its normal reboot logic,
>>>>>       and the firmware will be run again.
>>>>>
>>>>> 5 - the firmware will extract the code written in stage 3, and if the
>>>>>       tpm device has been configured to accept PPI codes from the OS, it
>>>>>       will invoke the requested action.
>>>> SeaBIOS would look into memory to find the code. EDK2 will read the code
>>>> from a UEFI variable.
>>>>
>>>>> Did I understand the above correctly?
>>>> I think so. With the fine differences between SeaBIOS and EDK2 pointed out.
>>> Here's what I suggest:
>>>
>>> Please everyone continue working on this, according to Kevin's &
>>> Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
>>> for now.
>> If this were targetted at SeaBIOS, I'd look for a simpler
>> QEMU/firmware interface.  Something like:
>>
>> A - QEMU produces AML code implementing the standard PPI ACPI
>>      interface that generates a request code and stores it in the
>>      device memory of an existing device (eg, writable fw_cfg or an
>>      extension field in the existing emulated TPM device).

ACPI code writing into fw_cfg sounds difficult.
I initially had PPI SeaBIOS code write into the TPM TIS device's memory 
into some custom addresses. I'd consider this a hack. Now we have that 
virtual memory device with those 0x400 bytes...

In these 0x400 bytes we have 256 bytes that are used for configuration 
flags describing the supported opcode as you previously described. This 
array allows us to decouple the firmware implementation from the ACPI 
code and we need not hard code what is supported in the firmware inside 
the ACPI code (which would be difficult to do anyway since in QEMU we 
would not what firmware will be started and what PPI opcodes are 
support) and the ppi sysfs entries in Linux for example show exactly 
those PPI opcodes that are supported. The firmware needs to set those 
flags and the firmware knows what it supports.

I hope we can settle that this device is the right path.

>>
>> B - after a reboot the firmware extracts the PPI request code
>>      (produced in step A) and performs the requested action (if the TPM
>>      is configured to accept OS generated codes).
>>
>> That is, skip steps 1 and 2 from the original proposal.
> I think A/B can work fine, as long as
> - the firmware can somehow dynamically recognize the device / "register
>    block" that the request codes have to be pulled from, and

I experimented with what Igor had suggested with the fw_cfg file 
callback and so on.

> - QEMU is free to move the device or register block around, from release
>    to release, without disturbing migration.

I think we should basically limit the firmware to writing two addresses 
into this fw_cfg file:
- SeaBIOS: write back the same address that QEMU suggested in the file 
(currently 0xfed4 5000)
- EDK2: write back 0xffff 0000

No other address would be accepted by QEMU since presumably QEMU knows 
where otherwise address collisions can occur.

>
> Thanks!
> Laszlo
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 20:29               ` Stefan Berger
@ 2018-02-13 21:04                 ` Laszlo Ersek
  2018-02-13 21:32                   ` Stefan Berger
  2018-02-14 18:39                 ` Kevin O'Connor
  1 sibling, 1 reply; 35+ messages in thread
From: Laszlo Ersek @ 2018-02-13 21:04 UTC (permalink / raw)
  To: Stefan Berger, Kevin O'Connor
  Cc: marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/13/18 21:29, Stefan Berger wrote:
> On 02/13/2018 02:59 PM, Laszlo Ersek wrote:
>> On 02/13/18 20:37, Kevin O'Connor wrote:
>>> On Tue, Feb 13, 2018 at 05:16:49PM +0100, Laszlo Ersek wrote:
>>>> On 02/12/18 21:49, Stefan Berger wrote:
>>>>> On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
>>>>>> I'm not sure I fully understand the goals of the PPI interface.
>>>>>> Here's what I understand so far:
>>>>>>
>>>>>> The TPM specs define some actions that are considered privileged.  An
>>>>>> example of this would be disabling the TPM itself.  In order to
>>>>>> prevent an attacker from performing these actions without
>>>>>> authorization, the TPM specs define a mechanism to assert "physical
>>>>>> presence" before the privileged action can be done.  They do this by
>>>>>> having the firmware present a menu during early boot that permits
>>>>>> these privileged operations, and then the firmware locks the TPM chip
>>>>>> so the actions can no longer be done by any software that runs after
>>>>>> the firmware.  Thus "physical presence" is asserted by demonstrating
>>>>>> one has console access to the machine during early boot.
>>>>>>
>>>>>> The PPI spec implements a work around for this - presumably some
>>>>>> found
>>>>>> the enforcement mechanism too onerous.  It allows the OS to provide a
>>>>>> request code to the firmware, and on the next boot the firmware will
>>>>>> take the requested action before it locks the chip.  Thus allowing
>>>>>> the
>>>>>> OS to indirectly perform the privileged action even after the chip
>>>>>> has
>>>>>> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
>>>>>> allow users to circumvent the physical presence mechanism (if they
>>>>>> choose to).
>>>>> Correct.
>>>>>> Here's what I understand the proposed implementation involves:
>>>>>>
>>>>>> 1 - in addition to emulating the TPM device itself, QEMU will also
>>>>>>       introduce a virtual memory device with 0x400 bytes.
>>>>> Correct.
>>>>>> 2 - on first boot the firmware (seabios and uefi) will populate the
>>>>>>       memory region created in step 1.  In particular it will fill an
>>>>>>       array with the list of request codes it supports.  (Each
>>>>>> request
>>>>>>       is an 8bit value, the array has 256 entries.)
>>>>> Correct. Each firmware would fill out the 256 byte array depending on
>>>>> what it supports. The 8 bit values are basically flags and so on.
>>>>>> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>>>>>>       interface.  This AML code will take the request, find the table
>>>>>>       produced in step 1, compare it to the list of accepted requests
>>>>>>       produced in step 2, and then place the 8bit request in another
>>>>>>       qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
>>>>> Correct.
>>>>>
>>>>> Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
>>>>> therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
>>>>> do this.
>>>>>
>>>>>> 4 - the OS will signal a reboot, qemu will do its normal reboot
>>>>>> logic,
>>>>>>       and the firmware will be run again.
>>>>>>
>>>>>> 5 - the firmware will extract the code written in stage 3, and if the
>>>>>>       tpm device has been configured to accept PPI codes from the
>>>>>> OS, it
>>>>>>       will invoke the requested action.
>>>>> SeaBIOS would look into memory to find the code. EDK2 will read the
>>>>> code
>>>>> from a UEFI variable.
>>>>>
>>>>>> Did I understand the above correctly?
>>>>> I think so. With the fine differences between SeaBIOS and EDK2
>>>>> pointed out.
>>>> Here's what I suggest:
>>>>
>>>> Please everyone continue working on this, according to Kevin's &
>>>> Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
>>>> for now.
>>> If this were targetted at SeaBIOS, I'd look for a simpler
>>> QEMU/firmware interface.  Something like:
>>>
>>> A - QEMU produces AML code implementing the standard PPI ACPI
>>>      interface that generates a request code and stores it in the
>>>      device memory of an existing device (eg, writable fw_cfg or an
>>>      extension field in the existing emulated TPM device).
> 
> ACPI code writing into fw_cfg sounds difficult.
> I initially had PPI SeaBIOS code write into the TPM TIS device's memory
> into some custom addresses. I'd consider this a hack. Now we have that
> virtual memory device with those 0x400 bytes...
> 
> In these 0x400 bytes we have 256 bytes that are used for configuration
> flags describing the supported opcode as you previously described. This
> array allows us to decouple the firmware implementation from the ACPI
> code and we need not hard code what is supported in the firmware inside
> the ACPI code (which would be difficult to do anyway since in QEMU we
> would not what firmware will be started and what PPI opcodes are
> support) and the ppi sysfs entries in Linux for example show exactly
> those PPI opcodes that are supported. The firmware needs to set those
> flags and the firmware knows what it supports.
> 
> I hope we can settle that this device is the right path.
> 
>>>
>>> B - after a reboot the firmware extracts the PPI request code
>>>      (produced in step A) and performs the requested action (if the TPM
>>>      is configured to accept OS generated codes).
>>>
>>> That is, skip steps 1 and 2 from the original proposal.
>> I think A/B can work fine, as long as
>> - the firmware can somehow dynamically recognize the device / "register
>>    block" that the request codes have to be pulled from, and
> 
> I experimented with what Igor had suggested with the fw_cfg file
> callback and so on.
> 
>> - QEMU is free to move the device or register block around, from release
>>    to release, without disturbing migration.
> 
> I think we should basically limit the firmware to writing two addresses
> into this fw_cfg file:
> - SeaBIOS: write back the same address that QEMU suggested in the file
> (currently 0xfed4 5000)
> - EDK2: write back 0xffff 0000
Given that I intend to rip the SecurityPkg ASL code out of OVMF's
solution for this, I don't think that the address 0xffff_0000 has any
relevance for OVMF. If the QEMU x86 MMIO space generally has a suitable
gap at 0xfed4_5000 (and I do think it has, even considering pflash &
LAPIC), then just put the register block there.

As long as Igor agrees. :)

Another question just occurred to me. If the guest OS queues some PPI
operations, and then the VM is powered down fully (instead of a reboot),
then we're at liberty to forget the queued PPI ops, right?

Thanks
Laszlo

> 
> No other address would be accepted by QEMU since presumably QEMU knows
> where otherwise address collisions can occur.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 21:04                 ` Laszlo Ersek
@ 2018-02-13 21:32                   ` Stefan Berger
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Berger @ 2018-02-13 21:32 UTC (permalink / raw)
  To: Laszlo Ersek, Kevin O'Connor
  Cc: marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/13/2018 04:04 PM, Laszlo Ersek wrote:
> On 02/13/18 21:29, Stefan Berger wrote:
>> On 02/13/2018 02:59 PM, Laszlo Ersek wrote:
>>> On 02/13/18 20:37, Kevin O'Connor wrote:
>>>> On Tue, Feb 13, 2018 at 05:16:49PM +0100, Laszlo Ersek wrote:
>>>>> On 02/12/18 21:49, Stefan Berger wrote:
>>>>>> On 02/12/2018 03:46 PM, Kevin O'Connor wrote:
>>>>>>> I'm not sure I fully understand the goals of the PPI interface.
>>>>>>> Here's what I understand so far:
>>>>>>>
>>>>>>> The TPM specs define some actions that are considered privileged.  An
>>>>>>> example of this would be disabling the TPM itself.  In order to
>>>>>>> prevent an attacker from performing these actions without
>>>>>>> authorization, the TPM specs define a mechanism to assert "physical
>>>>>>> presence" before the privileged action can be done.  They do this by
>>>>>>> having the firmware present a menu during early boot that permits
>>>>>>> these privileged operations, and then the firmware locks the TPM chip
>>>>>>> so the actions can no longer be done by any software that runs after
>>>>>>> the firmware.  Thus "physical presence" is asserted by demonstrating
>>>>>>> one has console access to the machine during early boot.
>>>>>>>
>>>>>>> The PPI spec implements a work around for this - presumably some
>>>>>>> found
>>>>>>> the enforcement mechanism too onerous.  It allows the OS to provide a
>>>>>>> request code to the firmware, and on the next boot the firmware will
>>>>>>> take the requested action before it locks the chip.  Thus allowing
>>>>>>> the
>>>>>>> OS to indirectly perform the privileged action even after the chip
>>>>>>> has
>>>>>>> been locked.  Thus, the PPI system seems to be an "elaborate hack" to
>>>>>>> allow users to circumvent the physical presence mechanism (if they
>>>>>>> choose to).
>>>>>> Correct.
>>>>>>> Here's what I understand the proposed implementation involves:
>>>>>>>
>>>>>>> 1 - in addition to emulating the TPM device itself, QEMU will also
>>>>>>>        introduce a virtual memory device with 0x400 bytes.
>>>>>> Correct.
>>>>>>> 2 - on first boot the firmware (seabios and uefi) will populate the
>>>>>>>        memory region created in step 1.  In particular it will fill an
>>>>>>>        array with the list of request codes it supports.  (Each
>>>>>>> request
>>>>>>>        is an 8bit value, the array has 256 entries.)
>>>>>> Correct. Each firmware would fill out the 256 byte array depending on
>>>>>> what it supports. The 8 bit values are basically flags and so on.
>>>>>>> 3 - QEMU will produce AML code implementing the standard PPI ACPI
>>>>>>>        interface.  This AML code will take the request, find the table
>>>>>>>        produced in step 1, compare it to the list of accepted requests
>>>>>>>        produced in step 2, and then place the 8bit request in another
>>>>>>>        qemu virtual memory device (at 0xFFFF0000 or 0xFED45000).
>>>>>> Correct.
>>>>>>
>>>>>> Now EDK2 wants to store the code in a UEFI variable in NVRAM. We
>>>>>> therefore would need to trigger an SMI. In SeaBIOS we wouldn't have to
>>>>>> do this.
>>>>>>
>>>>>>> 4 - the OS will signal a reboot, qemu will do its normal reboot
>>>>>>> logic,
>>>>>>>        and the firmware will be run again.
>>>>>>>
>>>>>>> 5 - the firmware will extract the code written in stage 3, and if the
>>>>>>>        tpm device has been configured to accept PPI codes from the
>>>>>>> OS, it
>>>>>>>        will invoke the requested action.
>>>>>> SeaBIOS would look into memory to find the code. EDK2 will read the
>>>>>> code
>>>>>> from a UEFI variable.
>>>>>>
>>>>>>> Did I understand the above correctly?
>>>>>> I think so. With the fine differences between SeaBIOS and EDK2
>>>>>> pointed out.
>>>>> Here's what I suggest:
>>>>>
>>>>> Please everyone continue working on this, according to Kevin's &
>>>>> Stefan's description, but focus on QEMU and SeaBIOS *only*. Ignore edk2
>>>>> for now.
>>>> If this were targetted at SeaBIOS, I'd look for a simpler
>>>> QEMU/firmware interface.  Something like:
>>>>
>>>> A - QEMU produces AML code implementing the standard PPI ACPI
>>>>       interface that generates a request code and stores it in the
>>>>       device memory of an existing device (eg, writable fw_cfg or an
>>>>       extension field in the existing emulated TPM device).
>> ACPI code writing into fw_cfg sounds difficult.
>> I initially had PPI SeaBIOS code write into the TPM TIS device's memory
>> into some custom addresses. I'd consider this a hack. Now we have that
>> virtual memory device with those 0x400 bytes...
>>
>> In these 0x400 bytes we have 256 bytes that are used for configuration
>> flags describing the supported opcode as you previously described. This
>> array allows us to decouple the firmware implementation from the ACPI
>> code and we need not hard code what is supported in the firmware inside
>> the ACPI code (which would be difficult to do anyway since in QEMU we
>> would not what firmware will be started and what PPI opcodes are
>> support) and the ppi sysfs entries in Linux for example show exactly
>> those PPI opcodes that are supported. The firmware needs to set those
>> flags and the firmware knows what it supports.
>>
>> I hope we can settle that this device is the right path.
>>
>>>> B - after a reboot the firmware extracts the PPI request code
>>>>       (produced in step A) and performs the requested action (if the TPM
>>>>       is configured to accept OS generated codes).
>>>>
>>>> That is, skip steps 1 and 2 from the original proposal.
>>> I think A/B can work fine, as long as
>>> - the firmware can somehow dynamically recognize the device / "register
>>>     block" that the request codes have to be pulled from, and
>> I experimented with what Igor had suggested with the fw_cfg file
>> callback and so on.
>>
>>> - QEMU is free to move the device or register block around, from release
>>>     to release, without disturbing migration.
>> I think we should basically limit the firmware to writing two addresses
>> into this fw_cfg file:
>> - SeaBIOS: write back the same address that QEMU suggested in the file
>> (currently 0xfed4 5000)
>> - EDK2: write back 0xffff 0000
> Given that I intend to rip the SecurityPkg ASL code out of OVMF's
> solution for this, I don't think that the address 0xffff_0000 has any
> relevance for OVMF. If the QEMU x86 MMIO space generally has a suitable
> gap at 0xfed4_5000 (and I do think it has, even considering pflash &
> LAPIC), then just put the register block there.
>
> As long as Igor agrees. :)
>
> Another question just occurred to me. If the guest OS queues some PPI
> operations, and then the VM is powered down fully (instead of a reboot),
> then we're at liberty to forget the queued PPI ops, right?

I would say that is implementation-dependent. UEFI wouldn't forget due 
to these UEFI Variables in NVRAM...

     Stefan

>
> Thanks
> Laszlo
>
>> No other address would be accepted by QEMU since presumably QEMU knows
>> where otherwise address collisions can occur.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-13 20:29               ` Stefan Berger
  2018-02-13 21:04                 ` Laszlo Ersek
@ 2018-02-14 18:39                 ` Kevin O'Connor
  2018-02-15 11:52                   ` Stefan Berger
  1 sibling, 1 reply; 35+ messages in thread
From: Kevin O'Connor @ 2018-02-14 18:39 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Laszlo Ersek, marcandre.lureau, qemu-devel, mst, Igor Mammedov

On Tue, Feb 13, 2018 at 03:29:20PM -0500, Stefan Berger wrote:
[...]
> In these 0x400 bytes we have 256 bytes that are used for configuration flags
> describing the supported opcode as you previously described. This array
> allows us to decouple the firmware implementation from the ACPI code and we
> need not hard code what is supported in the firmware inside the ACPI code
> (which would be difficult to do anyway since in QEMU we would not what
> firmware will be started and what PPI opcodes are support) and the ppi sysfs
> entries in Linux for example show exactly those PPI opcodes that are
> supported. The firmware needs to set those flags and the firmware knows what
> it supports.
> 
> I hope we can settle that this device is the right path.

It seems that the primary purpose of the 0x400 virtual device is to
pass information from firmware to QEMU (specifically to pass the list
of supported PPI opcodes to the QEMU generated AML code).  Passing
information between firmware and QEMU is not new territory, and fw_cfg
was specifically built to do this.  I'd prefer to use fw_cfg if we
need to pass information between firmware and QEMU.

That said, I don't see why this list is needed - why not just
implement the same opcodes in both UEFI and SeaBIOS and be done with
it?  The spec defines 22 actions, and most of these are permutations
of 4 basic features (Enable, Activate, Clear, SetOwnerInstall).

[...]
> I initially had PPI SeaBIOS code write into the TPM TIS device's memory into
> some custom addresses. I'd consider this a hack.

Well, sure, it could be considered a hack.  But, it seems to me the
whole PPI spec is a bit of a hack.  If elegance isn't an option,
settle for simplicity?

-Kevin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface
  2018-02-14 18:39                 ` Kevin O'Connor
@ 2018-02-15 11:52                   ` Stefan Berger
  0 siblings, 0 replies; 35+ messages in thread
From: Stefan Berger @ 2018-02-15 11:52 UTC (permalink / raw)
  To: Kevin O'Connor
  Cc: Laszlo Ersek, marcandre.lureau, qemu-devel, mst, Igor Mammedov

On 02/14/2018 01:39 PM, Kevin O'Connor wrote:
> On Tue, Feb 13, 2018 at 03:29:20PM -0500, Stefan Berger wrote:
> [...]
>> In these 0x400 bytes we have 256 bytes that are used for configuration flags
>> describing the supported opcode as you previously described. This array
>> allows us to decouple the firmware implementation from the ACPI code and we
>> need not hard code what is supported in the firmware inside the ACPI code
>> (which would be difficult to do anyway since in QEMU we would not what
>> firmware will be started and what PPI opcodes are support) and the ppi sysfs
>> entries in Linux for example show exactly those PPI opcodes that are
>> supported. The firmware needs to set those flags and the firmware knows what
>> it supports.
>>
>> I hope we can settle that this device is the right path.
> It seems that the primary purpose of the 0x400 virtual device is to
> pass information from firmware to QEMU (specifically to pass the list
> of supported PPI opcodes to the QEMU generated AML code).  Passing
> information between firmware and QEMU is not new territory, and fw_cfg
> was specifically built to do this.  I'd prefer to use fw_cfg if we
> need to pass information between firmware and QEMU.
>
> That said, I don't see why this list is needed - why not just
> implement the same opcodes in both UEFI and SeaBIOS and be done with
> it?  The spec defines 22 actions, and most of these are permutations
> of 4 basic features (Enable, Activate, Clear, SetOwnerInstall).

... which may be a substantial amount of work to implement. There are 
another 23 or so defined for TPM 2, some of which are optional.

>
> [...]
>> I initially had PPI SeaBIOS code write into the TPM TIS device's memory into
>> some custom addresses. I'd consider this a hack.
> Well, sure, it could be considered a hack.  But, it seems to me the
> whole PPI spec is a bit of a hack.  If elegance isn't an option,
> settle for simplicity?
>
> -Kevin
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2018-02-15 11:52 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-16 15:51 [Qemu-devel] [PATCH v2 0/4] Implement Physical Presence interface for TPM 1.2 and 2 Stefan Berger
2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 1/4] tpm: implement virtual memory device for TPM PPI Stefan Berger
2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 2/4] acpi: build QEMU table for PPI virtual memory device Stefan Berger
2018-01-16 16:35   ` Michael S. Tsirkin
2018-01-16 20:42   ` Laszlo Ersek
2018-01-16 21:20     ` Stefan Berger
2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 3/4] acpi: implement aml_lless_equal Stefan Berger
2018-01-16 16:13   ` Michael S. Tsirkin
2018-01-16 15:51 ` [Qemu-devel] [PATCH v2 4/4] acpi: build TPM Physical Presence interface Stefan Berger
2018-02-09 20:19   ` Stefan Berger
2018-02-12 14:27     ` Igor Mammedov
2018-02-12 16:44       ` Stefan Berger
2018-02-12 17:52         ` Igor Mammedov
2018-02-12 18:45           ` Stefan Berger
2018-02-13 12:50             ` Igor Mammedov
2018-02-12 19:45     ` Kevin O'Connor
2018-02-12 20:17       ` Stefan Berger
2018-02-13 12:57         ` Igor Mammedov
2018-02-13 13:31           ` Laszlo Ersek
2018-02-13 14:17             ` Igor Mammedov
2018-02-13 15:39               ` Laszlo Ersek
2018-02-13 16:19                 ` Igor Mammedov
2018-02-13 16:34                   ` Laszlo Ersek
2018-02-12 20:46     ` Kevin O'Connor
2018-02-12 20:49       ` Stefan Berger
2018-02-13 16:16         ` Laszlo Ersek
2018-02-13 16:34           ` Igor Mammedov
2018-02-13 17:01             ` Laszlo Ersek
2018-02-13 19:37           ` Kevin O'Connor
2018-02-13 19:59             ` Laszlo Ersek
2018-02-13 20:29               ` Stefan Berger
2018-02-13 21:04                 ` Laszlo Ersek
2018-02-13 21:32                   ` Stefan Berger
2018-02-14 18:39                 ` Kevin O'Connor
2018-02-15 11:52                   ` Stefan Berger

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.