* [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
@ 2016-01-04 15:23 Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
                   ` (6 more replies)
  0 siblings, 7 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-04 15:23 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanb, mst, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

The following series of patches extends TPM support with an
external TPM that offers a Linux CUSE (character device in userspace)
interface. This TPM lets each VM access its own private vTPM.
The CUSE TPM supports suspend/resume and migration. Much of the
out-of-band functionality needed to control the CUSE TPM is
implemented using ioctls.

This series of patches applies to 38a762fe.

Stefan Berger (4):
  Provide support for the CUSE TPM
  Introduce condition to notify waiters of completed command
  Introduce condition in TPM backend for notification
  Add support for VM suspend/resume for TPM TIS

 hmp.c                        |   6 +
 hw/tpm/tpm_int.h             |   4 +
 hw/tpm/tpm_ioctl.h           | 215 +++++++++++++++++++++++
 hw/tpm/tpm_passthrough.c     | 409 +++++++++++++++++++++++++++++++++++++++++--
 hw/tpm/tpm_tis.c             | 151 +++++++++++++++-
 hw/tpm/tpm_tis.h             |   2 +
 hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++
 hw/tpm/tpm_util.h            |   7 +
 include/sysemu/tpm_backend.h |  12 ++
 qapi-schema.json             |  18 +-
 qemu-options.hx              |  21 ++-
 qmp-commands.hx              |   2 +-
 tpm.c                        |  11 +-
 13 files changed, 1062 insertions(+), 19 deletions(-)
 create mode 100644 hw/tpm/tpm_ioctl.h

-- 
2.4.3


* [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
@ 2016-01-04 15:23 ` Stefan Berger
  2016-01-20 15:00   ` Daniel P. Berrange
  2016-01-20 15:20   ` Michael S. Tsirkin
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 2/4] Introduce condition to notify waiters of completed command Stefan Berger
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-04 15:23 UTC (permalink / raw)
  To: qemu-devel
  Cc: stefanb, mst, Stefan Berger, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

From: Stefan Berger <stefanb@linux.vnet.ibm.com>

Rather than integrating TPM functionality into QEMU directly
using the TPM emulation of libtpms, we now integrate an external
emulated TPM device. This device is expected to implement a Linux
CUSE interface (CUSE = character device in userspace).

QEMU talks to the CUSE TPM reusing much of the passthrough
driver's functionality. For example, TPM commands and responses
are sent to the CUSE TPM using the read()/write() interface.
However, some out-of-band control needs to be done using the CUSE
TPM's ioctls. The CUSE TPM currently defines and implements 15
different ioctls for controlling certain life-cycle aspects of
the emulated TPM. The ioctls can be regarded as a replacement for
the direct function calls that would be used if a TPM emulator
were integrated into QEMU directly.
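As an illustration of the read()/write() command path (this sketch is
not part of the patch; the fd handling is simplified, and the helper
names are invented for this example):

```c
/*
 * Illustrative sketch only: send one TPM command to an already opened
 * CUSE TPM fd and parse the big-endian response header (tag,
 * paramSize, returnCode). Not the actual QEMU implementation.
 */
#include <stdint.h>
#include <unistd.h>

/* read a 32-bit big-endian value from the wire */
static uint32_t be32_get(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

/* returnCode sits at offset 6, after tag (2 bytes) and paramSize (4 bytes) */
static uint32_t tpm_resp_return_code(const uint8_t *resp)
{
    return be32_get(&resp[6]);
}

/* write the command, read back the response; returns response length or -1 */
static ssize_t tpm_cuse_transfer(int fd, const uint8_t *cmd, size_t cmd_len,
                                 uint8_t *resp, size_t resp_size)
{
    if (write(fd, cmd, cmd_len) != (ssize_t)cmd_len) {
        return -1;
    }
    ssize_t n = read(fd, resp, resp_size);
    return (n < 10) ? -1 : n;   /* a valid response has a 10-byte header */
}
```

In the patch itself this write/read exchange lives in
tpm_passthrough_unix_tx_bufs(), shared between the passthrough and
CUSE cases.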

One of the ioctls allows querying a bitmask of supported capabilities;
each set bit indicates that the corresponding capability is implemented.
An include file defining the various ioctls is added to QEMU.
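A sketch of that capability check (mirroring the patch's
TPM_CUSE_IMPLEMENTS_ALL macro; the actual probe is
ioctl(fd, PTM_GET_CAPABILITY, &caps), and the helper name here is
invented for illustration):

```c
/*
 * Sketch of the capability bitmask check: PTM_GET_CAPABILITY fills a
 * ptm_cap (uint64_t) whose set bits mark implemented ioctls. The bit
 * values below are copied from tpm_ioctl.h.
 */
#include <stdint.h>

typedef uint64_t ptm_cap;

#define PTM_CAP_INIT               (1)
#define PTM_CAP_SHUTDOWN           (1 << 1)
#define PTM_CAP_GET_TPMESTABLISHED (1 << 2)
#define PTM_CAP_SET_LOCALITY       (1 << 3)

/* true iff every bit of "required" is present in "have" */
static int cuse_implements_all(ptm_cap have, ptm_cap required)
{
    return (have & required) == required;
}
```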

The CUSE TPM and associated tools can be found here:

https://github.com/stefanberger/swtpm

(please use the latest version)

To use the external CUSE TPM, start it as follows:

# terminate previously started CUSE TPM
/usr/bin/swtpm_ioctl -s /dev/vtpm-test

# start CUSE TPM
/usr/bin/swtpm_cuse -n vtpm-test

QEMU can then be started using the following parameters:

qemu-system-x86_64 \
	[...] \
        -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
        -device tpm-tis,id=tpm0,tpmdev=tpm0 \
	[...]


Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Eric Blake <eblake@redhat.com>
---
 hmp.c                    |   6 ++
 hw/tpm/tpm_int.h         |   1 +
 hw/tpm/tpm_ioctl.h       | 215 +++++++++++++++++++++++++++++++++++++
 hw/tpm/tpm_passthrough.c | 274 +++++++++++++++++++++++++++++++++++++++++++++--
 qapi-schema.json         |  18 +++-
 qemu-options.hx          |  21 +++-
 qmp-commands.hx          |   2 +-
 tpm.c                    |  11 +-
 8 files changed, 530 insertions(+), 18 deletions(-)
 create mode 100644 hw/tpm/tpm_ioctl.h

diff --git a/hmp.c b/hmp.c
index c2b2c16..5f70aac 100644
--- a/hmp.c
+++ b/hmp.c
@@ -863,6 +863,12 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
                            tpo->has_cancel_path ? ",cancel-path=" : "",
                            tpo->has_cancel_path ? tpo->cancel_path : "");
             break;
+        case TPM_TYPE_OPTIONS_KIND_CUSE_TPM: {
+            TPMCuseOptions *cuse = ti->options->u.cuse_tpm;
+
+            monitor_printf(mon, ",path=%s", cuse->path);
+            break;
+        }
         case TPM_TYPE_OPTIONS_KIND__MAX:
             break;
         }
diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
index f2f285b..6b2c9c9 100644
--- a/hw/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -61,6 +61,7 @@ struct tpm_resp_hdr {
 #define TPM_TAG_RSP_AUTH1_COMMAND 0xc5
 #define TPM_TAG_RSP_AUTH2_COMMAND 0xc6
 
+#define TPM_SUCCESS               0
 #define TPM_FAIL                  9
 
 #define TPM_ORD_ContinueSelfTest  0x53
diff --git a/hw/tpm/tpm_ioctl.h b/hw/tpm/tpm_ioctl.h
new file mode 100644
index 0000000..a341e15
--- /dev/null
+++ b/hw/tpm/tpm_ioctl.h
@@ -0,0 +1,215 @@
+/*
+ * tpm_ioctl.h
+ *
+ * (c) Copyright IBM Corporation 2014, 2015.
+ *
+ * This file is licensed under the terms of the 3-clause BSD license
+ */
+#ifndef _TPM_IOCTL_H_
+#define _TPM_IOCTL_H_
+
+#include <stdint.h>
+#include <sys/uio.h>
+#include <sys/types.h>
+#include <sys/ioctl.h>
+
+/*
+ * Every response from a command involving a TPM command execution must hold
+ * the ptm_res as the first element.
+ * ptm_res corresponds to the error code of a command executed by the TPM.
+ */
+
+typedef uint32_t ptm_res;
+
+/* PTM_GET_TPMESTABLISHED: get the establishment bit */
+struct ptm_est {
+    union {
+        struct {
+            ptm_res tpm_result;
+            unsigned char bit; /* TPM established bit */
+        } resp; /* response */
+    } u;
+};
+
+/* PTM_RESET_TPMESTABLISHED: reset establishment bit */
+struct ptm_reset_est {
+    union {
+        struct {
+            uint8_t loc; /* locality to use */
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+        } resp; /* response */
+    } u;
+};
+
+/* PTM_INIT */
+struct ptm_init {
+    union {
+        struct {
+            uint32_t init_flags; /* see definitions below */
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+        } resp; /* response */
+    } u;
+};
+
+/* above init_flags */
+#define PTM_INIT_FLAG_DELETE_VOLATILE (1 << 0)
+    /* delete volatile state file after reading it */
+
+/* PTM_SET_LOCALITY */
+struct ptm_loc {
+    union {
+        struct {
+            uint8_t loc; /* locality to set */
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+        } resp; /* response */
+    } u;
+};
+
+/* PTM_HASH_DATA: hash given data */
+struct ptm_hdata {
+    union {
+        struct {
+            uint32_t length;
+            uint8_t data[4096];
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+        } resp; /* response */
+    } u;
+};
+
+/*
+ * size of the TPM state blob to transfer; x86_64 can handle 8k,
+ * ppc64le only ~7k; keep the response below a 4k page size
+ */
+#define PTM_STATE_BLOB_SIZE (3 * 1024)
+
+/*
+ * The following is the data structure to get state blobs from the TPM.
+ * If the size of the state blob exceeds the PTM_STATE_BLOB_SIZE, multiple reads
+ * with this ioctl and with adjusted offset are necessary. All bytes
+ * must be transferred and the transfer is done once the last byte has been
+ * returned.
+ * It is possible to use the read() interface for reading the data; however,
+ * the first bytes of the state blob will be part of the response to the ioctl();
+ * a subsequent read() is only necessary if the total length (totlength) exceeds
+ * the number of received bytes. seek() is not supported.
+ */
+struct ptm_getstate {
+    union {
+        struct {
+            uint32_t state_flags; /* may be: PTM_STATE_FLAG_DECRYPTED */
+            uint32_t type;        /* which blob to pull */
+            uint32_t offset;      /* offset from where to read */
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+            uint32_t state_flags; /* may be: PTM_STATE_FLAG_ENCRYPTED */
+            uint32_t totlength;   /* total length that will be transferred */
+            uint32_t length;      /* number of bytes in following buffer */
+            uint8_t  data[PTM_STATE_BLOB_SIZE];
+        } resp; /* response */
+    } u;
+};
+
+/* TPM state blob types */
+#define PTM_BLOB_TYPE_PERMANENT  1
+#define PTM_BLOB_TYPE_VOLATILE   2
+#define PTM_BLOB_TYPE_SAVESTATE  3
+
+/* state_flags above : */
+#define PTM_STATE_FLAG_DECRYPTED     1 /* on input:  get decrypted state */
+#define PTM_STATE_FLAG_ENCRYPTED     2 /* on output: state is encrypted */
+
+/*
+ * The following is the data structure to set state blobs in the TPM.
+ * If the size of the state blob exceeds the PTM_STATE_BLOB_SIZE, multiple
+ * 'writes' using this ioctl are necessary. The last packet is indicated
+ * by the length being smaller than the PTM_STATE_BLOB_SIZE.
+ * The very first packet may have a length indicator of '0' enabling
+ * a write() with all the bytes from a buffer. If the write() interface
+ * is used, a final ioctl with a non-full buffer must be made to indicate
+ * that all data were transferred (a write with 0 bytes would not work).
+ */
+struct ptm_setstate {
+    union {
+        struct {
+            uint32_t state_flags; /* may be PTM_STATE_FLAG_ENCRYPTED */
+            uint32_t type;        /* which blob to set */
+            uint32_t length;      /* length of the data;
+                                     use 0 on the first packet to
+                                     transfer using write() */
+            uint8_t data[PTM_STATE_BLOB_SIZE];
+        } req; /* request */
+        struct {
+            ptm_res tpm_result;
+        } resp; /* response */
+    } u;
+};
+
+/*
+ * PTM_GET_CONFIG: Data structure to get runtime configuration information
+ * such as which keys are applied.
+ */
+struct ptm_getconfig {
+    union {
+        struct {
+            ptm_res tpm_result;
+            uint32_t flags;
+        } resp; /* response */
+    } u;
+};
+
+#define PTM_CONFIG_FLAG_FILE_KEY        0x1
+#define PTM_CONFIG_FLAG_MIGRATION_KEY   0x2
+
+
+typedef uint64_t ptm_cap;
+typedef struct ptm_est ptm_est;
+typedef struct ptm_reset_est ptm_reset_est;
+typedef struct ptm_loc ptm_loc;
+typedef struct ptm_hdata ptm_hdata;
+typedef struct ptm_init ptm_init;
+typedef struct ptm_getstate ptm_getstate;
+typedef struct ptm_setstate ptm_setstate;
+typedef struct ptm_getconfig ptm_getconfig;
+
+/* capability flags returned by PTM_GET_CAPABILITY */
+#define PTM_CAP_INIT               (1)
+#define PTM_CAP_SHUTDOWN           (1<<1)
+#define PTM_CAP_GET_TPMESTABLISHED (1<<2)
+#define PTM_CAP_SET_LOCALITY       (1<<3)
+#define PTM_CAP_HASHING            (1<<4)
+#define PTM_CAP_CANCEL_TPM_CMD     (1<<5)
+#define PTM_CAP_STORE_VOLATILE     (1<<6)
+#define PTM_CAP_RESET_TPMESTABLISHED (1<<7)
+#define PTM_CAP_GET_STATEBLOB      (1<<8)
+#define PTM_CAP_SET_STATEBLOB      (1<<9)
+#define PTM_CAP_STOP               (1<<10)
+#define PTM_CAP_GET_CONFIG         (1<<11)
+
+enum {
+    PTM_GET_CAPABILITY     = _IOR('P', 0, ptm_cap),
+    PTM_INIT               = _IOWR('P', 1, ptm_init),
+    PTM_SHUTDOWN           = _IOR('P', 2, ptm_res),
+    PTM_GET_TPMESTABLISHED = _IOR('P', 3, ptm_est),
+    PTM_SET_LOCALITY       = _IOWR('P', 4, ptm_loc),
+    PTM_HASH_START         = _IOR('P', 5, ptm_res),
+    PTM_HASH_DATA          = _IOWR('P', 6, ptm_hdata),
+    PTM_HASH_END           = _IOR('P', 7, ptm_res),
+    PTM_CANCEL_TPM_CMD     = _IOR('P', 8, ptm_res),
+    PTM_STORE_VOLATILE     = _IOR('P', 9, ptm_res),
+    PTM_RESET_TPMESTABLISHED = _IOWR('P', 10, ptm_reset_est),
+    PTM_GET_STATEBLOB      = _IOWR('P', 11, ptm_getstate),
+    PTM_SET_STATEBLOB      = _IOWR('P', 12, ptm_setstate),
+    PTM_STOP               = _IOR('P', 13, ptm_res),
+    PTM_GET_CONFIG         = _IOR('P', 14, ptm_getconfig),
+};
+
+#endif /* _TPM_IOCTL_H_ */
diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
index be160c1..a4f4fe0 100644
--- a/hw/tpm/tpm_passthrough.c
+++ b/hw/tpm/tpm_passthrough.c
@@ -33,6 +33,7 @@
 #include "sysemu/tpm_backend_int.h"
 #include "tpm_tis.h"
 #include "tpm_util.h"
+#include "tpm_ioctl.h"
 
 #define DEBUG_TPM 0
 
@@ -45,6 +46,7 @@
 #define TYPE_TPM_PASSTHROUGH "tpm-passthrough"
 #define TPM_PASSTHROUGH(obj) \
     OBJECT_CHECK(TPMPassthruState, (obj), TYPE_TPM_PASSTHROUGH)
+#define TYPE_TPM_CUSE "tpm-cuse"
 
 static const TPMDriverOps tpm_passthrough_driver;
 
@@ -71,12 +73,18 @@ struct TPMPassthruState {
     bool had_startup_error;
 
     TPMVersion tpm_version;
+    ptm_cap cuse_cap; /* capabilities of the CUSE TPM */
+    uint8_t cur_locty_number; /* last set locality */
 };
 
 typedef struct TPMPassthruState TPMPassthruState;
 
 #define TPM_PASSTHROUGH_DEFAULT_DEVICE "/dev/tpm0"
 
+#define TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt) (tpm_pt->cuse_cap != 0)
+
+#define TPM_CUSE_IMPLEMENTS_ALL(S, cap) (((S)->cuse_cap & (cap)) == (cap))
+
 /* functions */
 
 static void tpm_passthrough_cancel_cmd(TPMBackend *tb);
@@ -123,7 +131,28 @@ static bool tpm_passthrough_is_selftest(const uint8_t *in, uint32_t in_len)
     return false;
 }
 
+static int tpm_passthrough_set_locality(TPMPassthruState *tpm_pt,
+                                        uint8_t locty_number)
+{
+    ptm_loc loc;
+
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        if (tpm_pt->cur_locty_number != locty_number) {
+            loc.u.req.loc = locty_number;
+            if (ioctl(tpm_pt->tpm_fd, PTM_SET_LOCALITY, &loc) < 0) {
+                error_report("tpm_cuse: could not set locality on "
+                             "CUSE TPM: %s",
+                             strerror(errno));
+                return -1;
+            }
+            tpm_pt->cur_locty_number = locty_number;
+        }
+    }
+    return 0;
+}
+
 static int tpm_passthrough_unix_tx_bufs(TPMPassthruState *tpm_pt,
+                                        uint8_t locality_number,
                                         const uint8_t *in, uint32_t in_len,
                                         uint8_t *out, uint32_t out_len,
                                         bool *selftest_done)
@@ -132,6 +161,11 @@ static int tpm_passthrough_unix_tx_bufs(TPMPassthruState *tpm_pt,
     bool is_selftest;
     const struct tpm_resp_hdr *hdr;
 
+    ret = tpm_passthrough_set_locality(tpm_pt, locality_number);
+    if (ret < 0) {
+        goto err_exit;
+    }
+
     tpm_pt->tpm_op_canceled = false;
     tpm_pt->tpm_executing = true;
     *selftest_done = false;
@@ -182,10 +216,12 @@ err_exit:
 }
 
 static int tpm_passthrough_unix_transfer(TPMPassthruState *tpm_pt,
+                                         uint8_t locality_number,
                                          const TPMLocality *locty_data,
                                          bool *selftest_done)
 {
     return tpm_passthrough_unix_tx_bufs(tpm_pt,
+                                        locality_number,
                                         locty_data->w_buffer.buffer,
                                         locty_data->w_offset,
                                         locty_data->r_buffer.buffer,
@@ -206,6 +242,7 @@ static void tpm_passthrough_worker_thread(gpointer data,
     switch (cmd) {
     case TPM_BACKEND_CMD_PROCESS_CMD:
         tpm_passthrough_unix_transfer(tpm_pt,
+                                      thr_parms->tpm_state->locty_number,
                                       thr_parms->tpm_state->locty_data,
                                       &selftest_done);
 
@@ -222,6 +259,93 @@ static void tpm_passthrough_worker_thread(gpointer data,
 }
 
 /*
+ * Gracefully shut down the external CUSE TPM
+ */
+static void tpm_passthrough_shutdown(TPMPassthruState *tpm_pt)
+{
+    ptm_res res;
+
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        if (ioctl(tpm_pt->tpm_fd, PTM_SHUTDOWN, &res) < 0) {
+            error_report("tpm_cuse: Could not cleanly shut down "
+                         "the CUSE TPM: %s",
+                         strerror(errno));
+        }
+    }
+}
+
+/*
+ * Probe for the CUSE TPM by sending an ioctl() requesting its
+ * capability flags.
+ */
+static int tpm_passthrough_cuse_probe(TPMPassthruState *tpm_pt)
+{
+    int rc = 0;
+
+    if (ioctl(tpm_pt->tpm_fd, PTM_GET_CAPABILITY, &tpm_pt->cuse_cap) < 0) {
+        error_report("Error: CUSE TPM was requested, but probing failed");
+        rc = -1;
+    }
+
+    return rc;
+}
+
+static int tpm_passthrough_cuse_check_caps(TPMPassthruState *tpm_pt)
+{
+    int rc = 0;
+    ptm_cap caps = 0;
+    const char *tpm = NULL;
+
+    /* check for min. required capabilities */
+    switch (tpm_pt->tpm_version) {
+    case TPM_VERSION_1_2:
+        caps = PTM_CAP_INIT | PTM_CAP_SHUTDOWN | PTM_CAP_GET_TPMESTABLISHED |
+               PTM_CAP_SET_LOCALITY;
+        tpm = "1.2";
+        break;
+    case TPM_VERSION_2_0:
+        caps = PTM_CAP_INIT | PTM_CAP_SHUTDOWN | PTM_CAP_GET_TPMESTABLISHED |
+               PTM_CAP_SET_LOCALITY | PTM_CAP_RESET_TPMESTABLISHED;
+        tpm = "2";
+        break;
+    case TPM_VERSION_UNSPEC:
+        error_report("tpm_cuse: %s: TPM version has not been set",
+                     __func__);
+        return -1;
+    }
+
+    if (!TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, caps)) {
+        error_report("tpm_cuse: TPM does not implement minimum set of required "
+                     "capabilities for TPM %s (0x%x)", tpm, (int)caps);
+        rc = -1;
+    }
+
+    return rc;
+}
+
+/*
+ * Initialize the external CUSE TPM
+ */
+static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt)
+{
+    int rc = 0;
+    ptm_init init = {
+        .u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE,
+    };
+
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        if (ioctl(tpm_pt->tpm_fd, PTM_INIT, &init) < 0) {
+            error_report("tpm_cuse: Detected CUSE TPM but could not "
+                         "send INIT: %s",
+                         strerror(errno));
+            rc = -1;
+        }
+    }
+
+    return rc;
+}
+
+/*
  * Start the TPM (thread). If it had been started before, then terminate
  * and start it again.
  */
@@ -236,6 +360,8 @@ static int tpm_passthrough_startup_tpm(TPMBackend *tb)
                               tpm_passthrough_worker_thread,
                               &tpm_pt->tpm_thread_params);
 
+    tpm_passthrough_cuse_init(tpm_pt);
+
     return 0;
 }
 
@@ -266,14 +392,43 @@ static int tpm_passthrough_init(TPMBackend *tb, TPMState *s,
 
 static bool tpm_passthrough_get_tpm_established_flag(TPMBackend *tb)
 {
+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+    ptm_est est;
+
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        if (ioctl(tpm_pt->tpm_fd, PTM_GET_TPMESTABLISHED, &est) < 0) {
+            error_report("tpm_cuse: Could not get the TPM established "
+                         "flag from the CUSE TPM: %s",
+                         strerror(errno));
+            return false;
+        }
+        return (est.u.resp.bit != 0);
+    }
     return false;
 }
 
 static int tpm_passthrough_reset_tpm_established_flag(TPMBackend *tb,
                                                       uint8_t locty)
 {
+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+    int rc = 0;
+    ptm_reset_est ptmreset_est;
+
     /* only a TPM 2.0 will support this */
-    return 0;
+    if (tpm_pt->tpm_version == TPM_VERSION_2_0) {
+        if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+            ptmreset_est.u.req.loc = tpm_pt->cur_locty_number;
+
+            if (ioctl(tpm_pt->tpm_fd, PTM_RESET_TPMESTABLISHED,
+                      &ptmreset_est) < 0) {
+                error_report("tpm_cuse: Could not reset the TPM "
+                             "established flag on the CUSE TPM: %s",
+                             strerror(errno));
+                rc = -1;
+            }
+        }
+    }
+    return rc;
 }
 
 static bool tpm_passthrough_get_startup_error(TPMBackend *tb)
@@ -304,7 +459,8 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
 static void tpm_passthrough_cancel_cmd(TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
-    int n;
+    ptm_res res;
+    static bool error_printed;
 
     /*
      * As of Linux 3.7 the tpm_tis driver does not properly cancel
@@ -313,17 +469,34 @@ static void tpm_passthrough_cancel_cmd(TPMBackend *tb)
      * command, e.g., a command executed on the host.
      */
     if (tpm_pt->tpm_executing) {
-        if (tpm_pt->cancel_fd >= 0) {
-            n = write(tpm_pt->cancel_fd, "-", 1);
-            if (n != 1) {
-                error_report("Canceling TPM command failed: %s",
-                             strerror(errno));
-            } else {
-                tpm_pt->tpm_op_canceled = true;
+        if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+            if (TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, PTM_CAP_CANCEL_TPM_CMD)) {
+                if (ioctl(tpm_pt->tpm_fd, PTM_CANCEL_TPM_CMD, &res) < 0) {
+                    error_report("tpm_cuse: Could not cancel command on "
+                                 "CUSE TPM: %s",
+                                 strerror(errno));
+                } else if (res != TPM_SUCCESS) {
+                    if (!error_printed) {
+                        error_report("TPM error code from command "
+                                     "cancellation of CUSE TPM: 0x%x", res);
+                        error_printed = true;
+                    }
+                } else {
+                    tpm_pt->tpm_op_canceled = true;
+                }
             }
         } else {
-            error_report("Cannot cancel TPM command due to missing "
-                         "TPM sysfs cancel entry");
+            if (tpm_pt->cancel_fd >= 0) {
+                if (write(tpm_pt->cancel_fd, "-", 1) != 1) {
+                    error_report("Canceling TPM command failed: %s",
+                                 strerror(errno));
+                } else {
+                    tpm_pt->tpm_op_canceled = true;
+                }
+            } else {
+                error_report("Cannot cancel TPM command due to missing "
+                             "TPM sysfs cancel entry");
+            }
         }
     }
 }
@@ -353,6 +526,11 @@ static int tpm_passthrough_open_sysfs_cancel(TPMBackend *tb)
     char *dev;
     char path[PATH_MAX];
 
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        /* not needed, but so we have a fd */
+        return qemu_open("/dev/null", O_WRONLY);
+    }
+
     if (tb->cancel_path) {
         fd = qemu_open(tb->cancel_path, O_WRONLY);
         if (fd < 0) {
@@ -387,12 +565,22 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
     const char *value;
+    bool have_cuse = false;
+
+    value = qemu_opt_get(opts, "type");
+    if (value != NULL && !strcmp("cuse-tpm", value)) {
+        have_cuse = true;
+    }
 
     value = qemu_opt_get(opts, "cancel-path");
     tb->cancel_path = g_strdup(value);
 
     value = qemu_opt_get(opts, "path");
     if (!value) {
+        if (have_cuse) {
+            error_report("Missing path to access CUSE TPM");
+            goto err_free_parameters;
+        }
         value = TPM_PASSTHROUGH_DEFAULT_DEVICE;
     }
 
@@ -407,15 +595,36 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
         goto err_free_parameters;
     }
 
+    tpm_pt->cur_locty_number = ~0;
+
+    if (have_cuse) {
+        if (tpm_passthrough_cuse_probe(tpm_pt)) {
+            goto err_close_tpmdev;
+        }
+        /* init TPM for probing */
+        if (tpm_passthrough_cuse_init(tpm_pt)) {
+            goto err_close_tpmdev;
+        }
+    }
+
     if (tpm_util_test_tpmdev(tpm_pt->tpm_fd, &tpm_pt->tpm_version)) {
         error_report("'%s' is not a TPM device.",
                      tpm_pt->tpm_dev);
         goto err_close_tpmdev;
     }
 
+    if (have_cuse) {
+        if (tpm_passthrough_cuse_check_caps(tpm_pt)) {
+            goto err_close_tpmdev;
+        }
+    }
+
+
     return 0;
 
  err_close_tpmdev:
+    tpm_passthrough_shutdown(tpm_pt);
+
     qemu_close(tpm_pt->tpm_fd);
     tpm_pt->tpm_fd = -1;
 
@@ -466,6 +675,8 @@ static void tpm_passthrough_destroy(TPMBackend *tb)
 
     tpm_backend_thread_end(&tpm_pt->tbt);
 
+    tpm_passthrough_shutdown(tpm_pt);
+
     qemu_close(tpm_pt->tpm_fd);
     qemu_close(tpm_pt->cancel_fd);
 
@@ -539,3 +750,44 @@ static void tpm_passthrough_register(void)
 }
 
 type_init(tpm_passthrough_register)
+
+/* CUSE TPM */
+static const char *tpm_passthrough_cuse_create_desc(void)
+{
+    return "CUSE TPM backend driver";
+}
+
+static const TPMDriverOps tpm_cuse_driver = {
+    .type                     = TPM_TYPE_CUSE_TPM,
+    .opts                     = tpm_passthrough_cmdline_opts,
+    .desc                     = tpm_passthrough_cuse_create_desc,
+    .create                   = tpm_passthrough_create,
+    .destroy                  = tpm_passthrough_destroy,
+    .init                     = tpm_passthrough_init,
+    .startup_tpm              = tpm_passthrough_startup_tpm,
+    .realloc_buffer           = tpm_passthrough_realloc_buffer,
+    .reset                    = tpm_passthrough_reset,
+    .had_startup_error        = tpm_passthrough_get_startup_error,
+    .deliver_request          = tpm_passthrough_deliver_request,
+    .cancel_cmd               = tpm_passthrough_cancel_cmd,
+    .get_tpm_established_flag = tpm_passthrough_get_tpm_established_flag,
+    .reset_tpm_established_flag = tpm_passthrough_reset_tpm_established_flag,
+    .get_tpm_version          = tpm_passthrough_get_tpm_version,
+};
+
+static const TypeInfo tpm_cuse_info = {
+    .name = TYPE_TPM_CUSE,
+    .parent = TYPE_TPM_BACKEND,
+    .instance_size = sizeof(TPMPassthruState),
+    .class_init = tpm_passthrough_class_init,
+    .instance_init = tpm_passthrough_inst_init,
+    .instance_finalize = tpm_passthrough_inst_finalize,
+};
+
+static void tpm_cuse_register(void)
+{
+    type_register_static(&tpm_cuse_info);
+    tpm_register_driver(&tpm_cuse_driver);
+}
+
+type_init(tpm_cuse_register)
diff --git a/qapi-schema.json b/qapi-schema.json
index 2e31733..e0ef212 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3335,10 +3335,12 @@
 # An enumeration of TPM types
 #
 # @passthrough: TPM passthrough type
+# @cuse-tpm: CUSE TPM type
+#            Since: 2.6
 #
 # Since: 1.5
 ##
-{ 'enum': 'TpmType', 'data': [ 'passthrough' ] }
+{ 'enum': 'TpmType', 'data': [ 'passthrough', 'cuse-tpm' ] }
 
 ##
 # @query-tpm-types:
@@ -3367,6 +3369,17 @@
                                              '*cancel-path' : 'str'} }
 
 ##
+# @TPMCuseOptions:
+#
+# Information about the CUSE TPM type
+#
+# @path: string describing the path used for accessing the TPM device
+#
+# Since: 2.6
+##
+{ 'struct': 'TPMCuseOptions', 'data': { 'path' : 'str'}}
+
+##
 # @TpmTypeOptions:
 #
 # A union referencing different TPM backend types' configuration options
@@ -3376,7 +3389,8 @@
 # Since: 1.5
 ##
 { 'union': 'TpmTypeOptions',
-   'data': { 'passthrough' : 'TPMPassthroughOptions' } }
+   'data': { 'passthrough' : 'TPMPassthroughOptions',
+             'cuse-tpm' : 'TPMCuseOptions' } }
 
 ##
 # @TpmInfo:
diff --git a/qemu-options.hx b/qemu-options.hx
index 215d00d..6ea3e10 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2650,7 +2650,10 @@ DEF("tpmdev", HAS_ARG, QEMU_OPTION_tpmdev, \
     "-tpmdev passthrough,id=id[,path=path][,cancel-path=path]\n"
     "                use path to provide path to a character device; default is /dev/tpm0\n"
     "                use cancel-path to provide path to TPM's cancel sysfs entry; if\n"
-    "                not provided it will be searched for in /sys/class/misc/tpm?/device\n",
+    "                not provided it will be searched for in /sys/class/misc/tpm?/device\n"
+    "-tpmdev cuse-tpm,id=id,path=path\n"
+    "                use path to provide path to a character device to talk to the\n"
+    "                TPM emulator providing a CUSE interface\n",
     QEMU_ARCH_ALL)
 STEXI
 
@@ -2659,8 +2662,8 @@ The general form of a TPM device option is:
 
 @item -tpmdev @var{backend} ,id=@var{id} [,@var{options}]
 @findex -tpmdev
-Backend type must be:
-@option{passthrough}.
+Backend type must be one of the following:
+@option{passthrough}, @option{cuse-tpm}.
 
 The specific backend type will determine the applicable options.
 The @code{-tpmdev} option creates the TPM backend and requires a
@@ -2710,6 +2713,18 @@ To create a passthrough TPM use the following two options:
 Note that the @code{-tpmdev} id is @code{tpm0} and is referenced by
 @code{tpmdev=tpm0} in the device option.
 
+@item -tpmdev cuse-tpm,id=@var{id},path=@var{path}
+
+(Linux-host only) Enable access to a TPM emulator with a CUSE interface.
+
+@option{path} specifies the path to the CUSE TPM character device.
+
+To create a backend device accessing the CUSE TPM emulator using /dev/vtpm
+use the following two options:
+@example
+-tpmdev cuse-tpm,id=tpm0,path=/dev/vtpm -device tpm-tis,tpmdev=tpm0
+@end example
+
 @end table
 
 ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 7b235ee..53f6d9e 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -3875,7 +3875,7 @@ Arguments: None
 Example:
 
 -> { "execute": "query-tpm-types" }
-<- { "return": [ "passthrough" ] }
+<- { "return": [ "passthrough", "cuse-tpm" ] }
 
 EQMP
 
diff --git a/tpm.c b/tpm.c
index 0a3e3d5..c05d5d9 100644
--- a/tpm.c
+++ b/tpm.c
@@ -25,7 +25,7 @@ static QLIST_HEAD(, TPMBackend) tpm_backends =
 
 
 #define TPM_MAX_MODELS      1
-#define TPM_MAX_DRIVERS     1
+#define TPM_MAX_DRIVERS     2
 
 static TPMDriverOps const *be_drivers[TPM_MAX_DRIVERS] = {
     NULL,
@@ -272,6 +272,15 @@ static TPMInfo *qmp_query_tpm_inst(TPMBackend *drv)
             tpo->has_cancel_path = true;
         }
         break;
+    case TPM_TYPE_CUSE_TPM: {
+        TPMCuseOptions *cuse = g_new0(TPMCuseOptions, 1);
+        res->options->type = TPM_TYPE_OPTIONS_KIND_CUSE_TPM;
+        res->options->u.cuse_tpm = cuse;
+        if (drv->path) {
+            cuse->path = g_strdup(drv->path);
+        }
+        break;
+    }
     case TPM_TYPE__MAX:
         break;
     }
-- 
2.4.3


* [Qemu-devel] [PATCH v5 2/4] Introduce condition to notify waiters of completed command
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
@ 2016-01-04 15:23 ` Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 3/4] Introduce condition in TPM backend for notification Stefan Berger
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-04 15:23 UTC (permalink / raw)
  To: qemu-devel
  Cc: stefanb, mst, Stefan Berger, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

From: Stefan Berger <stefanb@linux.vnet.ibm.com>

Introduce a lock and a condition variable to notify anyone waiting for
the completion of a TPM command's execution by the backend (thread). The
backend uses the condition to signal any such waiter.
The condition needs to be signaled from two call sites: one is invoked
by the backend thread, the other by the bottom half thread.
We will use this signaling to wait for command completion before VM
suspend.
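The lock/condition pattern introduced here can be sketched with plain
pthreads; this is a minimal stand-in for QEMU's qemu_mutex/qemu_cond
wrappers, and the names and the tpm_busy flag are illustrative, not the
patch's exact code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Minimal stand-in for the QemuMutex/QemuCond pair the patch adds;
 * the names and the tpm_busy flag are illustrative, not QEMU API. */
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cmd_complete = PTHREAD_COND_INITIALIZER;
static bool tpm_busy;

/* Backend thread: deliver the result, then signal completion. */
static void *backend_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&state_lock);
    tpm_busy = false;                   /* result delivered */
    pthread_cond_signal(&cmd_complete);
    pthread_mutex_unlock(&state_lock);
    return NULL;
}

/* Waiter (e.g. a pre-save handler): block until the command completed.
 * The while loop guards against spurious wakeups. */
static void wait_for_completion(void)
{
    pthread_mutex_lock(&state_lock);
    while (tpm_busy) {
        pthread_cond_wait(&cmd_complete, &state_lock);
    }
    pthread_mutex_unlock(&state_lock);
}

/* Schedule a "command", wait for it, report whether it completed. */
bool exercise(void)
{
    pthread_t t;

    pthread_mutex_lock(&state_lock);
    tpm_busy = true;                    /* command scheduled */
    pthread_mutex_unlock(&state_lock);

    pthread_create(&t, NULL, backend_thread, NULL);
    wait_for_completion();
    pthread_join(t, NULL);
    return !tpm_busy;
}
```

Like pthread_cond_wait(), qemu_cond_wait() may wake spuriously, which is
why the waiter re-checks its predicate in a loop.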

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
---
 hw/tpm/tpm_int.h |  3 +++
 hw/tpm/tpm_tis.c | 14 ++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
index 6b2c9c9..70be1ad 100644
--- a/hw/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -30,6 +30,9 @@ struct TPMState {
     char *backend;
     TPMBackend *be_driver;
     TPMVersion be_tpm_version;
+
+    QemuMutex state_lock;
+    QemuCond cmd_complete;
 };
 
 #define TPM(obj) OBJECT_CHECK(TPMState, (obj), TYPE_TPM_TIS)
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index ff073d5..57f540e 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -366,6 +366,8 @@ static void tpm_tis_receive_bh(void *opaque)
     TPMTISEmuState *tis = &s->s.tis;
     uint8_t locty = s->locty_number;
 
+    qemu_mutex_lock(&s->state_lock);
+
     tpm_tis_sts_set(&tis->loc[locty],
                     TPM_TIS_STS_VALID | TPM_TIS_STS_DATA_AVAILABLE);
     tis->loc[locty].state = TPM_TIS_STATE_COMPLETION;
@@ -382,6 +384,10 @@ static void tpm_tis_receive_bh(void *opaque)
     tpm_tis_raise_irq(s, locty,
                       TPM_TIS_INT_DATA_AVAILABLE | TPM_TIS_INT_STS_VALID);
 #endif
+
+    /* notify of completed command */
+    qemu_cond_signal(&s->cmd_complete);
+    qemu_mutex_unlock(&s->state_lock);
 }
 
 /*
@@ -401,6 +407,11 @@ static void tpm_tis_receive_cb(TPMState *s, uint8_t locty,
         }
     }
 
+    qemu_mutex_lock(&s->state_lock);
+    /* notify of completed command */
+    qemu_cond_signal(&s->cmd_complete);
+    qemu_mutex_unlock(&s->state_lock);
+
     qemu_bh_schedule(tis->bh);
 }
 
@@ -1070,6 +1081,9 @@ static void tpm_tis_initfn(Object *obj)
     memory_region_init_io(&s->mmio, OBJECT(s), &tpm_tis_memory_ops,
                           s, "tpm-tis-mmio",
                           TPM_TIS_NUM_LOCALITIES << TPM_TIS_LOCALITY_SHIFT);
+
+    qemu_mutex_init(&s->state_lock);
+    qemu_cond_init(&s->cmd_complete);
 }
 
 static void tpm_tis_class_init(ObjectClass *klass, void *data)
-- 
2.4.3


* [Qemu-devel] [PATCH v5 3/4] Introduce condition in TPM backend for notification
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 2/4] Introduce condition to notify waiters of completed command Stefan Berger
@ 2016-01-04 15:23 ` Stefan Berger
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 4/4] Add support for VM suspend/resume for TPM TIS Stefan Berger
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-04 15:23 UTC (permalink / raw)
  To: qemu-devel
  Cc: stefanb, mst, Stefan Berger, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

From: Stefan Berger <stefanb@linux.vnet.ibm.com>

TPM backends suspend independently of the frontends. Here, too, we
need to be able to wait until a TPM command has been completely
processed.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
---
 hw/tpm/tpm_passthrough.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
index a4f4fe0..fda27e3 100644
--- a/hw/tpm/tpm_passthrough.c
+++ b/hw/tpm/tpm_passthrough.c
@@ -75,6 +75,10 @@ struct TPMPassthruState {
     TPMVersion tpm_version;
     ptm_cap cuse_cap; /* capabilities of the CUSE TPM */
     uint8_t cur_locty_number; /* last set locality */
+
+    QemuMutex state_lock;
+    QemuCond cmd_complete;  /* signaled once tpm_busy is false */
+    bool tpm_busy;
 };
 
 typedef struct TPMPassthruState TPMPassthruState;
@@ -249,6 +253,11 @@ static void tpm_passthrough_worker_thread(gpointer data,
         thr_parms->recv_data_callback(thr_parms->tpm_state,
                                       thr_parms->tpm_state->locty_number,
                                       selftest_done);
+        /* result delivered */
+        qemu_mutex_lock(&tpm_pt->state_lock);
+        tpm_pt->tpm_busy = false;
+        qemu_cond_signal(&tpm_pt->cmd_complete);
+        qemu_mutex_unlock(&tpm_pt->state_lock);
         break;
     case TPM_BACKEND_CMD_INIT:
     case TPM_BACKEND_CMD_END:
@@ -376,6 +385,7 @@ static void tpm_passthrough_reset(TPMBackend *tb)
     tpm_backend_thread_end(&tpm_pt->tbt);
 
     tpm_pt->had_startup_error = false;
+    tpm_pt->tpm_busy = false;
 }
 
 static int tpm_passthrough_init(TPMBackend *tb, TPMState *s,
@@ -453,6 +463,11 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
 
+    /* TPM considered busy once TPM Request scheduled for processing */
+    qemu_mutex_lock(&tpm_pt->state_lock);
+    tpm_pt->tpm_busy = true;
+    qemu_mutex_unlock(&tpm_pt->state_lock);
+
     tpm_backend_thread_deliver_request(&tpm_pt->tbt);
 }
 
@@ -721,6 +736,11 @@ static const TPMDriverOps tpm_passthrough_driver = {
 
 static void tpm_passthrough_inst_init(Object *obj)
 {
+    TPMBackend *tb = TPM_BACKEND(obj);
+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+
+    qemu_mutex_init(&tpm_pt->state_lock);
+    qemu_cond_init(&tpm_pt->cmd_complete);
 }
 
 static void tpm_passthrough_inst_finalize(Object *obj)
-- 
2.4.3


* [Qemu-devel] [PATCH v5 4/4] Add support for VM suspend/resume for TPM TIS
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
                   ` (2 preceding siblings ...)
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 3/4] Introduce condition in TPM backend for notification Stefan Berger
@ 2016-01-04 15:23 ` Stefan Berger
  2016-01-05  1:26 ` [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Xu, Quan
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-04 15:23 UTC (permalink / raw)
  To: qemu-devel
  Cc: stefanb, mst, Stefan Berger, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

From: Stefan Berger <stefanb@linux.vnet.ibm.com>

Extend the TPM TIS code to support suspend/resume. If a command is being
processed by the external TPM at suspend time, wait for the command to
complete so that the result can be captured. If the bottom half did not
run, invoke the function the bottom half would have run; this makes the
resume operation work.

The passthrough backend does not support suspend/resume and is
therefore blocked from suspend/resume and migration.

The CUSE TPM's supported capabilities are probed; if sufficient
capabilities are implemented, suspend/resume, snapshotting and
migration are supported with the CUSE TPM.
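The capability probe boils down to checking that every required bit is
set; a minimal sketch follows. The capability values below are made up
for illustration, the real ones live in tpm_ioctl.h:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative capability bits; the real values are defined in
 * tpm_ioctl.h and are queried from the CUSE TPM at startup. */
#define CAP_GET_STATEBLOB  (1u << 4)
#define CAP_SET_STATEBLOB  (1u << 5)
#define CAP_STOP           (1u << 6)

/* Mirrors the TPM_CUSE_IMPLEMENTS_ALL idea: migration is allowed only
 * if every required capability bit is present in 'caps'. */
int cuse_tpm_supports_migration(uint64_t caps)
{
    uint64_t required = CAP_GET_STATEBLOB | CAP_SET_STATEBLOB | CAP_STOP;

    return (caps & required) == required;
}
```

If any required bit is missing, the patch registers a migration blocker
instead of failing outright.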

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
---
 hw/tpm/tpm_passthrough.c     | 129 +++++++++++++++++++++++--
 hw/tpm/tpm_tis.c             | 137 +++++++++++++++++++++++++-
 hw/tpm/tpm_tis.h             |   2 +
 hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++++++++++++++++++++++
 hw/tpm/tpm_util.h            |   7 ++
 include/sysemu/tpm_backend.h |  12 +++
 6 files changed, 502 insertions(+), 8 deletions(-)

diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
index fda27e3..cef3696 100644
--- a/hw/tpm/tpm_passthrough.c
+++ b/hw/tpm/tpm_passthrough.c
@@ -34,6 +34,7 @@
 #include "tpm_tis.h"
 #include "tpm_util.h"
 #include "tpm_ioctl.h"
+#include "migration/migration.h"
 
 #define DEBUG_TPM 0
 
@@ -49,6 +50,7 @@
 #define TYPE_TPM_CUSE "tpm-cuse"
 
 static const TPMDriverOps tpm_passthrough_driver;
+static const VMStateDescription vmstate_tpm_cuse;
 
 /* data structures */
 typedef struct TPMPassthruThreadParams {
@@ -79,6 +81,10 @@ struct TPMPassthruState {
     QemuMutex state_lock;
     QemuCond cmd_complete;  /* signaled once tpm_busy is false */
     bool tpm_busy;
+
+    Error *migration_blocker;
+
+    TPMBlobBuffers tpm_blobs;
 };
 
 typedef struct TPMPassthruState TPMPassthruState;
@@ -281,6 +287,10 @@ static void tpm_passthrough_shutdown(TPMPassthruState *tpm_pt)
                          strerror(errno));
         }
     }
+    if (tpm_pt->migration_blocker) {
+        migrate_del_blocker(tpm_pt->migration_blocker);
+        error_free(tpm_pt->migration_blocker);
+    }
 }
 
 /*
@@ -335,12 +345,14 @@ static int tpm_passthrough_cuse_check_caps(TPMPassthruState *tpm_pt)
 /*
  * Initialize the external CUSE TPM
  */
-static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt)
+static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt,
+                                     bool is_resume)
 {
     int rc = 0;
-    ptm_init init = {
-        .u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE,
-    };
+    ptm_init init = { .u.req.init_flags = 0 };
+    if (is_resume) {
+        init.u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE;
+    }
 
     if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
         if (ioctl(tpm_pt->tpm_fd, PTM_INIT, &init) < 0) {
@@ -369,7 +381,7 @@ static int tpm_passthrough_startup_tpm(TPMBackend *tb)
                               tpm_passthrough_worker_thread,
                               &tpm_pt->tpm_thread_params);
 
-    tpm_passthrough_cuse_init(tpm_pt);
+    tpm_passthrough_cuse_init(tpm_pt, false);
 
     return 0;
 }
@@ -441,6 +453,32 @@ static int tpm_passthrough_reset_tpm_established_flag(TPMBackend *tb,
     return rc;
 }
 
+static int tpm_cuse_get_state_blobs(TPMBackend *tb,
+                                    bool decrypted_blobs,
+                                    TPMBlobBuffers *tpm_blobs)
+{
+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+
+    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
+
+    return tpm_util_cuse_get_state_blobs(tpm_pt->tpm_fd, decrypted_blobs,
+                                         tpm_blobs);
+}
+
+static int tpm_cuse_set_state_blobs(TPMBackend *tb,
+                                    TPMBlobBuffers *tpm_blobs)
+{
+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+
+    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
+
+    if (tpm_util_cuse_set_state_blobs(tpm_pt->tpm_fd, tpm_blobs)) {
+        return 1;
+    }
+
+    return tpm_passthrough_cuse_init(tpm_pt, true);
+}
+
 static bool tpm_passthrough_get_startup_error(TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
@@ -463,7 +501,7 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
 
-    /* TPM considered busy once TPM Request scheduled for processing */
+    /* TPM considered busy once TPM request scheduled for processing */
     qemu_mutex_lock(&tpm_pt->state_lock);
     tpm_pt->tpm_busy = true;
     qemu_mutex_unlock(&tpm_pt->state_lock);
@@ -576,6 +614,25 @@ static int tpm_passthrough_open_sysfs_cancel(TPMBackend *tb)
     return fd;
 }
 
+static void tpm_passthrough_block_migration(TPMPassthruState *tpm_pt)
+{
+    ptm_cap caps;
+
+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+        caps = PTM_CAP_GET_STATEBLOB | PTM_CAP_SET_STATEBLOB |
+               PTM_CAP_STOP;
+        if (!TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, caps)) {
+            error_setg(&tpm_pt->migration_blocker,
+                       "Migration disabled: CUSE TPM lacks necessary capabilities");
+            migrate_add_blocker(tpm_pt->migration_blocker);
+        }
+    } else {
+        error_setg(&tpm_pt->migration_blocker,
+                   "Migration disabled: Passthrough TPM does not support migration");
+        migrate_add_blocker(tpm_pt->migration_blocker);
+    }
+}
+
 static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
 {
     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
@@ -617,7 +674,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
             goto err_close_tpmdev;
         }
         /* init TPM for probing */
-        if (tpm_passthrough_cuse_init(tpm_pt)) {
+        if (tpm_passthrough_cuse_init(tpm_pt, false)) {
             goto err_close_tpmdev;
         }
     }
@@ -634,6 +691,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
         }
     }
 
+    tpm_passthrough_block_migration(tpm_pt);
 
     return 0;
 
@@ -741,10 +799,13 @@ static void tpm_passthrough_inst_init(Object *obj)
 
     qemu_mutex_init(&tpm_pt->state_lock);
     qemu_cond_init(&tpm_pt->cmd_complete);
+
+    vmstate_register(NULL, -1, &vmstate_tpm_cuse, obj);
 }
 
 static void tpm_passthrough_inst_finalize(Object *obj)
 {
+    vmstate_unregister(NULL, &vmstate_tpm_cuse, obj);
 }
 
 static void tpm_passthrough_class_init(ObjectClass *klass, void *data)
@@ -777,6 +838,60 @@ static const char *tpm_passthrough_cuse_create_desc(void)
     return "CUSE TPM backend driver";
 }
 
+static void tpm_cuse_pre_save(void *opaque)
+{
+    TPMPassthruState *tpm_pt = opaque;
+    TPMBackend *tb = &tpm_pt->parent;
+
+    qemu_mutex_lock(&tpm_pt->state_lock);
+    /* wait for the TPM to finish processing; loop guards spurious wakeups */
+    while (tpm_pt->tpm_busy) {
+        qemu_cond_wait(&tpm_pt->cmd_complete, &tpm_pt->state_lock);
+    }
+    qemu_mutex_unlock(&tpm_pt->state_lock);
+
+    /* get the decrypted state blobs from the TPM */
+    tpm_cuse_get_state_blobs(tb, true, &tpm_pt->tpm_blobs);
+}
+
+static int tpm_cuse_post_load(void *opaque,
+                              int version_id __attribute__((unused)))
+{
+    TPMPassthruState *tpm_pt = opaque;
+    TPMBackend *tb = &tpm_pt->parent;
+
+    return tpm_cuse_set_state_blobs(tb, &tpm_pt->tpm_blobs);
+}
+
+static const VMStateDescription vmstate_tpm_cuse = {
+    .name = "cuse-tpm",
+    .version_id = 1,
+    .minimum_version_id = 0,
+    .minimum_version_id_old = 0,
+    .pre_save  = tpm_cuse_pre_save,
+    .post_load = tpm_cuse_post_load,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(tpm_blobs.permanent_flags, TPMPassthruState),
+        VMSTATE_UINT32(tpm_blobs.permanent.size, TPMPassthruState),
+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.permanent.buffer,
+                                     TPMPassthruState, 1, NULL, 0,
+                                     tpm_blobs.permanent.size),
+
+        VMSTATE_UINT32(tpm_blobs.volatil_flags, TPMPassthruState),
+        VMSTATE_UINT32(tpm_blobs.volatil.size, TPMPassthruState),
+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.volatil.buffer,
+                                     TPMPassthruState, 1, NULL, 0,
+                                     tpm_blobs.volatil.size),
+
+        VMSTATE_UINT32(tpm_blobs.savestate_flags, TPMPassthruState),
+        VMSTATE_UINT32(tpm_blobs.savestate.size, TPMPassthruState),
+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.savestate.buffer,
+                                     TPMPassthruState, 1, NULL, 0,
+                                     tpm_blobs.savestate.size),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const TPMDriverOps tpm_cuse_driver = {
     .type                     = TPM_TYPE_CUSE_TPM,
     .opts                     = tpm_passthrough_cmdline_opts,
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index 57f540e..61b26d1 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -366,6 +366,8 @@ static void tpm_tis_receive_bh(void *opaque)
     TPMTISEmuState *tis = &s->s.tis;
     uint8_t locty = s->locty_number;
 
+    tis->bh_scheduled = false;
+
     qemu_mutex_lock(&s->state_lock);
 
     tpm_tis_sts_set(&tis->loc[locty],
@@ -413,6 +415,8 @@ static void tpm_tis_receive_cb(TPMState *s, uint8_t locty,
     qemu_mutex_unlock(&s->state_lock);
 
     qemu_bh_schedule(tis->bh);
+
+    tis->bh_scheduled = true;
 }
 
 /*
@@ -1028,9 +1032,140 @@ static void tpm_tis_reset(DeviceState *dev)
     tpm_tis_do_startup_tpm(s);
 }
 
+
+/* persistent state handling */
+
+static void tpm_tis_pre_save(void *opaque)
+{
+    TPMState *s = opaque;
+    TPMTISEmuState *tis = &s->s.tis;
+    uint8_t locty = tis->active_locty;
+
+    DPRINTF("tpm_tis: suspend: locty = %d : r_offset = %d, w_offset = %d\n",
+            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
+#ifdef DEBUG_TIS
+    tpm_tis_dump_state(opaque, 0);
+#endif
+
+    qemu_mutex_lock(&s->state_lock);
+
+    /* wait for outstanding request to complete */
+    if (TPM_TIS_IS_VALID_LOCTY(locty) &&
+        tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
+        /*
+         * If we get here when the bh is scheduled but did not run,
+         * we won't get notified...
+         */
+        if (!tis->bh_scheduled) {
+            /* backend thread to notify us */
+            qemu_cond_wait(&s->cmd_complete, &s->state_lock);
+        }
+        if (tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
+            /* bottom half did not run - run its function */
+            qemu_mutex_unlock(&s->state_lock);
+            tpm_tis_receive_bh(opaque);
+            qemu_mutex_lock(&s->state_lock);
+        }
+    }
+
+    qemu_mutex_unlock(&s->state_lock);
+
+    /* copy current active read or write buffer into the buffer
+       written to disk */
+    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
+        switch (tis->loc[locty].state) {
+        case TPM_TIS_STATE_RECEPTION:
+            memcpy(tis->buf,
+                   tis->loc[locty].w_buffer.buffer,
+                   MIN(sizeof(tis->buf),
+                       tis->loc[locty].w_buffer.size));
+            tis->offset = tis->loc[locty].w_offset;
+            break;
+        case TPM_TIS_STATE_COMPLETION:
+            memcpy(tis->buf,
+                   tis->loc[locty].r_buffer.buffer,
+                   MIN(sizeof(tis->buf),
+                       tis->loc[locty].r_buffer.size));
+            tis->offset = tis->loc[locty].r_offset;
+            break;
+        default:
+            /* leak nothing */
+            memset(tis->buf, 0x0, sizeof(tis->buf));
+            break;
+        }
+    }
+}
+
+static int tpm_tis_post_load(void *opaque,
+                             int version_id __attribute__((unused)))
+{
+    TPMState *s = opaque;
+    TPMTISEmuState *tis = &s->s.tis;
+
+    uint8_t locty = tis->active_locty;
+
+    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
+        switch (tis->loc[locty].state) {
+        case TPM_TIS_STATE_RECEPTION:
+            memcpy(tis->loc[locty].w_buffer.buffer,
+                   tis->buf,
+                   MIN(sizeof(tis->buf),
+                       tis->loc[locty].w_buffer.size));
+            tis->loc[locty].w_offset = tis->offset;
+            break;
+        case TPM_TIS_STATE_COMPLETION:
+            memcpy(tis->loc[locty].r_buffer.buffer,
+                   tis->buf,
+                   MIN(sizeof(tis->buf),
+                       tis->loc[locty].r_buffer.size));
+            tis->loc[locty].r_offset = tis->offset;
+            break;
+        default:
+            break;
+        }
+    }
+
+    DPRINTF("tpm_tis: resume : locty = %d : r_offset = %d, w_offset = %d\n",
+            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
+
+    return 0;
+}
+
+static const VMStateDescription vmstate_locty = {
+    .name = "loc",
+    .version_id = 1,
+    .minimum_version_id = 0,
+    .minimum_version_id_old = 0,
+    .fields      = (VMStateField[]) {
+        VMSTATE_UINT32(state, TPMLocality),
+        VMSTATE_UINT32(inte, TPMLocality),
+        VMSTATE_UINT32(ints, TPMLocality),
+        VMSTATE_UINT8(access, TPMLocality),
+        VMSTATE_UINT32(sts, TPMLocality),
+        VMSTATE_UINT32(iface_id, TPMLocality),
+        VMSTATE_END_OF_LIST(),
+    }
+};
+
 static const VMStateDescription vmstate_tpm_tis = {
     .name = "tpm",
-    .unmigratable = 1,
+    .version_id = 1,
+    .minimum_version_id = 0,
+    .minimum_version_id_old = 0,
+    .pre_save  = tpm_tis_pre_save,
+    .post_load = tpm_tis_post_load,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(s.tis.offset, TPMState),
+        VMSTATE_BUFFER(s.tis.buf, TPMState),
+        VMSTATE_UINT8(s.tis.active_locty, TPMState),
+        VMSTATE_UINT8(s.tis.aborting_locty, TPMState),
+        VMSTATE_UINT8(s.tis.next_locty, TPMState),
+
+        VMSTATE_STRUCT_ARRAY(s.tis.loc, TPMState, TPM_TIS_NUM_LOCALITIES, 1,
+                             vmstate_locty, TPMLocality),
+
+        VMSTATE_END_OF_LIST()
+    }
 };
 
 static Property tpm_tis_properties[] = {
diff --git a/hw/tpm/tpm_tis.h b/hw/tpm/tpm_tis.h
index a1df41f..b7fc0ea 100644
--- a/hw/tpm/tpm_tis.h
+++ b/hw/tpm/tpm_tis.h
@@ -54,6 +54,8 @@ typedef struct TPMLocality {
 
 typedef struct TPMTISEmuState {
     QEMUBH *bh;
+    bool bh_scheduled; /* bh scheduled but did not run yet */
+
     uint32_t offset;
     uint8_t buf[TPM_TIS_BUFFER_MAX];
 
diff --git a/hw/tpm/tpm_util.c b/hw/tpm/tpm_util.c
index 4ace585..0e0fcb8 100644
--- a/hw/tpm/tpm_util.c
+++ b/hw/tpm/tpm_util.c
@@ -21,6 +21,17 @@
 
 #include "tpm_util.h"
 #include "tpm_int.h"
+#include "tpm_ioctl.h"
+#include "qemu/error-report.h"
+
+#define DEBUG_TPM 0
+
+#define DPRINTF(fmt, ...) do { \
+    if (DEBUG_TPM) { \
+        fprintf(stderr, fmt, ## __VA_ARGS__); \
+    } \
+} while (0)
+
 
 /*
  * A basic test of a TPM device. We expect a well formatted response header
@@ -124,3 +135,215 @@ int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version)
 
     return 1;
 }
+
+static void tpm_sized_buffer_reset(TPMSizedBuffer *tsb)
+{
+    g_free(tsb->buffer);
+    tsb->buffer = NULL;
+    tsb->size = 0;
+}
+
+/*
+ * Transfer a TPM state blob from the TPM into a provided buffer.
+ *
+ * @fd: file descriptor to talk to the CUSE TPM
+ * @type: the type of blob to transfer
+ * @decrypted_blob: whether we request to receive decrypted blobs
+ * @tsb: the TPMSizedBuffer to fill with the blob
+ * @flags: the flags to return to the caller
+ */
+static int tpm_util_cuse_get_state_blob(int fd,
+                                        uint8_t type,
+                                        bool decrypted_blob,
+                                        TPMSizedBuffer *tsb,
+                                        uint32_t *flags)
+{
+    ptm_getstate pgs;
+    uint16_t offset = 0;
+    ptm_res res;
+    ssize_t n;
+    size_t to_read;
+
+    tpm_sized_buffer_reset(tsb);
+
+    pgs.u.req.state_flags = (decrypted_blob) ? PTM_STATE_FLAG_DECRYPTED : 0;
+    pgs.u.req.type = type;
+    pgs.u.req.offset = offset;
+
+    if (ioctl(fd, PTM_GET_STATEBLOB, &pgs) < 0) {
+        error_report("CUSE TPM PTM_GET_STATEBLOB ioctl failed: %s",
+                     strerror(errno));
+        goto err_exit;
+    }
+    res = pgs.u.resp.tpm_result;
+    if (res != 0 && (res & 0x800) == 0) {
+        error_report("Getting the stateblob (type %d) failed with a TPM "
+                     "error 0x%x", type, res);
+        goto err_exit;
+    }
+
+    *flags = pgs.u.resp.state_flags;
+
+    tsb->buffer = g_malloc(pgs.u.resp.totlength);
+    memcpy(tsb->buffer, pgs.u.resp.data, pgs.u.resp.length);
+    tsb->size = pgs.u.resp.length;
+
+    /* if there are bytes left to get use read() interface */
+    while (tsb->size < pgs.u.resp.totlength) {
+        to_read = pgs.u.resp.totlength - tsb->size;
+        if (unlikely(to_read > SSIZE_MAX)) {
+            to_read = SSIZE_MAX;
+        }
+
+        n = read(fd, &tsb->buffer[tsb->size], to_read);
+        if (n != to_read) {
+            error_report("Could not read stateblob (type %d) : %s",
+                         type, strerror(errno));
+            goto err_exit;
+        }
+        tsb->size += to_read;
+    }
+
+    DPRINTF("tpm_util: got state blob type %d, %d bytes, flags 0x%08x, "
+            "decrypted=%d\n", type, tsb->size, *flags, decrypted_blob);
+
+    return 0;
+
+err_exit:
+    return 1;
+}
+
+int tpm_util_cuse_get_state_blobs(int tpm_fd,
+                                  bool decrypted_blobs,
+                                  TPMBlobBuffers *tpm_blobs)
+{
+    if (tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
+                                     decrypted_blobs,
+                                     &tpm_blobs->permanent,
+                                     &tpm_blobs->permanent_flags) ||
+       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
+                                     decrypted_blobs,
+                                     &tpm_blobs->volatil,
+                                     &tpm_blobs->volatil_flags) ||
+       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
+                                     decrypted_blobs,
+                                     &tpm_blobs->savestate,
+                                     &tpm_blobs->savestate_flags)) {
+        goto err_exit;
+    }
+
+    return 0;
+
+ err_exit:
+    tpm_sized_buffer_reset(&tpm_blobs->volatil);
+    tpm_sized_buffer_reset(&tpm_blobs->permanent);
+    tpm_sized_buffer_reset(&tpm_blobs->savestate);
+
+    return 1;
+}
+
+static int tpm_util_cuse_do_set_stateblob_ioctl(int fd,
+                                                uint32_t flags,
+                                                uint32_t type,
+                                                uint32_t length)
+{
+    ptm_setstate pss;
+
+    pss.u.req.state_flags = flags;
+    pss.u.req.type = type;
+    pss.u.req.length = length;
+
+    if (ioctl(fd, PTM_SET_STATEBLOB, &pss) < 0) {
+        error_report("CUSE TPM PTM_SET_STATEBLOB ioctl failed: %s",
+                     strerror(errno));
+        return 1;
+    }
+
+    if (pss.u.resp.tpm_result != 0) {
+        error_report("Setting the stateblob (type %d) failed with a TPM "
+                     "error 0x%x", type, pss.u.resp.tpm_result);
+        return 1;
+    }
+
+    return 0;
+}
+
+
+/*
+ * Transfer a TPM state blob to the CUSE TPM.
+ *
+ * @fd: file descriptor to talk to the CUSE TPM
+ * @type: the type of TPM state blob to transfer
+ * @tsb: TPMSizedBuffer containing the TPM state blob
+ * @flags: Flags describing the (encryption) state of the TPM state blob
+ */
+static int tpm_util_cuse_set_state_blob(int fd,
+                                        uint32_t type,
+                                        TPMSizedBuffer *tsb,
+                                        uint32_t flags)
+{
+    uint32_t offset = 0;
+    ssize_t n;
+    size_t to_write;
+
+    /* initiate the transfer to the CUSE TPM */
+    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
+        return 1;
+    }
+
+    /* use the write() interface for transferring the state blob */
+    while (offset < tsb->size) {
+        to_write = tsb->size - offset;
+        if (unlikely(to_write > SSIZE_MAX)) {
+            to_write = SSIZE_MAX;
+        }
+
+        n = write(fd, &tsb->buffer[offset], to_write);
+        if (n != to_write) {
+            error_report("Writing the stateblob (type %d) failed: %s",
+                         type, strerror(errno));
+            goto err_exit;
+        }
+        offset += to_write;
+    }
+
+    /* indicate that the transfer is finished */
+    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
+        goto err_exit;
+    }
+
+    DPRINTF("tpm_util: set the state blob type %d, %d bytes, flags 0x%08x\n",
+            type, tsb->size, flags);
+
+    return 0;
+
+err_exit:
+    return 1;
+}
+
+int tpm_util_cuse_set_state_blobs(int tpm_fd,
+                                  TPMBlobBuffers *tpm_blobs)
+{
+    ptm_res res;
+
+    if (ioctl(tpm_fd, PTM_STOP, &res) < 0) {
+        error_report("tpm_passthrough: Could not stop "
+                     "the CUSE TPM: %s (%i)",
+                     strerror(errno), errno);
+        return 1;
+    }
+
+    if (tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
+                                     &tpm_blobs->permanent,
+                                     tpm_blobs->permanent_flags) ||
+        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
+                                     &tpm_blobs->volatil,
+                                     tpm_blobs->volatil_flags) ||
+        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
+                                     &tpm_blobs->savestate,
+                                     tpm_blobs->savestate_flags)) {
+        return 1;
+    }
+
+    return 0;
+}
diff --git a/hw/tpm/tpm_util.h b/hw/tpm/tpm_util.h
index e7f354a..04f5afd 100644
--- a/hw/tpm/tpm_util.h
+++ b/hw/tpm/tpm_util.h
@@ -25,4 +25,11 @@
 
 int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version);
 
+int tpm_util_cuse_get_state_blobs(int tpm_fd,
+                                  bool decrypted_blobs,
+                                  TPMBlobBuffers *tpm_blobs);
+
+int tpm_util_cuse_set_state_blobs(int tpm_fd,
+                                  TPMBlobBuffers *tpm_blobs);
+
 #endif /* TPM_TPM_UTILS_H */
diff --git a/include/sysemu/tpm_backend.h b/include/sysemu/tpm_backend.h
index 0a366be..92bc3e4 100644
--- a/include/sysemu/tpm_backend.h
+++ b/include/sysemu/tpm_backend.h
@@ -63,6 +63,18 @@ typedef struct TPMSizedBuffer {
     uint8_t  *buffer;
 } TPMSizedBuffer;
 
+/* blobs from the TPM; part of VM state when migrating */
+typedef struct TPMBlobBuffers {
+    uint32_t permanent_flags;
+    TPMSizedBuffer permanent;
+
+    uint32_t volatil_flags;
+    TPMSizedBuffer volatil;
+
+    uint32_t savestate_flags;
+    TPMSizedBuffer savestate;
+} TPMBlobBuffers;
+
 struct TPMDriverOps {
     enum TpmType type;
     const QemuOptDesc *opts;
-- 
2.4.3


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
                   ` (3 preceding siblings ...)
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 4/4] Add support for VM suspend/resume for TPM TIS Stefan Berger
@ 2016-01-05  1:26 ` Xu, Quan
  2016-01-05  3:36   ` Stefan Berger
  2016-01-20  1:40 ` Xu, Quan
  2016-01-20 14:58 ` Daniel P. Berrange
  6 siblings, 1 reply; 96+ messages in thread
From: Xu, Quan @ 2016-01-05  1:26 UTC (permalink / raw)
  To: Stefan Berger, qemu-devel; +Cc: hagen.lauer, jb613w, silviu.vlasceanu, mst

On January 04 2016 11:23 PM, <stefanb@us.ibm.com> wrote:
> The following series of patches extends TPM support with an external TPM that
> offers a Linux CUSE (character device in userspace) interface. This TPM lets
> each VM access its own private vTPM.
> The CUSE TPM supports suspend/resume and migration. Much out-of-band
> functionality necessary to control the CUSE TPM is implemented using ioctls.
> 

Stefan,
It is a good solution. Could you share more about this architecture? Do you have an existing doc?


Quan

> This series of patches applies to 38a762fe.
> 
> Stefan Berger (4):
>   Provide support for the CUSE TPM
>   Introduce condition to notify waiters of completed command
>   Introduce condition in TPM backend for notification
>   Add support for VM suspend/resume for TPM TIS
> 


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-05  1:26 ` [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Xu, Quan
@ 2016-01-05  3:36   ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-05  3:36 UTC (permalink / raw)
  To: Xu, Quan; +Cc: hagen.lauer, jb613w, silviu.vlasceanu, qemu-devel, mst


"Xu, Quan" <quan.xu@intel.com> wrote on 01/04/2016 08:26:03 PM:

> On January 04 2016 11:23 PM, <stefanb@us.ibm.com> wrote:
> > The following series of patches extends TPM support with an external
> > TPM that offers a Linux CUSE (character device in userspace) interface.
> > This TPM lets each VM access its own private vTPM.
> > The CUSE TPM supports suspend/resume and migration. Much out-of-band
> > functionality necessary to control the CUSE TPM is implemented using
> > ioctls.
> > 
> 
> Stefan,
> this looks like a good solution. Could you share more about the
> architecture, if you have an existing doc?

The architecture is as follows:

An external tool (e.g., libvirt) starts the CUSE TPM, which then provides 
/dev/vtpm-<uuid> for the QEMU VM to talk to. QEMU receives the open file 
descriptor or device name on the command line. All TPM commands from the 
guest go straight into /dev/vtpm-<uuid> via the read()/write() interface, 
just like with the passthrough driver. Out-of-band control, which we need 
for proper vTPM emulation (setting the locality, getting and setting the 
vTPM's state blobs around suspend/resume/snapshotting/migration, resetting 
the vTPM after a VM reset, and shutting down the vTPM process when the VM 
shuts down), is done through the ioctl interface. The ioctl 
interface is defined in this file here:

https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

I do not have an existing doc but the github swtpm project contains a man 
page describing the ioctls:

https://github.com/stefanberger/swtpm/blob/master/man/man3/swtpm_ioctls.pod


I hope this helps us to make progress.

Thanks and regards,
   Stefan





* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
                   ` (4 preceding siblings ...)
  2016-01-05  1:26 ` [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Xu, Quan
@ 2016-01-20  1:40 ` Xu, Quan
  2016-01-20  9:23   ` Hagen Lauer
  2016-01-20 14:58 ` Daniel P. Berrange
  6 siblings, 1 reply; 96+ messages in thread
From: Xu, Quan @ 2016-01-20  1:40 UTC (permalink / raw)
  To: Stefan Berger, qemu-devel; +Cc: hagen.lauer, jb613w, silviu.vlasceanu, mst

Is anyone on Cc reviewing this patch set? Thanks.

-Quan

> On January 04, 2016 at 11:23pm, <stefanb@us.ibm.com> wrote:
> The following series of patches extends TPM support with an external TPM that
> offers a Linux CUSE (character device in userspace) interface. This TPM lets
> each VM access its own private vTPM.
> The CUSE TPM supports suspend/resume and migration. Much out-of-band
> functionality necessary to control the CUSE TPM is implemented using ioctls.
> 
> This series of patches applies to 38a762fe.
> 
> Stefan Berger (4):
>   Provide support for the CUSE TPM
>   Introduce condition to notify waiters of completed command
>   Introduce condition in TPM backend for notification
>   Add support for VM suspend/resume for TPM TIS
> 
>  hmp.c                        |   6 +
>  hw/tpm/tpm_int.h             |   4 +
>  hw/tpm/tpm_ioctl.h           | 215 +++++++++++++++++++++++
>  hw/tpm/tpm_passthrough.c     | 409
> +++++++++++++++++++++++++++++++++++++++++--
>  hw/tpm/tpm_tis.c             | 151 +++++++++++++++-
>  hw/tpm/tpm_tis.h             |   2 +
>  hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++
>  hw/tpm/tpm_util.h            |   7 +
>  include/sysemu/tpm_backend.h |  12 ++
>  qapi-schema.json             |  18 +-
>  qemu-options.hx              |  21 ++-
>  qmp-commands.hx              |   2 +-
>  tpm.c                        |  11 +-
>  13 files changed, 1062 insertions(+), 19 deletions(-)  create mode 100644
> hw/tpm/tpm_ioctl.h
> 
> --
> 2.4.3


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-20  1:40 ` Xu, Quan
@ 2016-01-20  9:23   ` Hagen Lauer
  2016-01-20  9:41     ` Xu, Quan
  0 siblings, 1 reply; 96+ messages in thread
From: Hagen Lauer @ 2016-01-20  9:23 UTC (permalink / raw)
  To: Xu, Quan, Stefan Berger, qemu-devel; +Cc: jb613w, silviu.vlasceanu, mst

Hi Quan,

I'm currently testing and reviewing. I also have some colleagues doing the same thing.

So far: I haven't had any issues with it - this seems to work quite nicely.

I will keep testing but I don't really expect any functional hiccups.

Best Regards,
Hagen

-----Original Message-----
From: Xu, Quan [mailto:quan.xu@intel.com] 
Sent: Wednesday, January 20, 2016 2:40 AM
To: Stefan Berger; qemu-devel@nongnu.org
Cc: mst@redhat.com; Hagen Lauer; silviu.vlasceanu@gmail.com; jb613w@att.com
Subject: RE: [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM

Is anyone on Cc reviewing this patch set? Thanks.

-Quan

> On January 04, 2016 at 11:23pm, <stefanb@us.ibm.com> wrote:
> The following series of patches extends TPM support with an external 
> TPM that offers a Linux CUSE (character device in userspace) 
> interface. This TPM lets each VM access its own private vTPM.
> The CUSE TPM supports suspend/resume and migration. Much out-of-band 
> functionality necessary to control the CUSE TPM is implemented using ioctls.
> 
> This series of patches applies to 38a762fe.
> 
> Stefan Berger (4):
>   Provide support for the CUSE TPM
>   Introduce condition to notify waiters of completed command
>   Introduce condition in TPM backend for notification
>   Add support for VM suspend/resume for TPM TIS
> 
>  hmp.c                        |   6 +
>  hw/tpm/tpm_int.h             |   4 +
>  hw/tpm/tpm_ioctl.h           | 215 +++++++++++++++++++++++
>  hw/tpm/tpm_passthrough.c     | 409
> +++++++++++++++++++++++++++++++++++++++++--
>  hw/tpm/tpm_tis.c             | 151 +++++++++++++++-
>  hw/tpm/tpm_tis.h             |   2 +
>  hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++
>  hw/tpm/tpm_util.h            |   7 +
>  include/sysemu/tpm_backend.h |  12 ++
>  qapi-schema.json             |  18 +-
>  qemu-options.hx              |  21 ++-
>  qmp-commands.hx              |   2 +-
>  tpm.c                        |  11 +-
>  13 files changed, 1062 insertions(+), 19 deletions(-)  create mode 
> 100644 hw/tpm/tpm_ioctl.h
> 
> --
> 2.4.3


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-20  9:23   ` Hagen Lauer
@ 2016-01-20  9:41     ` Xu, Quan
  0 siblings, 0 replies; 96+ messages in thread
From: Xu, Quan @ 2016-01-20  9:41 UTC (permalink / raw)
  To: Hagen Lauer, Stefan Berger, qemu-devel; +Cc: jb613w, silviu.vlasceanu, mst

Hagen,
     Thanks. Good news. :)

-Quan    
> On January 20, 2016 at 5:24, <hagen.lauer@huawei.com> wrote:
> Hi Quan,
> 
> I'm currently testing and reviewing. I also have some colleagues doing the same
> thing.
> 
> So far: I haven't had any issues with it - this seems to work quite nicely.
> 
> I will keep testing but I don't really expect any functional hiccups.
> 
> Best Regards,
> Hagen
> 
> -----Original Message-----
> From: Xu, Quan [mailto:quan.xu@intel.com]
> Sent: Wednesday, January 20, 2016 2:40 AM
> To: Stefan Berger; qemu-devel@nongnu.org
> Cc: mst@redhat.com; Hagen Lauer; silviu.vlasceanu@gmail.com;
> jb613w@att.com
> Subject: RE: [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
> 
> Is anyone on Cc reviewing this patch set? Thanks.
> 
> -Quan
> 
> > On January 04, 2016 at 11:23pm, <stefanb@us.ibm.com> wrote:
> > The following series of patches extends TPM support with an external
> > TPM that offers a Linux CUSE (character device in userspace)
> > interface. This TPM lets each VM access its own private vTPM.
> > The CUSE TPM supports suspend/resume and migration. Much out-of-band
> > functionality necessary to control the CUSE TPM is implemented using ioctls.
> >
> > This series of patches applies to 38a762fe.
> >
> > Stefan Berger (4):
> >   Provide support for the CUSE TPM
> >   Introduce condition to notify waiters of completed command
> >   Introduce condition in TPM backend for notification
> >   Add support for VM suspend/resume for TPM TIS
> >
> >  hmp.c                        |   6 +
> >  hw/tpm/tpm_int.h             |   4 +
> >  hw/tpm/tpm_ioctl.h           | 215 +++++++++++++++++++++++
> >  hw/tpm/tpm_passthrough.c     | 409
> > +++++++++++++++++++++++++++++++++++++++++--
> >  hw/tpm/tpm_tis.c             | 151 +++++++++++++++-
> >  hw/tpm/tpm_tis.h             |   2 +
> >  hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++
> >  hw/tpm/tpm_util.h            |   7 +
> >  include/sysemu/tpm_backend.h |  12 ++
> >  qapi-schema.json             |  18 +-
> >  qemu-options.hx              |  21 ++-
> >  qmp-commands.hx              |   2 +-
> >  tpm.c                        |  11 +-
> >  13 files changed, 1062 insertions(+), 19 deletions(-)  create mode
> > 100644 hw/tpm/tpm_ioctl.h
> >
> > --
> > 2.4.3


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
                   ` (5 preceding siblings ...)
  2016-01-20  1:40 ` Xu, Quan
@ 2016-01-20 14:58 ` Daniel P. Berrange
  2016-01-20 15:23   ` Stefan Berger
       [not found]   ` <201601201523.u0KFNwOH000398@d01av04.pok.ibm.com>
  6 siblings, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 14:58 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> The following series of patches extends TPM support with an
> external TPM that offers a Linux CUSE (character device in userspace)
> interface. This TPM lets each VM access its own private vTPM.

What is the backing store for this vTPM ? Are the vTPMs all
multiplexed onto the host's physical TPM or is there something
else going on ?

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
@ 2016-01-20 15:00   ` Daniel P. Berrange
  2016-01-20 15:31     ` Stefan Berger
       [not found]     ` <201601201532.u0KFW2q2019737@d03av03.boulder.ibm.com>
  2016-01-20 15:20   ` Michael S. Tsirkin
  1 sibling, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 15:00 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> From: Stefan Berger <stefanb@linux.vnet.ibm.com>
> 
> Rather than integrating TPM functionality into QEMU directly
> using the TPM emulation of libtpms, we now integrate an external
> emulated TPM device. This device is expected to implement a Linux
> CUSE interface (CUSE = character device in userspace).
> 
> QEMU talks to the CUSE TPM using much functionality of the
> passthrough driver. For example, the TPM commands and responses
> are sent to the CUSE TPM using the read()/write() interface.
> However, some out-of-band control needs to be done using the CUSE
> TPM's ioctls. The CUSE TPM currently defines and implements 15
> different ioctls for controlling certain life-cycle aspects of
> the emulated TPM. The ioctls can be regarded as a replacement for
> direct function calls to a TPM emulator if the TPM were to be
> directly integrated into QEMU.
> 
> One of the ioctls allows getting a bitmask of supported capabilities.
> Each returned bit indicates which capabilities have been implemented.
> An include file defining the various ioctls is added to QEMU.
> 
> The CUSE TPM and associated tools can be found here:
> 
> https://github.com/stefanberger/swtpm
> 
> (please use the latest version)
> 
> To use the external CUSE TPM, the CUSE TPM should be started as follows:
> 
> # terminate previously started CUSE TPM
> /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> 
> # start CUSE TPM
> /usr/bin/swtpm_cuse -n vtpm-test

IIUC, there needs to be one swtpm_cuse process running per QEMU
TPM device ?  This makes my wonder why we need this separate
process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem
of having to manage all these extra processes alongside QEMU
which can add a fair bit of mgmt overhead.

> 
> QEMU can then be started using the following parameters:
> 
> qemu-system-x86_64 \
> 	[...] \
>         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
>         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> 	[...]

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
  2016-01-20 15:00   ` Daniel P. Berrange
@ 2016-01-20 15:20   ` Michael S. Tsirkin
  2016-01-20 15:36     ` Stefan Berger
       [not found]     ` <201601201536.u0KFanwG004844@d01av04.pok.ibm.com>
  1 sibling, 2 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-20 15:20 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> From: Stefan Berger <stefanb@linux.vnet.ibm.com>
> 
> Rather than integrating TPM functionality into QEMU directly
> using the TPM emulation of libtpms, we now integrate an external
> emulated TPM device. This device is expected to implement a Linux
> CUSE interface (CUSE = character device in userspace).
> 
> QEMU talks to the CUSE TPM using much functionality of the
> passthrough driver. For example, the TPM commands and responses
> are sent to the CUSE TPM using the read()/write() interface.
> However, some out-of-band control needs to be done using the CUSE
> TPM's ioctls. The CUSE TPM currently defines and implements 15
> different ioctls for controlling certain life-cycle aspects of
> the emulated TPM. The ioctls can be regarded as a replacement for
> direct function calls to a TPM emulator if the TPM were to be
> directly integrated into QEMU.
> 
> One of the ioctls allows getting a bitmask of supported capabilities.
> Each returned bit indicates which capabilities have been implemented.
> An include file defining the various ioctls is added to QEMU.
> 
> The CUSE TPM and associated tools can be found here:
> 
> https://github.com/stefanberger/swtpm
> 
> (please use the latest version)
> 
> To use the external CUSE TPM, the CUSE TPM should be started as follows:
> 
> # terminate previously started CUSE TPM
> /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> 
> # start CUSE TPM
> /usr/bin/swtpm_cuse -n vtpm-test
> 
> QEMU can then be started using the following parameters:
> 
> qemu-system-x86_64 \
> 	[...] \
>         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
>         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> 	[...]
> 
> 
> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Eric Blake <eblake@redhat.com>

Before we add a dependency on this interface,
I'd rather see this interface supported in kernel
and not just in CUSE.


> ---
>  hmp.c                    |   6 ++
>  hw/tpm/tpm_int.h         |   1 +
>  hw/tpm/tpm_ioctl.h       | 215 +++++++++++++++++++++++++++++++++++++
>  hw/tpm/tpm_passthrough.c | 274 +++++++++++++++++++++++++++++++++++++++++++++--
>  qapi-schema.json         |  18 +++-
>  qemu-options.hx          |  21 +++-
>  qmp-commands.hx          |   2 +-
>  tpm.c                    |  11 +-
>  8 files changed, 530 insertions(+), 18 deletions(-)
>  create mode 100644 hw/tpm/tpm_ioctl.h
> 
> diff --git a/hmp.c b/hmp.c
> index c2b2c16..5f70aac 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -863,6 +863,12 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
>                             tpo->has_cancel_path ? ",cancel-path=" : "",
>                             tpo->has_cancel_path ? tpo->cancel_path : "");
>              break;
> +        case TPM_TYPE_OPTIONS_KIND_CUSE_TPM:
> +            tpo = ti->options->u.passthrough;
> +            monitor_printf(mon, "%s%s",
> +                           tpo->has_path ? ",path=" : "",
> +                           tpo->has_path ? tpo->path : "");
> +            break;
>          case TPM_TYPE_OPTIONS_KIND__MAX:
>              break;
>          }
> diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
> index f2f285b..6b2c9c9 100644
> --- a/hw/tpm/tpm_int.h
> +++ b/hw/tpm/tpm_int.h
> @@ -61,6 +61,7 @@ struct tpm_resp_hdr {
>  #define TPM_TAG_RSP_AUTH1_COMMAND 0xc5
>  #define TPM_TAG_RSP_AUTH2_COMMAND 0xc6
>  
> +#define TPM_SUCCESS               0
>  #define TPM_FAIL                  9
>  
>  #define TPM_ORD_ContinueSelfTest  0x53
> diff --git a/hw/tpm/tpm_ioctl.h b/hw/tpm/tpm_ioctl.h
> new file mode 100644
> index 0000000..a341e15
> --- /dev/null
> +++ b/hw/tpm/tpm_ioctl.h
> @@ -0,0 +1,215 @@
> +/*
> + * tpm_ioctl.h
> + *
> + * (c) Copyright IBM Corporation 2014, 2015.
> + *
> + * This file is licensed under the terms of the 3-clause BSD license
> + */
> +#ifndef _TPM_IOCTL_H_
> +#define _TPM_IOCTL_H_
> +
> +#include <stdint.h>
> +#include <sys/uio.h>
> +#include <sys/types.h>
> +#include <sys/ioctl.h>
> +
> +/*
> + * Every response from a command involving a TPM command execution must hold
> + * the ptm_res as the first element.
> + * ptm_res corresponds to the error code of a command executed by the TPM.
> + */
> +
> +typedef uint32_t ptm_res;
> +
> +/* PTM_GET_TPMESTABLISHED: get the establishment bit */
> +struct ptm_est {
> +    union {
> +        struct {
> +            ptm_res tpm_result;
> +            unsigned char bit; /* TPM established bit */
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/* PTM_RESET_TPMESTABLISHED: reset establishment bit */
> +struct ptm_reset_est {
> +    union {
> +        struct {
> +            uint8_t loc; /* locality to use */
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/* PTM_INIT */
> +struct ptm_init {
> +    union {
> +        struct {
> +            uint32_t init_flags; /* see definitions below */
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/* above init_flags */
> +#define PTM_INIT_FLAG_DELETE_VOLATILE (1 << 0)
> +    /* delete volatile state file after reading it */
> +
> +/* PTM_SET_LOCALITY */
> +struct ptm_loc {
> +    union {
> +        struct {
> +            uint8_t loc; /* locality to set */
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/* PTM_HASH_DATA: hash given data */
> +struct ptm_hdata {
> +    union {
> +        struct {
> +            uint32_t length;
> +            uint8_t data[4096];
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/*
> + * size of the TPM state blob to transfer; x86_64 can handle 8k,
> + * ppc64le only ~7k; keep the response below a 4k page size
> + */
> +#define PTM_STATE_BLOB_SIZE (3 * 1024)
> +
> +/*
> + * The following is the data structure to get state blobs from the TPM.
> + * If the size of the state blob exceeds the PTM_STATE_BLOB_SIZE, multiple reads
> + * with this ioctl and with adjusted offset are necessary. All bytes
> + * must be transferred and the transfer is done once the last byte has been
> + * returned.
> + * It is possible to use the read() interface for reading the data; however,
> + * the first bytes of the state blob will be part of the response to the ioctl();
> + * a subsequent read() is only necessary if the total length (totlength) exceeds
> + * the number of received bytes. seek() is not supported.
> + */
> +struct ptm_getstate {
> +    union {
> +        struct {
> +            uint32_t state_flags; /* may be: PTM_STATE_FLAG_DECRYPTED */
> +            uint32_t type;        /* which blob to pull */
> +            uint32_t offset;      /* offset from where to read */
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +            uint32_t state_flags; /* may be: PTM_STATE_FLAG_ENCRYPTED */
> +            uint32_t totlength;   /* total length that will be transferred */
> +            uint32_t length;      /* number of bytes in following buffer */
> +            uint8_t  data[PTM_STATE_BLOB_SIZE];
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/* TPM state blob types */
> +#define PTM_BLOB_TYPE_PERMANENT  1
> +#define PTM_BLOB_TYPE_VOLATILE   2
> +#define PTM_BLOB_TYPE_SAVESTATE  3
> +
> +/* state_flags above : */
> +#define PTM_STATE_FLAG_DECRYPTED     1 /* on input:  get decrypted state */
> +#define PTM_STATE_FLAG_ENCRYPTED     2 /* on output: state is encrypted */
> +
> +/*
> + * The following is the data structure to set state blobs in the TPM.
> + * If the size of the state blob exceeds the PTM_STATE_BLOB_SIZE, multiple
> + * 'writes' using this ioctl are necessary. The last packet is indicated
> + * by the length being smaller than the PTM_STATE_BLOB_SIZE.
> + * The very first packet may have a length indicator of '0' enabling
> + * a write() with all the bytes from a buffer. If the write() interface
> + * is used, a final ioctl with a non-full buffer must be made to indicate
> + * that all data were transferred (a write with 0 bytes would not work).
> + */
> +struct ptm_setstate {
> +    union {
> +        struct {
> +            uint32_t state_flags; /* may be PTM_STATE_FLAG_ENCRYPTED */
> +            uint32_t type;        /* which blob to set */
> +            uint32_t length;      /* length of the data;
> +                                     use 0 on the first packet to
> +                                     transfer using write() */
> +            uint8_t data[PTM_STATE_BLOB_SIZE];
> +        } req; /* request */
> +        struct {
> +            ptm_res tpm_result;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +/*
> + * PTM_GET_CONFIG: Data structure to get runtime configuration information
> + * such as which keys are applied.
> + */
> +struct ptm_getconfig {
> +    union {
> +        struct {
> +            ptm_res tpm_result;
> +            uint32_t flags;
> +        } resp; /* response */
> +    } u;
> +};
> +
> +#define PTM_CONFIG_FLAG_FILE_KEY        0x1
> +#define PTM_CONFIG_FLAG_MIGRATION_KEY   0x2
> +
> +
> +typedef uint64_t ptm_cap;
> +typedef struct ptm_est ptm_est;
> +typedef struct ptm_reset_est ptm_reset_est;
> +typedef struct ptm_loc ptm_loc;
> +typedef struct ptm_hdata ptm_hdata;
> +typedef struct ptm_init ptm_init;
> +typedef struct ptm_getstate ptm_getstate;
> +typedef struct ptm_setstate ptm_setstate;
> +typedef struct ptm_getconfig ptm_getconfig;
> +
> +/* capability flags returned by PTM_GET_CAPABILITY */
> +#define PTM_CAP_INIT               (1)
> +#define PTM_CAP_SHUTDOWN           (1<<1)
> +#define PTM_CAP_GET_TPMESTABLISHED (1<<2)
> +#define PTM_CAP_SET_LOCALITY       (1<<3)
> +#define PTM_CAP_HASHING            (1<<4)
> +#define PTM_CAP_CANCEL_TPM_CMD     (1<<5)
> +#define PTM_CAP_STORE_VOLATILE     (1<<6)
> +#define PTM_CAP_RESET_TPMESTABLISHED (1<<7)
> +#define PTM_CAP_GET_STATEBLOB      (1<<8)
> +#define PTM_CAP_SET_STATEBLOB      (1<<9)
> +#define PTM_CAP_STOP               (1<<10)
> +#define PTM_CAP_GET_CONFIG         (1<<11)
> +
> +enum {
> +    PTM_GET_CAPABILITY     = _IOR('P', 0, ptm_cap),
> +    PTM_INIT               = _IOWR('P', 1, ptm_init),
> +    PTM_SHUTDOWN           = _IOR('P', 2, ptm_res),
> +    PTM_GET_TPMESTABLISHED = _IOR('P', 3, ptm_est),
> +    PTM_SET_LOCALITY       = _IOWR('P', 4, ptm_loc),
> +    PTM_HASH_START         = _IOR('P', 5, ptm_res),
> +    PTM_HASH_DATA          = _IOWR('P', 6, ptm_hdata),
> +    PTM_HASH_END           = _IOR('P', 7, ptm_res),
> +    PTM_CANCEL_TPM_CMD     = _IOR('P', 8, ptm_res),
> +    PTM_STORE_VOLATILE     = _IOR('P', 9, ptm_res),
> +    PTM_RESET_TPMESTABLISHED = _IOWR('P', 10, ptm_reset_est),
> +    PTM_GET_STATEBLOB      = _IOWR('P', 11, ptm_getstate),
> +    PTM_SET_STATEBLOB      = _IOWR('P', 12, ptm_setstate),
> +    PTM_STOP               = _IOR('P', 13, ptm_res),
> +    PTM_GET_CONFIG         = _IOR('P', 14, ptm_getconfig),
> +};
> +
> +#endif /* _TPM_IOCTL_H */
> diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
> index be160c1..a4f4fe0 100644
> --- a/hw/tpm/tpm_passthrough.c
> +++ b/hw/tpm/tpm_passthrough.c
> @@ -33,6 +33,7 @@
>  #include "sysemu/tpm_backend_int.h"
>  #include "tpm_tis.h"
>  #include "tpm_util.h"
> +#include "tpm_ioctl.h"
>  
>  #define DEBUG_TPM 0
>  
> @@ -45,6 +46,7 @@
>  #define TYPE_TPM_PASSTHROUGH "tpm-passthrough"
>  #define TPM_PASSTHROUGH(obj) \
>      OBJECT_CHECK(TPMPassthruState, (obj), TYPE_TPM_PASSTHROUGH)
> +#define TYPE_TPM_CUSE "tpm-cuse"
>  
>  static const TPMDriverOps tpm_passthrough_driver;
>  
> @@ -71,12 +73,18 @@ struct TPMPassthruState {
>      bool had_startup_error;
>  
>      TPMVersion tpm_version;
> +    ptm_cap cuse_cap; /* capabilities of the CUSE TPM */
> +    uint8_t cur_locty_number; /* last set locality */
>  };
>  
>  typedef struct TPMPassthruState TPMPassthruState;
>  
>  #define TPM_PASSTHROUGH_DEFAULT_DEVICE "/dev/tpm0"
>  
> +#define TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt) (tpm_pt->cuse_cap != 0)
> +
> +#define TPM_CUSE_IMPLEMENTS_ALL(S, cap) (((S)->cuse_cap & (cap)) == (cap))
> +
>  /* functions */
>  
>  static void tpm_passthrough_cancel_cmd(TPMBackend *tb);
> @@ -123,7 +131,28 @@ static bool tpm_passthrough_is_selftest(const uint8_t *in, uint32_t in_len)
>      return false;
>  }
>  
> +static int tpm_passthrough_set_locality(TPMPassthruState *tpm_pt,
> +                                        uint8_t locty_number)
> +{
> +    ptm_loc loc;
> +
> +    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +        if (tpm_pt->cur_locty_number != locty_number) {
> +            loc.u.req.loc = locty_number;
> +            if (ioctl(tpm_pt->tpm_fd, PTM_SET_LOCALITY, &loc) < 0) {
> +                error_report("tpm_cuse: could not set locality on "
> +                             "CUSE TPM: %s",
> +                             strerror(errno));
> +                return -1;
> +            }
> +            tpm_pt->cur_locty_number = locty_number;
> +        }
> +    }
> +    return 0;
> +}
> +
>  static int tpm_passthrough_unix_tx_bufs(TPMPassthruState *tpm_pt,
> +                                        uint8_t locality_number,
>                                          const uint8_t *in, uint32_t in_len,
>                                          uint8_t *out, uint32_t out_len,
>                                          bool *selftest_done)
> @@ -132,6 +161,11 @@ static int tpm_passthrough_unix_tx_bufs(TPMPassthruState *tpm_pt,
>      bool is_selftest;
>      const struct tpm_resp_hdr *hdr;
>  
> +    ret = tpm_passthrough_set_locality(tpm_pt, locality_number);
> +    if (ret < 0) {
> +        goto err_exit;
> +    }
> +
>      tpm_pt->tpm_op_canceled = false;
>      tpm_pt->tpm_executing = true;
>      *selftest_done = false;
> @@ -182,10 +216,12 @@ err_exit:
>  }
>  
>  static int tpm_passthrough_unix_transfer(TPMPassthruState *tpm_pt,
> +                                         uint8_t locality_number,
>                                           const TPMLocality *locty_data,
>                                           bool *selftest_done)
>  {
>      return tpm_passthrough_unix_tx_bufs(tpm_pt,
> +                                        locality_number,
>                                          locty_data->w_buffer.buffer,
>                                          locty_data->w_offset,
>                                          locty_data->r_buffer.buffer,
> @@ -206,6 +242,7 @@ static void tpm_passthrough_worker_thread(gpointer data,
>      switch (cmd) {
>      case TPM_BACKEND_CMD_PROCESS_CMD:
>          tpm_passthrough_unix_transfer(tpm_pt,
> +                                      thr_parms->tpm_state->locty_number,
>                                        thr_parms->tpm_state->locty_data,
>                                        &selftest_done);
>  
> @@ -222,6 +259,93 @@ static void tpm_passthrough_worker_thread(gpointer data,
>  }
>  
>  /*
> + * Gracefully shut down the external CUSE TPM
> + */
> +static void tpm_passthrough_shutdown(TPMPassthruState *tpm_pt)
> +{
> +    ptm_res res;
> +
> +    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +        if (ioctl(tpm_pt->tpm_fd, PTM_SHUTDOWN, &res) < 0) {
> +            error_report("tpm_cuse: Could not cleanly shut down "
> +                         "the CUSE TPM: %s",
> +                         strerror(errno));
> +        }
> +    }
> +}
> +
> +/*
> + * Probe for the CUSE TPM by sending an ioctl() requesting its
> + * capability flags.
> + */
> +static int tpm_passthrough_cuse_probe(TPMPassthruState *tpm_pt)
> +{
> +    int rc = 0;
> +
> +    if (ioctl(tpm_pt->tpm_fd, PTM_GET_CAPABILITY, &tpm_pt->cuse_cap) < 0) {
> +        error_report("Error: CUSE TPM was requested, but probing failed");
> +        rc = -1;
> +    }
> +
> +    return rc;
> +}
> +
> +static int tpm_passthrough_cuse_check_caps(TPMPassthruState *tpm_pt)
> +{
> +    int rc = 0;
> +    ptm_cap caps = 0;
> +    const char *tpm = NULL;
> +
> +    /* check for min. required capabilities */
> +    switch (tpm_pt->tpm_version) {
> +    case TPM_VERSION_1_2:
> +        caps = PTM_CAP_INIT | PTM_CAP_SHUTDOWN | PTM_CAP_GET_TPMESTABLISHED |
> +               PTM_CAP_SET_LOCALITY;
> +        tpm = "1.2";
> +        break;
> +    case TPM_VERSION_2_0:
> +        caps = PTM_CAP_INIT | PTM_CAP_SHUTDOWN | PTM_CAP_GET_TPMESTABLISHED |
> +               PTM_CAP_SET_LOCALITY | PTM_CAP_RESET_TPMESTABLISHED;
> +        tpm = "2";
> +        break;
> +    case TPM_VERSION_UNSPEC:
> +        error_report("tpm_cuse: %s: TPM version has not been set",
> +                     __func__);
> +        return -1;
> +    }
> +
> +    if (!TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, caps)) {
> +        error_report("tpm_cuse: TPM does not implement minimum set of required "
> +                     "capabilities for TPM %s (0x%x)", tpm, (int)caps);
> +        rc = -1;
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * Initialize the external CUSE TPM
> + */
> +static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt)
> +{
> +    int rc = 0;
> +    ptm_init init = {
> +        .u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE,
> +    };
> +
> +    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +        if (ioctl(tpm_pt->tpm_fd, PTM_INIT, &init) < 0) {
> +            error_report("tpm_cuse: Detected CUSE TPM but could not "
> +                         "send INIT: %s",
> +                         strerror(errno));
> +            rc = -1;
> +        }
> +    }
> +
> +    return rc;
> +}
> +
> +/*
>   * Start the TPM (thread). If it had been started before, then terminate
>   * and start it again.
>   */
> @@ -236,6 +360,8 @@ static int tpm_passthrough_startup_tpm(TPMBackend *tb)
>                                tpm_passthrough_worker_thread,
>                                &tpm_pt->tpm_thread_params);
>  
> +    tpm_passthrough_cuse_init(tpm_pt);
> +
>      return 0;
>  }
>  
> @@ -266,14 +392,43 @@ static int tpm_passthrough_init(TPMBackend *tb, TPMState *s,
>  
>  static bool tpm_passthrough_get_tpm_established_flag(TPMBackend *tb)
>  {
> +    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
> +    ptm_est est;
> +
> +    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +        if (ioctl(tpm_pt->tpm_fd, PTM_GET_TPMESTABLISHED, &est) < 0) {
> +            error_report("tpm_cuse: Could not get the TPM established "
> +                         "flag from the CUSE TPM: %s",
> +                         strerror(errno));
> +            return false;
> +        }
> +        return (est.u.resp.bit != 0);
> +    }
>      return false;
>  }
>  
>  static int tpm_passthrough_reset_tpm_established_flag(TPMBackend *tb,
>                                                        uint8_t locty)
>  {
> +    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
> +    int rc = 0;
> +    ptm_reset_est ptmreset_est;
> +
>      /* only a TPM 2.0 will support this */
> -    return 0;
> +    if (tpm_pt->tpm_version == TPM_VERSION_2_0) {
> +        if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +            ptmreset_est.u.req.loc = tpm_pt->cur_locty_number;
> +
> +            if (ioctl(tpm_pt->tpm_fd, PTM_RESET_TPMESTABLISHED,
> +                      &ptmreset_est) < 0) {
> +                error_report("tpm_cuse: Could not reset the "
> +                             "establishment bit: %s",
> +                             strerror(errno));
> +                rc = -1;
> +            }
> +        }
> +    }
> +    return rc;
>  }
>  
>  static bool tpm_passthrough_get_startup_error(TPMBackend *tb)
> @@ -304,7 +459,8 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
>  static void tpm_passthrough_cancel_cmd(TPMBackend *tb)
>  {
>      TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
> -    int n;
> +    ptm_res res;
> +    static bool error_printed;
>  
>      /*
>       * As of Linux 3.7 the tpm_tis driver does not properly cancel
> @@ -313,17 +469,34 @@ static void tpm_passthrough_cancel_cmd(TPMBackend *tb)
>       * command, e.g., a command executed on the host.
>       */
>      if (tpm_pt->tpm_executing) {
> -        if (tpm_pt->cancel_fd >= 0) {
> -            n = write(tpm_pt->cancel_fd, "-", 1);
> -            if (n != 1) {
> -                error_report("Canceling TPM command failed: %s",
> -                             strerror(errno));
> -            } else {
> -                tpm_pt->tpm_op_canceled = true;
> +        if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +            if (TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, PTM_CAP_CANCEL_TPM_CMD)) {
> +                if (ioctl(tpm_pt->tpm_fd, PTM_CANCEL_TPM_CMD, &res) < 0) {
> +                    error_report("tpm_cuse: Could not cancel command on "
> +                                 "CUSE TPM: %s",
> +                                 strerror(errno));
> +                } else if (res != TPM_SUCCESS) {
> +                    if (!error_printed) {
> +                        error_report("TPM error code from command "
> +                                     "cancellation of CUSE TPM: 0x%x", res);
> +                        error_printed = true;
> +                    }
> +                } else {
> +                    tpm_pt->tpm_op_canceled = true;
> +                }
>              }
>          } else {
> -            error_report("Cannot cancel TPM command due to missing "
> -                         "TPM sysfs cancel entry");
> +            if (tpm_pt->cancel_fd >= 0) {
> +                if (write(tpm_pt->cancel_fd, "-", 1) != 1) {
> +                    error_report("Canceling TPM command failed: %s",
> +                                 strerror(errno));
> +                } else {
> +                    tpm_pt->tpm_op_canceled = true;
> +                }
> +            } else {
> +                error_report("Cannot cancel TPM command due to missing "
> +                             "TPM sysfs cancel entry");
> +            }
>          }
>      }
>  }
> @@ -353,6 +526,11 @@ static int tpm_passthrough_open_sysfs_cancel(TPMBackend *tb)
>      char *dev;
>      char path[PATH_MAX];
>  
> +    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
> +        /* not needed, but so we have a fd */
> +        return qemu_open("/dev/null", O_WRONLY);
> +    }
> +
>      if (tb->cancel_path) {
>          fd = qemu_open(tb->cancel_path, O_WRONLY);
>          if (fd < 0) {
> @@ -387,12 +565,22 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
>  {
>      TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
>      const char *value;
> +    bool have_cuse = false;
> +
> +    value = qemu_opt_get(opts, "type");
> +    if (value != NULL && !strcmp("cuse-tpm", value)) {
> +        have_cuse = true;
> +    }
>  
>      value = qemu_opt_get(opts, "cancel-path");
>      tb->cancel_path = g_strdup(value);
>  
>      value = qemu_opt_get(opts, "path");
>      if (!value) {
> +        if (have_cuse) {
> +            error_report("Missing path to access CUSE TPM");
> +            goto err_free_parameters;
> +        }
>          value = TPM_PASSTHROUGH_DEFAULT_DEVICE;
>      }
>  
> @@ -407,15 +595,36 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
>          goto err_free_parameters;
>      }
>  
> +    tpm_pt->cur_locty_number = ~0;
> +
> +    if (have_cuse) {
> +        if (tpm_passthrough_cuse_probe(tpm_pt)) {
> +            goto err_close_tpmdev;
> +        }
> +        /* init TPM for probing */
> +        if (tpm_passthrough_cuse_init(tpm_pt)) {
> +            goto err_close_tpmdev;
> +        }
> +    }
> +
>      if (tpm_util_test_tpmdev(tpm_pt->tpm_fd, &tpm_pt->tpm_version)) {
>          error_report("'%s' is not a TPM device.",
>                       tpm_pt->tpm_dev);
>          goto err_close_tpmdev;
>      }
>  
> +    if (have_cuse) {
> +        if (tpm_passthrough_cuse_check_caps(tpm_pt)) {
> +            goto err_close_tpmdev;
> +        }
> +    }
> +
> +
>      return 0;
>  
>   err_close_tpmdev:
> +    tpm_passthrough_shutdown(tpm_pt);
> +
>      qemu_close(tpm_pt->tpm_fd);
>      tpm_pt->tpm_fd = -1;
>  
> @@ -466,6 +675,8 @@ static void tpm_passthrough_destroy(TPMBackend *tb)
>  
>      tpm_backend_thread_end(&tpm_pt->tbt);
>  
> +    tpm_passthrough_shutdown(tpm_pt);
> +
>      qemu_close(tpm_pt->tpm_fd);
>      qemu_close(tpm_pt->cancel_fd);
>  
> @@ -539,3 +750,44 @@ static void tpm_passthrough_register(void)
>  }
>  
>  type_init(tpm_passthrough_register)
> +
> +/* CUSE TPM */
> +static const char *tpm_passthrough_cuse_create_desc(void)
> +{
> +    return "CUSE TPM backend driver";
> +}
> +
> +static const TPMDriverOps tpm_cuse_driver = {
> +    .type                     = TPM_TYPE_CUSE_TPM,
> +    .opts                     = tpm_passthrough_cmdline_opts,
> +    .desc                     = tpm_passthrough_cuse_create_desc,
> +    .create                   = tpm_passthrough_create,
> +    .destroy                  = tpm_passthrough_destroy,
> +    .init                     = tpm_passthrough_init,
> +    .startup_tpm              = tpm_passthrough_startup_tpm,
> +    .realloc_buffer           = tpm_passthrough_realloc_buffer,
> +    .reset                    = tpm_passthrough_reset,
> +    .had_startup_error        = tpm_passthrough_get_startup_error,
> +    .deliver_request          = tpm_passthrough_deliver_request,
> +    .cancel_cmd               = tpm_passthrough_cancel_cmd,
> +    .get_tpm_established_flag = tpm_passthrough_get_tpm_established_flag,
> +    .reset_tpm_established_flag = tpm_passthrough_reset_tpm_established_flag,
> +    .get_tpm_version          = tpm_passthrough_get_tpm_version,
> +};
> +
> +static const TypeInfo tpm_cuse_info = {
> +    .name = TYPE_TPM_CUSE,
> +    .parent = TYPE_TPM_BACKEND,
> +    .instance_size = sizeof(TPMPassthruState),
> +    .class_init = tpm_passthrough_class_init,
> +    .instance_init = tpm_passthrough_inst_init,
> +    .instance_finalize = tpm_passthrough_inst_finalize,
> +};
> +
> +static void tpm_cuse_register(void)
> +{
> +    type_register_static(&tpm_cuse_info);
> +    tpm_register_driver(&tpm_cuse_driver);
> +}
> +
> +type_init(tpm_cuse_register)
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 2e31733..e0ef212 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -3335,10 +3335,12 @@
>  # An enumeration of TPM types
>  #
>  # @passthrough: TPM passthrough type
> +# @cuse-tpm: CUSE TPM type
> +#            Since: 2.6
>  #
>  # Since: 1.5
>  ##
> -{ 'enum': 'TpmType', 'data': [ 'passthrough' ] }
> +{ 'enum': 'TpmType', 'data': [ 'passthrough', 'cuse-tpm' ] }
>  
>  ##
>  # @query-tpm-types:
> @@ -3367,6 +3369,17 @@
>                                               '*cancel-path' : 'str'} }
>  
>  ##
> +# @TPMCuseOptions:
> +#
> +# Information about the CUSE TPM type
> +#
> +# @path: string describing the path used for accessing the TPM device
> +#
> +# Since: 2.6
> +##
> +{ 'struct': 'TPMCuseOptions', 'data': { 'path' : 'str'}}
> +
> +##
>  # @TpmTypeOptions:
>  #
>  # A union referencing different TPM backend types' configuration options
> @@ -3376,7 +3389,8 @@
>  # Since: 1.5
>  ##
>  { 'union': 'TpmTypeOptions',
> -   'data': { 'passthrough' : 'TPMPassthroughOptions' } }
> +   'data': { 'passthrough' : 'TPMPassthroughOptions',
> +             'cuse-tpm' : 'TPMCuseOptions' } }
>  
>  ##
>  # @TpmInfo:
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 215d00d..6ea3e10 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -2650,7 +2650,10 @@ DEF("tpmdev", HAS_ARG, QEMU_OPTION_tpmdev, \
>      "-tpmdev passthrough,id=id[,path=path][,cancel-path=path]\n"
>      "                use path to provide path to a character device; default is /dev/tpm0\n"
>      "                use cancel-path to provide path to TPM's cancel sysfs entry; if\n"
> -    "                not provided it will be searched for in /sys/class/misc/tpm?/device\n",
> +    "                not provided it will be searched for in /sys/class/misc/tpm?/device\n"
> +    "-tpmdev cuse-tpm,id=id,path=path\n"
> +    "                use path to provide path to a character device to talk to the\n"
> +    "                TPM emulator providing a CUSE interface\n",
>      QEMU_ARCH_ALL)
>  STEXI
>  
> @@ -2659,8 +2662,8 @@ The general form of a TPM device option is:
>  
>  @item -tpmdev @var{backend} ,id=@var{id} [,@var{options}]
>  @findex -tpmdev
> -Backend type must be:
> -@option{passthrough}.
> +Backend type must be one of the following:
> +@option{passthrough}, @option{cuse-tpm}.
>  
>  The specific backend type will determine the applicable options.
>  The @code{-tpmdev} option creates the TPM backend and requires a
> @@ -2710,6 +2713,18 @@ To create a passthrough TPM use the following two options:
>  Note that the @code{-tpmdev} id is @code{tpm0} and is referenced by
>  @code{tpmdev=tpm0} in the device option.
>  
> +@item -tpmdev cuse-tpm, id=@var{id}, path=@var{path}
> +
> +(Linux-host only) Enable access to a TPM emulator with a CUSE interface.
> +
> +@option{path} specifies the path to the CUSE TPM character device.
> +
> +To create a backend device accessing the CUSE TPM emulator using /dev/vtpm
> +use the following two options:
> +@example
> +-tpmdev cuse-tpm,id=tpm0,path=/dev/vtpm -device tpm-tis,tpmdev=tpm0
> +@end example
> +
>  @end table
>  
>  ETEXI
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index 7b235ee..53f6d9e 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -3875,7 +3875,7 @@ Arguments: None
>  Example:
>  
>  -> { "execute": "query-tpm-types" }
> -<- { "return": [ "passthrough" ] }
> +<- { "return": [ "passthrough", "cuse-tpm" ] }
>  
>  EQMP
>  
> diff --git a/tpm.c b/tpm.c
> index 0a3e3d5..c05d5d9 100644
> --- a/tpm.c
> +++ b/tpm.c
> @@ -25,7 +25,7 @@ static QLIST_HEAD(, TPMBackend) tpm_backends =
>  
>  
>  #define TPM_MAX_MODELS      1
> -#define TPM_MAX_DRIVERS     1
> +#define TPM_MAX_DRIVERS     2
>  
>  static TPMDriverOps const *be_drivers[TPM_MAX_DRIVERS] = {
>      NULL,
> @@ -272,6 +272,15 @@ static TPMInfo *qmp_query_tpm_inst(TPMBackend *drv)
>              tpo->has_cancel_path = true;
>          }
>          break;
> +    case TPM_TYPE_CUSE_TPM: {
> +        TPMCuseOptions *tco = g_new0(TPMCuseOptions, 1);
> +        res->options->type = TPM_TYPE_OPTIONS_KIND_CUSE_TPM;
> +        res->options->u.cuse_tpm = tco;
> +        if (drv->path) {
> +            tco->path = g_strdup(drv->path);
> +        }
> +        break;
> +    }
>      case TPM_TYPE__MAX:
>          break;
>      }
> -- 
> 2.4.3

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-20 14:58 ` Daniel P. Berrange
@ 2016-01-20 15:23   ` Stefan Berger
       [not found]   ` <201601201523.u0KFNwOH000398@d01av04.pok.ibm.com>
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 15:23 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 1547 bytes --]

"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 09:58:39 AM:


> Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a 
> QEMU-external TPM
> 
> On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > The following series of patches extends TPM support with an
> > external TPM that offers a Linux CUSE (character device in userspace)
> > interface. This TPM lets each VM access its own private vTPM.
> 
> What is the backing store for this vTPM ? Are the vTPMs all
> multiplexed onto the host's physical TPM or is there something
> else going on ?

The vTPM writes its state into a plain file. In case the user started the 
vTPM, the user gets to choose the directory. In case of libvirt, libvirt 
sets up the directory and starts the vTPM with the directory as a 
parameter. The expectation for VMs (also containers) is that each VM can 
use the full set of TPM commands with the vTPM and due to how the TPM 
works, it cannot use the hardware TPM for that. SeaBIOS has been extended 
with TPM 1.2 support and initializes the vTPM in the same way it would 
initialize a hardware TPM.

Regards,
   Stefan

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 





* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:00   ` Daniel P. Berrange
@ 2016-01-20 15:31     ` Stefan Berger
       [not found]     ` <201601201532.u0KFW2q2019737@d03av03.boulder.ibm.com>
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 15:31 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: mst, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 3506 bytes --]

"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 AM:

> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> 
> On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> > From: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > 
> > Rather than integrating TPM functionality into QEMU directly
> > using the TPM emulation of libtpms, we now integrate an external
> > emulated TPM device. This device is expected to implement a Linux
> > CUSE interface (CUSE = character device in userspace).
> > 
> > QEMU talks to the CUSE TPM using much functionality of the
> > passthrough driver. For example, the TPM commands and responses
> > are sent to the CUSE TPM using the read()/write() interface.
> > However, some out-of-band control needs to be done using the CUSE
> > TPM's ioctls. The CUSE TPM currently defines and implements 15
> > different ioctls for controlling certain life-cycle aspects of
> > the emulated TPM. The ioctls can be regarded as a replacement for
> > direct function calls to a TPM emulator if the TPM were to be
> > directly integrated into QEMU.
> > 
> > One of the ioctls allows to get a bitmask of supported capabilities.
> > Each returned bit indicates which capabilities have been implemented.
> > An include file defining the various ioctls is added to QEMU.
> > 
> > The CUSE TPM and associated tools can be found here:
> > 
> > https://github.com/stefanberger/swtpm
> > 
> > (please use the latest version)
> > 
> > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > 
> > # terminate previously started CUSE TPM
> > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > 
> > # start CUSE TPM
> > /usr/bin/swtpm_cuse -n vtpm-test
> 
> IIUC, there needs to be one swtpm_cuse process running per QEMU
> TPM device ?  This makes me wonder why we need this separate

Correct. See reason in answer to previous email.

> process at all - it would make sense if there was a single
> swtpm_cuse shared across all QEMU's, but if there's one per
> QEMU device, it feels like it'd be much simpler to just have
> the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

> of having to manage all these extra processes alongside QEMU
> which can add a fair bit of mgmt overhead.

For libvirt, yes, there is mgmt. overhead but it's quite transparent. So 
libvirt is involved in the creation of the directory for the vTPMs, the 
command line creation for the external process as well as the startup of 
the process, but otherwise it's not a big issue (anymore). I have patches 
that do just that for an older libvirt version, along with setting up 
SELinux labels, cgroups etc. for each VM that wants an attached vTPM.

Regards,
   Stefan

> 
> > 
> > QEMU can then be started using the following parameters:
> > 
> > qemu-system-x86_64 \
> >    [...] \
> >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >    [...]
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 





* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:20   ` Michael S. Tsirkin
@ 2016-01-20 15:36     ` Stefan Berger
       [not found]     ` <201601201536.u0KFanwG004844@d01av04.pok.ibm.com>
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 15:36 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 1479 bytes --]

"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:

> From: "Michael S. Tsirkin" <mst@redhat.com>

> > 
> > The CUSE TPM and associated tools can be found here:
> > 
> > https://github.com/stefanberger/swtpm
> > 
> > (please use the latest version)
> > 
> > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > 
> > # terminate previously started CUSE TPM
> > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > 
> > # start CUSE TPM
> > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > QEMU can then be started using the following parameters:
> > 
> > qemu-system-x86_64 \
> >    [...] \
> >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >    [...]
> > 
> > 
> > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > Cc: Eric Blake <eblake@redhat.com>
> 
> Before we add a dependency on this interface,
> I'd rather see this interface supported in kernel
> and not just in CUSE.

For using the single hardware TPM, we have the passthrough type. Its 
usage is limited.

CUSE extends the TPM character device interface with ioctl's. Behind the 
character device we can implement a TPM 1.2 and a TPM 2. Both TPM 
implementations require large amounts of code, which I don't think should 
go into the Linux kernel itself. So I don't know who would implement this 
interface inside the Linux kernel.

  Stefan





* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
       [not found]   ` <201601201523.u0KFNwOH000398@d01av04.pok.ibm.com>
@ 2016-01-20 15:42     ` Daniel P. Berrange
  2016-01-20 19:51       ` Stefan Berger
       [not found]       ` <OF1010A111.39918A93-ON00257F40.006CA5ED-85257F40.006D2225@LocalDomain>
  0 siblings, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 15:42 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 09:58:39 AM:
> 
> 
> > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a 
> > QEMU-external TPM
> > 
> > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > The following series of patches extends TPM support with an
> > > external TPM that offers a Linux CUSE (character device in userspace)
> > > interface. This TPM lets each VM access its own private vTPM.
> > 
> > What is the backing store for this vTPM ? Are the vTPMs all
> > multiplexed onto the host's physical TPM or is there something
> > else going on ?
> 
> The vTPM writes its state into a plain file. In case the user started the 
> vTPM, the user gets to choose the directory. In case of libvirt, libvirt 
> sets up the directory and starts the vTPM with the directory as a 
> parameter. The expectation for VMs (also containers) is that each VM can 
> use the full set of TPM commands with the vTPM and due to how the TPM 
> works, it cannot use the hardware TPM for that. SeaBIOS has been extended 
> with TPM 1.2 support and initializes the vTPM in the same way it would 
> initialize a hardware TPM.

So if its using a plain file, then when snapshotting VMs we have to
do full copies of the file and keep them all around in sync with
the disk snapshots. By not having this functionality in QEMU we don't
immediately have a way to use qcow2 for the vTPM file backing store
to deal with snapshot management. The vTPM needs around snapshotting
feel fairly similar to the NVRAM needs, so it would be desiralbe to
have a ability to do a consistent thing for both.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
       [not found]     ` <201601201532.u0KFW2q2019737@d03av03.boulder.ibm.com>
@ 2016-01-20 15:46       ` Daniel P. Berrange
  2016-01-20 15:54         ` Stefan Berger
  2016-01-28 13:15       ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 15:46 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 AM:
> 
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> > 
> > On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> > > From: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > 
> > > Rather than integrating TPM functionality into QEMU directly
> > > using the TPM emulation of libtpms, we now integrate an external
> > > emulated TPM device. This device is expected to implement a Linux
> > > CUSE interface (CUSE = character device in userspace).
> > > 
> > > QEMU talks to the CUSE TPM using much functionality of the
> > > passthrough driver. For example, the TPM commands and responses
> > > are sent to the CUSE TPM using the read()/write() interface.
> > > However, some out-of-band control needs to be done using the CUSE
> > > TPM's ioctls. The CUSE TPM currently defines and implements 15
> > > different ioctls for controlling certain life-cycle aspects of
> > > the emulated TPM. The ioctls can be regarded as a replacement for
> > > direct function calls to a TPM emulator if the TPM were to be
> > > directly integrated into QEMU.
> > > 
> > > One of the ioctls allows to get a bitmask of supported capabilities.
> > > Each returned bit indicates which capabilities have been implemented.
> > > An include file defining the various ioctls is added to QEMU.
> > > 
> > > The CUSE TPM and associated tools can be found here:
> > > 
> > > https://github.com/stefanberger/swtpm
> > > 
> > > (please use the latest version)
> > > 
> > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > 
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > 
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > TPM device ?  This makes me wonder why we need this separate
> 
> Correct. See reason in answer to previous email.
> 
> > process at all - it would make sense if there was a single
> > swtpm_cuse shared across all QEMU's, but if there's one per
> > QEMU device, it feels like it'd be much simpler to just have
> > the functionality linked in QEMU.  That avoids the problem
> 
> I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't recall
the results of the discussion. Can you elaborate on why it was
rejected as an approach ? It just doesn't make much sense to me
to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO, adding way
more complexity and thus scope for failure.


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:46       ` Daniel P. Berrange
@ 2016-01-20 15:54         ` Stefan Berger
  2016-01-20 16:03           ` Michael S. Tsirkin
  2016-01-20 16:22           ` Daniel P. Berrange
  0 siblings, 2 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 15:54 UTC (permalink / raw)
  To: Daniel P. Berrange, Stefan Berger
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
>> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 AM:
>>
>>
>>> process at all - it would make sense if there was a single
>>> swtpm_cuse shared across all QEMU's, but if there's one per
>>> QEMU device, it feels like it'd be much simpler to just have
>>> the functionality linked in QEMU.  That avoids the problem
>> I tried having it linked in QEMU before. It was basically rejected.
> I remember an impl you did many years(?) ago now, but don't recall
> the results of the discussion. Can you elaborate on why it was
> rejected as an approach ? It just doesn't make much sense to me
> to have to create an external daemon, a CUSE device and comms
> protocol, simply to be able to read/write a plain file containing
> the TPM state. It's massive over-engineering IMHO, adding way
> more complexity and thus scope for failure.

The TPM 1.2 implementation adds tens of thousands of lines of code. The 
TPM 2 implementation is in the same range. The concern was having this 
code right in the QEMU address space. It's big, it can have bugs, so we 
don't want it to harm QEMU. So we now put this into an external process 
implemented by the swtpm project that builds on libtpms which provides 
TPM 1.2 functionality (to be extended with TPM 2). We cannot call APIs 
of libtpms directly anymore, so we need a control channel, which is 
implemented through ioctls on the CUSE device.

    Stefan
>
>
> Regards,
> Daniel


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
       [not found]     ` <201601201536.u0KFanwG004844@d01av04.pok.ibm.com>
@ 2016-01-20 15:58       ` Michael S. Tsirkin
  2016-01-20 16:06         ` Stefan Berger
  2016-01-20 16:15         ` Daniel P. Berrange
  0 siblings, 2 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-20 15:58 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
> 
> > From: "Michael S. Tsirkin" <mst@redhat.com>
> 
> > >
> > > The CUSE TPM and associated tools can be found here:
> > >
> > > https://github.com/stefanberger/swtpm
> > >
> > > (please use the latest version)
> > >
> > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > >
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > >
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > >
> > > QEMU can then be started using the following parameters:
> > >
> > > qemu-system-x86_64 \
> > >    [...] \
> > >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > >    [...]
> > >
> > >
> > > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > Cc: Eric Blake <eblake@redhat.com>
> >
> > Before we add a dependency on this interface,
> > I'd rather see this interface supported in kernel
> > and not just in CUSE.
> 
> For using the single hardware TPM, we have the passthrough type. Its usage is
> limited.
> 
> CUSE extends the TPM character device interface with ioctl's. Behind the
> character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> implementations require large amounts of code, which I don't think should go
> into the Linux kernel itself. So I don't know who would implement this
> interface inside the Linux kernel.
> 
>   Stefan
> 

BTW I'm not talking about the code - I'm talking about the interfaces here.

One way would be to add support for these interfaces in the kernel.

Maybe others can be replaced with QMP events so management
can take the necessary action.

As long as this is not the case, I suspect this code will have to stay
out of tree :( We can't depend on interfaces provided solely by cuse
devices on github.



-- 
MST


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:54         ` Stefan Berger
@ 2016-01-20 16:03           ` Michael S. Tsirkin
  2016-01-20 16:13             ` Stefan Berger
  2016-01-20 16:22           ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-20 16:03 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 AM:
> >>
> >>
> >>>process at all - it would make sense if there was a single
> >>>swtpm_cuse shared across all QEMU's, but if there's one per
> >>>QEMU device, it feels like it'd be much simpler to just have
> >>>the functionality linked in QEMU.  That avoids the problem
> >>I tried having it linked in QEMU before. It was basically rejected.
> >I remember an impl you did many years(?) ago now, but don't recall
> >the results of the discussion. Can you elaborate on why it was
> >rejected as an approach ? It just doesn't make much sense to me
> >to have to create an external daemon, a CUSE device and comms
> >protocol, simply to be able to read/write a plain file containing
> >the TPM state. It's massive over-engineering IMHO, adding way
> >more complexity and thus scope for failure
> 
> The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> implementation is in the same range. The concern was having this code right
> in the QEMU address space. It's big, it can have bugs, so we don't want it
> to harm QEMU. So we now put this into an external process implemented by the
> swtpm project that builds on libtpms which provides TPM 1.2 functionality
> (to be extended with TPM 2). We cannot call APIs of libtpms directly
> anymore, so we need a control channel, which is implemented through ioctls
> on the CUSE device.
> 
>    Stefan

If that's the only reason for it, you can package it as part of QEMU
source, run it as a sub-process.

> >
> >
> >Regards,
> >Daniel


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:58       ` Michael S. Tsirkin
@ 2016-01-20 16:06         ` Stefan Berger
  2016-01-20 18:54           ` Michael S. Tsirkin
  2016-01-20 16:15         ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 16:06 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 2865 bytes --]

"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:

> From: "Michael S. Tsirkin" <mst@redhat.com>

> 
> On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
> > 
> > > From: "Michael S. Tsirkin" <mst@redhat.com>
> > 
> > > >
> > > > The CUSE TPM and associated tools can be found here:
> > > >
> > > > https://github.com/stefanberger/swtpm
> > > >
> > > > (please use the latest version)
> > > >
> > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > >
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > >
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > >
> > > > QEMU can then be started using the following parameters:
> > > >
> > > > qemu-system-x86_64 \
> > > >    [...] \
> > > >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > > >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >    [...]
> > > >
> > > >
> > > > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > > Cc: Eric Blake <eblake@redhat.com>
> > >
> > > Before we add a dependency on this interface,
> > > I'd rather see this interface supported in kernel
> > > and not just in CUSE.
> > 
> > For using the single hardware TPM, we have the passthrough type. Its usage is
> > limited.
> > 
> > CUSE extends the TPM character device interface with ioctl's. Behind the
> > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > implementations require large amounts of code, which I don't think should go
> > into the Linux kernel itself. So I don't know who would implement this
> > interface inside the Linux kernel.
> > 
> >   Stefan
> > 
> 
> BTW I'm not talking about the code - I'm talking about the interfaces here.
> 
> One way would be to add support for these interfaces in the kernel.
> 
> Maybe others can be replaced with QMP events so management
> can take the necessary action.
> 
> As long as this is not the case, I suspect this code will have to stay
> out of tree :( We can't depend on interfaces provided solely by cuse
> devices on github.

Why is that? I know that the existing ioctls cannot be modified anymore 
once QEMU accepts the code. So I don't understand it. Some of the ioctls 
are only useful when emulating a hardware device, so there's no need for 
them in a kernel interface unless one was to put the vTPM code into the 
Linux kernel, but I don't see that this is happening. What is better about 
a kernel interface versus one implemented by a project on github assuming 
that the existing ioctls are stable? What is the real reason here?

   Stefan


> 
> 
> 
> -- 
> MST
> 



[-- Attachment #2: Type: text/html, Size: 3941 bytes --]


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 16:03           ` Michael S. Tsirkin
@ 2016-01-20 16:13             ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 16:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 2606 bytes --]

"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 11:03:11 AM:

> From: "Michael S. Tsirkin" <mst@redhat.com>

> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> 
> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 AM:
> > >>
> > >>
> > >>>process at all - it would make sense if there was a single
> > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > >>>QEMU device, it feels like it'd be much simpler to just have
> > >>>the functionality linked in QEMU.  That avoids the problem
> > >>I tried having it linked in QEMU before. It was basically rejected.
> > >I remember an impl you did many years(?) ago now, but don't recall
> > >the results of the discussion. Can you elaborate on why it was
> > >rejected as an approach ? It just doesn't make much sense to me
> > >to have to create an external daemon, a CUSE device and comms
> > >protocol, simply to be able to read/write a plain file containing
> > >the TPM state. Its massive over engineering IMHO and adding way
> > >more complexity and thus scope for failure
> > 
> > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > implementation is in the same range. The concern was having this code right
> > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > to harm QEMU. So we now put this into an external process implemented by the
> > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > anymore, so we need a control channel, which is implemented through ioctls
> > on the CUSE device.
> > 
> >    Stefan
> 
> If that's the only reason for it, you can package it as part of QEMU
> source, run it as a sub-process.


I am packaging libtpms in Fedora. Once we have the CUSE interface in QEMU, 
I would package the swtpm project for Fedora as well. Both projects have 
at least been prepared for Debian packaging also.

https://github.com/stefanberger/libtpms
https://github.com/stefanberger/swtpm

The 'dist' directory has the spec file.

If I were to combine things, it would be at the libvirt layer, introducing an
RPM dependency there.

I doubt I will have the possibility to go back to integrating it directly
into QEMU.

Regards,
   Stefan



[-- Attachment #2: Type: text/html, Size: 3512 bytes --]


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:58       ` Michael S. Tsirkin
  2016-01-20 16:06         ` Stefan Berger
@ 2016-01-20 16:15         ` Daniel P. Berrange
  1 sibling, 0 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 16:15 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

On Wed, Jan 20, 2016 at 05:58:02PM +0200, Michael S. Tsirkin wrote:
> On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
> > 
> > > From: "Michael S. Tsirkin" <mst@redhat.com>
> > 
> > > >
> > > > The CUSE TPM and associated tools can be found here:
> > > >
> > > > https://github.com/stefanberger/swtpm
> > > >
> > > > (please use the latest version)
> > > >
> > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > >
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > >
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > >
> > > > QEMU can then be started using the following parameters:
> > > >
> > > > qemu-system-x86_64 \
> > > >    [...] \
> > > >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test
> > \
> > > >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >    [...]
> > > >
> > > >
> > > > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > > Cc: Eric Blake <eblake@redhat.com>
> > >
> > > Before we add a dependency on this interface,
> > > I'd rather see this interface supported in kernel
> > > and not just in CUSE.
> > 
> > For using the single hardware TPM, we have the passthrough type. It's usage is
> > limited.
> > 
> > CUSE extends the TPM character device interface with ioctl's. Behind the
> > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > implementations require large amounts of code, which I don't think should go
> > into the Linux kernel itself. So I don't know who would implement this
> > interface inside the Linux kernel.
> > 
> >   Stefan
> > 
> 
> BTW I'm not talking about the code - I'm talking about the interfaces here.
> 
> One way would be to add support for these interface support in the kernel.
> 
> Maybe others can be replaced with QMP events so management
> can take the necessary action.
> 
> As long as this is not the case, I suspect this code will have to stay
> out of tree :( We can't depend on interfaces provided solely by cuse
> devices on github.

The kernel already has a userspace device interface for TPMs, doesn't
it? It is what our existing 'tpm-passthrough' backend in QEMU surely
uses.

If swtpm is going to the trouble of providing device node emulation
with CUSE, I would have thought it could emulate the same interface
as the existing kernel TPM device nodes, thus removing the need for
any extra driver in QEMU ?  Otherwise it doesn't seem like there's
much point in using CUSE, as opposed to a general userspace RPC
protocol that doesn't need kernel support at all.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 15:54         ` Stefan Berger
  2016-01-20 16:03           ` Michael S. Tsirkin
@ 2016-01-20 16:22           ` Daniel P. Berrange
  2016-01-21 11:36             ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-20 16:22 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
> >>AM:
> >>
> >>
> >>>process at all - it would make sense if there was a single
> >>>swtpm_cuse shared across all QEMU's, but if there's one per
> >>>QEMU device, it feels like it'd be much simpler to just have
> >>>the functionality linked in QEMU.  That avoids the problem
> >>I tried having it linked in QEMU before. It was basically rejected.
> >I remember an impl you did many years(?) ago now, but don't recall
> >the results of the discussion. Can you elaborate on why it was
> >rejected as an approach ? It just doesn't make much sense to me
> >to have to create an external daemon, a CUSE device and comms
> >protocol, simply to be able to read/write a plain file containing
> >the TPM state. Its massive over engineering IMHO and adding way
> >more complexity and thus scope for failure
> 
> The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> implementation is in the same range. The concern was having this code right
> in the QEMU address space. It's big, it can have bugs, so we don't want it
> to harm QEMU. So we now put this into an external process implemented by the
> swtpm project that builds on libtpms which provides TPM 1.2 functionality
> (to be extended with TPM 2). We cannot call APIs of libtpms directly
> anymore, so we need a control channel, which is implemented through ioctls
> on the CUSE device.

Ok, the security separation concern does make some sense. The use of CUSE
still seems fairly questionable to me. CUSE makes sense if you want to
provide a drop-in replacement for the kernel TPM device driver, which
would avoid the need for a new QEMU backend. If you're not emulating an existing
kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
transport between 2 userspace processes.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 16:06         ` Stefan Berger
@ 2016-01-20 18:54           ` Michael S. Tsirkin
  2016-01-20 21:25             ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-20 18:54 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:
> 
> > From: "Michael S. Tsirkin" <mst@redhat.com>
> 
> >
> > On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
> > >
> > > > From: "Michael S. Tsirkin" <mst@redhat.com>
> > >
> > > > >
> > > > > The CUSE TPM and associated tools can be found here:
> > > > >
> > > > > https://github.com/stefanberger/swtpm
> > > > >
> > > > > (please use the latest version)
> > > > >
> > > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > > >
> > > > > # terminate previously started CUSE TPM
> > > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > > >
> > > > > # start CUSE TPM
> > > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > > >
> > > > > QEMU can then be started using the following parameters:
> > > > >
> > > > > qemu-system-x86_64 \
> > > > >    [...] \
> > > > >         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > > > >         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > > >    [...]
> > > > >
> > > > >
> > > > > Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > > > Cc: Eric Blake <eblake@redhat.com>
> > > >
> > > > Before we add a dependency on this interface,
> > > > I'd rather see this interface supported in kernel
> > > > and not just in CUSE.
> > >
> > > For using the single hardware TPM, we have the passthrough type.
> > It's usage is
> > > limited.
> > >
> > > CUSE extends the TPM character device interface with ioctl's. Behind the
> > > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > > implementations require large amounts of code, which I don't think should go
> > > into the Linux kernel itself. So I don't know who would implement this
> > > interface inside the Linux kernel.
> > >
> > >   Stefan
> > >
> >
> > BTW I'm not talking about the code - I'm talking about the interfaces here.
> >
> > One way would be to add support for these interface support in the kernel.
> >
> > Maybe others can be replaced with QMP events so management
> > can take the necessary action.
> >
> > As long as this is not the case, I suspect this code will have to stay
> > out of tree :( We can't depend on interfaces provided solely by cuse
> > devices on github.
> 
> Why is that? I know that the existing ioctls cannot be modified anymore once
> QEMU accepts the code. So I don't understand it. Some of the ioctls are only
> useful when emulating a hardware device,

Maybe they can be replaced with QMP events?
These could be emitted unconditionally, and ignored
by management in passthrough case.

> so there's no need for them in a
> kernel interface unless one was to put the vTPM code into the Linux kernel, but
> I don't see that this is happening. What is better about a kernel interface
> versus one implemented by a project on github assuming that the existing ioctls
> are stable? What is the real reason here?
> 
>    Stefan
> 

That someone went to the trouble of reviewing the interface for
long-term maintainability, portability etc. That it obeys some existing
standards for API use, coding style etc and will continue to.

In other words, kernel is already a dependency for QEMU.

> >
> >
> >
> > --
> > MST
> >
> 


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-20 15:42     ` Daniel P. Berrange
@ 2016-01-20 19:51       ` Stefan Berger
       [not found]       ` <OF1010A111.39918A93-ON00257F40.006CA5ED-85257F40.006D2225@LocalDomain>
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 19:51 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 2587 bytes --]

"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:42:09 AM:

> 
> On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 09:58:39 AM:
> > 
> > 
> > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a 
> > > QEMU-external TPM
> > > 
> > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > The following series of patches extends TPM support with an
> > > > external TPM that offers a Linux CUSE (character device in userspace)
> > > > interface. This TPM lets each VM access its own private vTPM.
> > > 
> > > What is the backing store for this vTPM ? Are the vTPMs all
> > > multiplexed onto the host's physical TPM or is there something
> > > else going on ?
> > 
> > The vTPM writes its state into a plain file. In case the user started the
> > vTPM, the user gets to choose the directory. In case of libvirt, libvirt
> > sets up the directory and starts the vTPM with the directory as a
> > parameter. The expectation for VMs (also containers) is that each VM can
> > use the full set of TPM commands with the vTPM and due to how the TPM
> > works, it cannot use the hardware TPM for that. SeaBIOS has been extended
> > with TPM 1.2 support and initializes the vTPM in the same way it would
> > initialize a hardware TPM.
> 
> So if its using a plain file, then when snapshotting VMs we have to
> do full copies of the file and keep them all around in sync with
> the disk snapshots. By not having this functionality in QEMU we don't
> immediately have a way to use qcow2 for the vTPM file backing store
> to deal with snapshot management. The vTPM needs around snapshotting
> feel fairly similar to the NVRAM needs, so it would be desirable to
> have the ability to do a consistent thing for both.

The plain file serves as the current state of the TPM. In case of 
migration, suspend, snapshotting, the vTPM state blobs are retrieved from 
the vTPM using ioctls and in case of a snapshot they are written into the 
QCoW2. Upon resume the state blobs are set in the vTPM. It is working as it
is.

   Stefan


> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 



[-- Attachment #2: Type: text/html, Size: 4009 bytes --]
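A minimal sketch of the suspend/snapshot/resume flow described in the message above, with the vTPM ioctl calls replaced by in-memory stubs. The function names and the blob layout are assumptions chosen for illustration, not the real swtpm control interface:

```python
# The vTPM's current state, normally kept by the external swtpm process.
vtpm_state = {"permanent": b"owner-data", "volatile": b"session-data"}

def get_state_blobs():
    """Stand-in for retrieving the vTPM's state blobs via ioctls."""
    return dict(vtpm_state)

def set_state_blobs(blobs):
    """Stand-in for pushing saved state blobs back into the vTPM."""
    vtpm_state.update(blobs)

def snapshot():
    """On snapshot, the blobs travel with the VM state (e.g. into qcow2)."""
    return get_state_blobs()

def resume(saved):
    """On resume or incoming migration, the saved blobs are restored."""
    set_state_blobs(saved)

saved = snapshot()                                # blobs stored with snapshot
vtpm_state["volatile"] = b"changed-after-snapshot"  # VM keeps running
resume(saved)                                     # rollback restores old blobs
```

The point of the stubs is only to show the direction of data flow: out of the vTPM at snapshot time, back into it on resume.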


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
       [not found]       ` <OF1010A111.39918A93-ON00257F40.006CA5ED-85257F40.006D2225@LocalDomain>
@ 2016-01-20 20:16         ` Stefan Berger
  2016-01-21 11:40           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 20:16 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 3545 bytes --]

Stefan Berger/Watson/IBM wrote on 01/20/2016 02:51:58 PM:

> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:42:09 
AM:
> 
> > 
> > On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
09:58:39 
> > > AM:
> > > 
> > > 
> > > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a
> > > > QEMU-external TPM
> > > > 
> > > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > > The following series of patches extends TPM support with an
> > > > > external TPM that offers a Linux CUSE (character device in userspace)
> > > > > interface. This TPM lets each VM access its own private vTPM.
> > > > 
> > > > What is the backing store for this vTPM ? Are the vTPMs all
> > > > multiplexed onto the host's physical TPM or is there something
> > > > else going on ?
> > > 
> > > The vTPM writes its state into a plain file. In case the user started the
> > > vTPM, the user gets to choose the directory. In case of libvirt, libvirt
> > > sets up the directory and starts the vTPM with the directory as a
> > > parameter. The expectation for VMs (also containers) is that each VM can
> > > use the full set of TPM commands with the vTPM and due to how the TPM
> > > works, it cannot use the hardware TPM for that. SeaBIOS has been extended
> > > with TPM 1.2 support and initializes the vTPM in the same way it would
> > > initialize a hardware TPM.
> > 
> > So if its using a plain file, then when snapshotting VMs we have to
> > do full copies of the file and keep them all around in sync with
> > the disk snapshots. By not having this functionality in QEMU we don't
> > immediately have a way to use qcow2 for the vTPM file backing store
> > to deal with snapshot management. The vTPM needs around snapshotting
> > feel fairly similar to the NVRAM needs, so it would be desirable to
> > have the ability to do a consistent thing for both.
> 
> The plain file serves as the current state of the TPM. In case of 
> migration, suspend, snapshotting, the vTPM state blobs are retrieved
> from the vTPM using ioctls and in case of a snapshot they are 
> written into the QCoW2. Upon resume the state blobs are set in the 
> vTPM. It is working as it is.

There is one issue in case of resume of a snapshot. If the permanent state 
of the TPM is modified during snapshotting, like ownership is taken of the 
TPM, the state, including the owner password, is written into the plain 
file. Then the VM is shut down. Once it is restarted (not a resume from a 
snapshot), the TPM's state will reflect what was done during the 
run of that snapshot. So this is likely undesirable. Now the only way 
around this seems to be that one needs to know the reason why the 
state blobs were pushed into the vTPM. In case of a snapshot, the writing 
of the permanent state into a file may need to be suppressed, while on a 
VM resume and resume from migration operation it needs to be written into 
the TPM's state file.

   Stefan

> 
>    Stefan
> 
> 
> > 
> > Regards,
> > Daniel
> > -- 
> > |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org              -o-             http://virt-manager.org :|
> > |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> > |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> > 



[-- Attachment #2: Type: text/html, Size: 5232 bytes --]
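The distinction raised in the message above, that the vTPM would need to know *why* state blobs are being pushed back into it, can be sketched as a reason flag on the restore path. All names here are hypothetical illustrations, not swtpm API:

```python
# Record of permanent-state writes to the (simulated) TPM state file.
persisted = []

def set_state_blobs(blobs, reason):
    """Restore state blobs into the vTPM; persist permanent state only
    when the restore comes from a real resume or migration, not from a
    snapshot rollback (which must not clobber the state file)."""
    if reason in ("resume", "migration"):
        persisted.append(blobs["permanent"])  # write-through to state file
    elif reason == "snapshot":
        pass                                  # leave the state file untouched

set_state_blobs({"permanent": "owner-taken"}, reason="snapshot")
# nothing persisted: the snapshot run's ownership change stays transient
set_state_blobs({"permanent": "owner-taken"}, reason="resume")
# persisted: a genuine resume writes the permanent state back to disk
```

This is exactly the suppression behaviour the message argues for: the same restore operation, gated on the reason it was invoked.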


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 18:54           ` Michael S. Tsirkin
@ 2016-01-20 21:25             ` Stefan Berger
  2016-01-21  5:08               ` Michael S. Tsirkin
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-01-20 21:25 UTC (permalink / raw)
  To: Michael S. Tsirkin, Stefan Berger
  Cc: hagen.lauer, jb613w, qemu-devel, quan.xu, silviu.vlasceanu

On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
>> "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:
>>
>>> From: "Michael S. Tsirkin" <mst@redhat.com>
>>> On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
>>>> "Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
>>>>
>>>>> From: "Michael S. Tsirkin" <mst@redhat.com>
>>>>>> The CUSE TPM and associated tools can be found here:
>>>>>>
>>>>>> https://github.com/stefanberger/swtpm
>>>>>>
>>>>>> (please use the latest version)
>>>>>>
>>>>>> To use the external CUSE TPM, the CUSE TPM should be started as
>> follows:
>>>>>> # terminate previously started CUSE TPM
>>>>>> /usr/bin/swtpm_ioctl -s /dev/vtpm-test
>>>>>>
>>>>>> # start CUSE TPM
>>>>>> /usr/bin/swtpm_cuse -n vtpm-test
>>>>>>
>>>>>> QEMU can then be started using the following parameters:
>>>>>>
>>>>>> qemu-system-x86_64 \
>>>>>>     [...] \
>>>>>>          -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
>>> dev/vtpm-test
>>>> \
>>>>>>          -device tpm-tis,id=tpm0,tpmdev=tpm0 \
>>>>>>     [...]
>>>>>>
>>>>>>
>>>>>> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
>>>>>> Cc: Eric Blake <eblake@redhat.com>
>>>>> Before we add a dependency on this interface,
>>>>> I'd rather see this interface supported in kernel
>>>>> and not just in CUSE.
>>>> For using the single hardware TPM, we have the passthrough type.
>>> It's usage is
>>>> limited.
>>>>
>>>> CUSE extends the TPM character device interface with ioctl's. Behind the
>>>> character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> >>>> implementations require large amounts of code, which I don't think should go
>>>> into the Linux kernel itself. So I don't know who would implement this
>>>> interface inside the Linux kernel.
>>>>
>>>>    Stefan
>>>>
>>> BTW I'm not talking about the code - I'm talking about the interfaces here.
>>>
>>> One way would be to add support for these interface support in the kernel.
>>>
>>> Maybe others can be replaced with QMP events so management
>>> can take the necessary action.
>>>
>>> As long as this is not the case, I suspect this code will have to stay
>>> out of tree :( We can't depend on interfaces provided solely by cuse
>>> devices on github.
>> Why is that? I know that the existing ioctls cannot be modified anymore once
>> QEMU accepts the code. So I don't understand it. Some of the ioctls are only
>> useful when emulating a hardware device,
> Maybe they can be replaced with QMP events?
> These could be emitted unconditionally, and ignored
> by management in passthrough case.
>
>> so there's no need for them in a
>> kernel interface unless one was to put the vTPM code into the Linux kernel, but
>> I don't see that this is happening. What is better about a kernel interface
>> versus one implemented by a project on github assuming that the existing ioctls
>> are stable? What is the real reason here?
>>
>>     Stefan
>>
> That someone went to the trouble of reviewing the interface for
> long-term maintainability, portability etc. That it obeys some existing
> standards for API use, coding style etc and will continue to.

The same applies to the libtpms and swtpm projects as well, I suppose. 
If someone wants to join them, let me know.

As stated, we will keep the existing ioctls stable once integrated, but 
will adapt where necessary before that.

>
> In other words, kernel is already a dependency for QEMU.

I don't see vTPM going into the kernel, at least I don't know of anyone 
trying to do that.

    Stefan


>>>
>>>
>>> --
>>> MST
>>>


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 21:25             ` Stefan Berger
@ 2016-01-21  5:08               ` Michael S. Tsirkin
  2016-01-21  5:41                 ` Xu, Quan
  2016-01-21 12:09                 ` Stefan Berger
  0 siblings, 2 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-21  5:08 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> >>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:
> >>
> >>>From: "Michael S. Tsirkin" <mst@redhat.com>
> >>>On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> >>>>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58 AM:
> >>>>
> >>>>>From: "Michael S. Tsirkin" <mst@redhat.com>
> >>>>>>The CUSE TPM and associated tools can be found here:
> >>>>>>
> >>>>>>https://github.com/stefanberger/swtpm
> >>>>>>
> >>>>>>(please use the latest version)
> >>>>>>
> >>>>>>To use the external CUSE TPM, the CUSE TPM should be started as
> >>follows:
> >>>>>># terminate previously started CUSE TPM
> >>>>>>/usr/bin/swtpm_ioctl -s /dev/vtpm-test
> >>>>>>
> >>>>>># start CUSE TPM
> >>>>>>/usr/bin/swtpm_cuse -n vtpm-test
> >>>>>>
> >>>>>>QEMU can then be started using the following parameters:
> >>>>>>
> >>>>>>qemu-system-x86_64 \
> >>>>>>    [...] \
> >>>>>>         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> >>>dev/vtpm-test
> >>>>\
> >>>>>>         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >>>>>>    [...]
> >>>>>>
> >>>>>>
> >>>>>>Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> >>>>>>Cc: Eric Blake <eblake@redhat.com>
> >>>>>Before we add a dependency on this interface,
> >>>>>I'd rather see this interface supported in kernel
> >>>>>and not just in CUSE.
> >>>>For using the single hardware TPM, we have the passthrough type.
> >>>It's usage is
> >>>>limited.
> >>>>
> >>>>CUSE extends the TPM character device interface with ioctl's. Behind the
> >>>>character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> >>>>implementations require large amounts of code, which I don't thinkshould go
> >>>>into the Linux kernel itself. So I don't know who would implement this
> >>>>interface inside the Linux kernel.
> >>>>
> >>>>   Stefan
> >>>>
> >>>BTW I'm not talking about the code - I'm talking about the interfaces here.
> >>>
> >>>One way would be to add support for these interface support in the kernel.
> >>>
> >>>Maybe others can be replaced with QMP events so management
> >>>can take the necessary action.
> >>>
> >>>As long as this is not the case, I suspect this code will have to stay
> >>>out of tree :( We can't depend on interfaces provided solely by cuse
> >>>devices on github.
> >>Why is that? I know that the existing ioctls cannot be modified anymore once
> >>QEMU accepts the code. So I don't understand it. Some of the ioctls are only
> >>useful when emulating a hardware device,
> >Maybe they can be replaced with QMP events?
> >These could be emitted unconditionally, and ignored
> >by management in passthrough case.
> >
> >>so there's no need for them in a
> >>kernel interface unless one was to put the vTPM code into the Linux kernel, but
> >>I don't see that this is happening. What is better about a kernel interface
> >>versus one implemented by a project on github assuming that the existing ioctls
> >>are stable? What is the real reason here?
> >>
> >>    Stefan
> >>
> >That someone went to the trouble of reviewing the interface for
> >long-term maintainability, portability etc. That it obeys some existing
> >standards for API use, coding style etc and will continue to.
> 
> The same applies to the libtpms and swtpm projects as well, I suppose. If
> someone wants to join them, let me know.
> 
> As stated, we will keep the existing ioctl stables once integrated but will
> adapt where necessary before that.
> >
> >In other words, kernel is already a dependency for QEMU.
> 
> I don't see vTPM going into the kernel, at least I don't know of anyone
> trying to do that.
> 
>    Stefan
> 

Well that was just one idea, it's up to you guys.
But while modular multi-process QEMU for security
might happen in future, I don't see us doing this
by moving large parts of QEMU into cuse devices,
and talking to these through ioctls.

> >>>
> >>>
> >>>--
> >>>MST
> >>>


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-21  5:08               ` Michael S. Tsirkin
@ 2016-01-21  5:41                 ` Xu, Quan
  2016-01-21  9:19                   ` Michael S. Tsirkin
  2016-01-21 12:09                 ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Xu, Quan @ 2016-01-21  5:41 UTC (permalink / raw)
  To: Michael S. Tsirkin, Stefan Berger, hagen.lauer
  Cc: jb613w, Stefan Berger, qemu-devel, silviu.vlasceanu

> On January 21, 2016 at 1:08pm, <mst@redhat.com> wrote:
> On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> > On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> > >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> > >>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:
> > >>
> > >>>From: "Michael S. Tsirkin" <mst@redhat.com> On Wed, Jan 20, 2016 at
> > >>>10:36:41AM -0500, Stefan Berger wrote:
> > >>>>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58
> AM:
> > >>>>
> > >>>>>From: "Michael S. Tsirkin" <mst@redhat.com>
> > >>>>>>The CUSE TPM and associated tools can be found here:
> > >>>>>>
> > >>>>>>https://github.com/stefanberger/swtpm
> > >>>>>>
> > >>>>>>(please use the latest version)
> > >>>>>>
> > >>>>>>To use the external CUSE TPM, the CUSE TPM should be started as
> > >>follows:
> > >>>>>># terminate previously started CUSE TPM /usr/bin/swtpm_ioctl -s
> > >>>>>>/dev/vtpm-test
> > >>>>>>
> > >>>>>># start CUSE TPM
> > >>>>>>/usr/bin/swtpm_cuse -n vtpm-test
> > >>>>>>
> > >>>>>>QEMU can then be started using the following parameters:
> > >>>>>>
> > >>>>>>qemu-system-x86_64 \
> > >>>>>>    [...] \
> > >>>>>>         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> > >>>dev/vtpm-test
> > >>>>\
> > >>>>>>         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > >>>>>>    [...]
> > >>>>>>
> > >>>>>>
> > >>>>>>Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > >>>>>>Cc: Eric Blake <eblake@redhat.com>
> > >>>>>Before we add a dependency on this interface, I'd rather see this
> > >>>>>interface supported in kernel and not just in CUSE.
> > >>>>For using the single hardware TPM, we have the passthrough type.
> > >>>It's usage is
> > >>>>limited.
> > >>>>
> > >>>>CUSE extends the TPM character device interface with ioctl's.
> > >>>>Behind the character device we can implement a TPM 1.2 and a TPM
> > >>>>2. Both TPM implementations require large amounts of code, which I
> > >>>>don't thinkshould go into the Linux kernel itself. So I don't know
> > >>>>who would implement this interface inside the Linux kernel.
> > >>>>
> > >>>>   Stefan
> > >>>>
> > >>>BTW I'm not talking about the code - I'm talking about the interfaces here.
> > >>>
> > >>>One way would be to add support for these interface support in the kernel.
> > >>>
> > >>>Maybe others can be replaced with QMP events so management can take
> > >>>the necessary action.
> > >>>
> > >>>As long as this is not the case, I suspect this code will have to
> > >>>stay out of tree :( We can't depend on interfaces provided solely
> > >>>by cuse devices on github.
> > >>Why is that? I know that the existing ioctls cannot be modified
> > >>anymore once QEMU accepts the code. So I don't understand it. Some
> > >>of the ioctls are only useful when emulating a hardware device,
> > >Maybe they can be replaced with QMP events?
> > >These could be emitted unconditionally, and ignored by management in
> > >passthrough case.
> > >
> > >>so there's no need for them in a
> > >>kernel interface unless one was to put the vTPM code into the Linux
> > >>kernel, but I don't see that this is happening. What is better about
> > >>a kernel interface versus one implemented by a project on github
> > >>assuming that the existing ioctls are stable? What is the real reason here?
> > >>
> > >>    Stefan
> > >>
> > >That someone went to the trouble of reviewing the interface for
> > >long-term maintainability, portability etc. That it obeys some
> > >existing standards for API use, coding style etc and will continue to.
> >
> > The same applies to the libtpms and swtpm projects as well, I suppose.
> > If someone wants to join them, let me know.
> >
> > As stated, we will keep the existing ioctl stables once integrated but
> > will adapt where necessary before that.
> > >
> > >In other words, kernel is already a dependency for QEMU.
> >
> > I don't see vTPM going into the kernel, at least I don't know of
> > anyone trying to do that.
> >
> >    Stefan
> >
> 
> Well that was just one idea, it's up to you guys.
> But while modular multi-process QEMU for security might happen in future, I
> don't see us doing this by moving large parts of QEMU into cuse devices, and
> talking to these through ioctls.


IIUC, the root issue is that the CUSE TPM is based on a software-defined TPM instead of a hardware TPM.
This may bring in more security/stability issues.
As far as I know, some trusted-cloud products must be based on upstream code, and TPM passthrough only works for a limited set of VMs.
As I mentioned, I think the CUSE TPM is a good solution for trusted clouds.

Hagen, could you share more use cases for the CUSE TPM?
MST, is it possible to get the CUSE TPM upstream, or are there more ARs for Stefan Berger?


Quan


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-21  5:41                 ` Xu, Quan
@ 2016-01-21  9:19                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2016-01-21  9:19 UTC (permalink / raw)
  To: Xu, Quan
  Cc: Stefan Berger, Stefan Berger, qemu-devel, jb613w,
	silviu.vlasceanu, hagen.lauer

On Thu, Jan 21, 2016 at 05:41:32AM +0000, Xu, Quan wrote:
> > On January 21, 2016 at 1:08pm, <mst@redhat.com> wrote:
> > On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> > > On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> > > >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> > > >>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:58:02 AM:
> > > >>
> > > >>>From: "Michael S. Tsirkin" <mst@redhat.com> On Wed, Jan 20, 2016 at
> > > >>>10:36:41AM -0500, Stefan Berger wrote:
> > > >>>>"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/20/2016 10:20:58
> > AM:
> > > >>>>
> > > >>>>>From: "Michael S. Tsirkin" <mst@redhat.com>
> > > >>>>>>The CUSE TPM and associated tools can be found here:
> > > >>>>>>
> > > >>>>>>https://github.com/stefanberger/swtpm
> > > >>>>>>
> > > >>>>>>(please use the latest version)
> > > >>>>>>
> > > >>>>>>To use the external CUSE TPM, the CUSE TPM should be started as
> > > >>follows:
> > > >>>>>># terminate previously started CUSE TPM /usr/bin/swtpm_ioctl -s
> > > >>>>>>/dev/vtpm-test
> > > >>>>>>
> > > >>>>>># start CUSE TPM
> > > >>>>>>/usr/bin/swtpm_cuse -n vtpm-test
> > > >>>>>>
> > > >>>>>>QEMU can then be started using the following parameters:
> > > >>>>>>
> > > >>>>>>qemu-system-x86_64 \
> > > >>>>>>    [...] \
> > > >>>>>>         -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> > > >>>dev/vtpm-test
> > > >>>>\
> > > >>>>>>         -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >>>>>>    [...]
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > > >>>>>>Cc: Eric Blake <eblake@redhat.com>
> > > >>>>>Before we add a dependency on this interface, I'd rather see this
> > > >>>>>interface supported in kernel and not just in CUSE.
> > > >>>>For using the single hardware TPM, we have the passthrough type.
> > > >>>It's usage is
> > > >>>>limited.
> > > >>>>
> > > >>>>CUSE extends the TPM character device interface with ioctl's.
> > > >>>>Behind the character device we can implement a TPM 1.2 and a TPM
> > > >>>>2. Both TPM implementations require large amounts of code, which I
> > > >>>>don't thinkshould go into the Linux kernel itself. So I don't know
> > > >>>>who would implement this interface inside the Linux kernel.
> > > >>>>
> > > >>>>   Stefan
> > > >>>>
> > > >>>BTW I'm not talking about the code - I'm talking about the interfaces here.
> > > >>>
> > > >>>One way would be to add support for these interface support in the kernel.
> > > >>>
> > > >>>Maybe others can be replaced with QMP events so management can take
> > > >>>the necessary action.
> > > >>>
> > > >>>As long as this is not the case, I suspect this code will have to
> > > >>>stay out of tree :( We can't depend on interfaces provided solely
> > > >>>by cuse devices on github.
> > > >>Why is that? I know that the existing ioctls cannot be modified
> > > >>anymore once QEMU accepts the code. So I don't understand it. Some
> > > >>of the ioctls are only useful when emulating a hardware device,
> > > >Maybe they can be replaced with QMP events?
> > > >These could be emitted unconditionally, and ignored by management in
> > > >passthrough case.
> > > >
> > > >>so there's no need for them in a
> > > >>kernel interface unless one was to put the vTPM code into the Linux
> > > >>kernel, but I don't see that this is happening. What is better about
> > > >>a kernel interface versus one implemented by a project on github
> > > >>assuming that the existing ioctls are stable? What is the real reason here?
> > > >>
> > > >>    Stefan
> > > >>
> > > >That someone went to the trouble of reviewing the interface for
> > > >long-term maintainability, portability etc. That it obeys some
> > > >existing standards for API use, coding style etc and will continue to.
> > >
> > > The same applies to the libtpms and swtpm projects as well, I suppose.
> > > If someone wants to join them, let me know.
> > >
> > > As stated, we will keep the existing ioctl stables once integrated but
> > > will adapt where necessary before that.
> > > >
> > > >In other words, kernel is already a dependency for QEMU.
> > >
> > > I don't see vTPM going into the kernel, at least I don't know of
> > > anyone trying to do that.
> > >
> > >    Stefan
> > >
> > 
> > Well that was just one idea, it's up to you guys.
> > But while modular multi-process QEMU for security might happen in future, I
> > don't see us doing this by moving large parts of QEMU into cuse devices, and
> > talking to these through ioctls.
> 
> 
> IIUC, the major root issue is that CUSE TPM is based on soft defined TPM, instead of hardware TPM.

No, the major issue is in how it's put together.

> This may bring in more security/stability issues.. 
> As I know, some trusted cloud products must base on upstream code. TPM passthru is just for limited VM.
> As I mentioned, I think CUSE TPM is a good solution for trusted cloud.
>
> Hagen, could you share more user cases for CUSE TPM?
> MST, is it possible for CUSE TPM upstream? or much more ARs for Stefan Berger?
> 
> 
> Quan

Nothing's wrong with a software TPM.  My advice is to stop using CUSE
for it, and to put it in an existing source tree (QEMU, kernel, something
else QEMU depends on).

-- 
MST


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-20 16:22           ` Daniel P. Berrange
@ 2016-01-21 11:36             ` Dr. David Alan Gilbert
  2016-05-31 18:58               ` BICKFORD, JEFFREY E
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-01-21 11:36 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Stefan Berger, mst, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

* Daniel P. Berrange (berrange@redhat.com) wrote:
> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
> > >>AM:
> > >>
> > >>
> > >>>process at all - it would make sense if there was a single
> > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > >>>QEMU device, it feels like it'd be much simpler to just have
> > >>>the functionality linked in QEMU.  That avoids the problem
> > >>I tried having it linked in QEMU before. It was basically rejected.
> > >I remember an impl you did many years(?) ago now, but don't recall
> > >the results of the discussion. Can you elaborate on why it was
> > >rejected as an approach ? It just doesn't make much sense to me
> > >to have to create an external daemon, a CUSE device and comms
> > >protocol, simply to be able to read/write a plain file containing
> > >the TPM state. Its massive over engineering IMHO and adding way
> > >more complexity and thus scope for failure
> > 
> > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > implementation is in the same range. The concern was having this code right
> > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > to harm QEMU. So we now put this into an external process implemented by the
> > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > anymore, so we need a control channel, which is implemented through ioctls
> > on the CUSE device.
> 
> Ok, the security separation concern does make some sense. The use of CUSE
> still seems fairly questionable to me. CUSE makes sense if you want to
> provide a drop-in replacement for the kernel TPM device driver, which
> would avoid the need for a new QEMU backend. If you're not emulating an existing
> kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> transport between 2 userspace processes.

While I don't really like CUSE, I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can use
existing host-side TPM tools to initialise/query the soft-tpm, and those
should be independent of the soft-tpm implementation.
As for the extra interfaces you need because it's a soft-tpm to set it up,
once you've already got that ioctl interface as above, then it seems to make
sense to extend that to add the extra interfaces needed.  The only thing
you have to watch for there are that the extra interfaces don't clash
with any future kernel ioctl extensions, and that the interface defined
is generic enough for different soft-tpm implementations.

Dave


> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-20 20:16         ` Stefan Berger
@ 2016-01-21 11:40           ` Dr. David Alan Gilbert
  2016-01-21 12:31             ` Stefan Berger
                               ` (2 more replies)
  0 siblings, 3 replies; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-01-21 11:40 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

* Stefan Berger (stefanb@us.ibm.com) wrote:
> Stefan Berger/Watson/IBM wrote on 01/20/2016 02:51:58 PM:
> 
> > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:42:09 
> AM:
> > 
> > > 
> > > On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
> 09:58:39 
> > > > AM:
> > > > 
> > > > 
> > > > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a 
> 
> > > > > QEMU-external TPM
> > > > > 
> > > > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > > > The following series of patches extends TPM support with an
> > > > > > external TPM that offers a Linux CUSE (character device in 
> userspace)
> > > > > > interface. This TPM lets each VM access its own private vTPM.
> > > > > 
> > > > > What is the backing store for this vTPM ? Are the vTPMs all
> > > > > multiplexed onto the host's physical TPM or is there something
> > > > > else going on ?
> > > > 
> > > > The vTPM writes its state into a plain file. In case the user 
> started the 
> > > > vTPM, the user gets to choose the directory. In case of libvirt, 
> libvirt 
> > > > sets up the directory and starts the vTPM with the directory as a 
> > > > parameter. The expectation for VMs (also containers) is that each VM 
> can 
> > > > use the full set of TPM commands with the vTPM and due to how the 
> TPM 
> > > > works, it cannot use the hardware TPM for that. SeaBIOS has 
> beenextended 
> > > > with TPM 1.2 support and initializes the vTPM in the same way it 
> would 
> > > > initialize a hardware TPM.
> > > 
> > > So if its using a plain file, then when snapshotting VMs we have to
> > > do full copies of the file and keep them all around in sync with
> > > the disk snapshots. By not having this functionality in QEMU we don't
> > > immediately have a way to use qcow2 for the vTPM file backing store
> > > to deal with snapshot management. The vTPM needs around snapshotting
> > > feel fairly similar to the NVRAM needs, so it would be desiralbe to
> > > have a ability to do a consistent thing for both.
> > 
> > The plain file serves as the current state of the TPM. In case of 
> > migration, suspend, snapshotting, the vTPM state blobs are retrieved
> > from the vTPM using ioctls and in case of a snapshot they are 
> > written into the QCoW2. Upon resume the state blobs are set in the 
> > vTPM. I is working as it is.
> 
> There is one issue in case of resume of a snapshot. If the permanent state 
> of the TPM is modified during snapshotting, like ownership is taken of the 
> TPM, the state, including the owner password, is written into the plain 
> file. Then the VM is shut down. Once it is restarted (not a resume from a 
> snapshot), the TPM's state will be relected by what was done during the 
> run of that snapshot. So this is likely undesirable. Now the only way 
> around this seems to be that one needs to know the reason for why the 
> state blobs were pushed into the vTPM. In case of a snapshot, the writing 
> of the permanent state into a file may need to be suppressed, while on a 
> VM resume and resume from migration operation it needs to be written into 
> the TPM's state file.

I don't understand that; are you saying that the ioctls don't provide all
the information that's included in the state file?

Dave

> 
>    Stefan
> 
> > 
> >    Stefan
> > 
> > 
> > > 
> > > Regards,
> > > Daniel
> > > -- 
> > > |: http://berrange.com      -o-    
> http://www.flickr.com/photos/dberrange/:|
> > > |: http://libvirt.org              -o-             
> http://virt-manager.org:|
> > > |: http://autobuild.org       -o-         
> http://search.cpan.org/~danberr/:|
> > > |: http://entangle-photo.org       -o-       
> http://live.gnome.org/gtk-vnc:|
> > > 
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-21  5:08               ` Michael S. Tsirkin
  2016-01-21  5:41                 ` Xu, Quan
@ 2016-01-21 12:09                 ` Stefan Berger
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-21 12:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, qemu-devel, jb613w, quan.xu, silviu.vlasceanu,
	hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 432 bytes --]

"Michael S. Tsirkin" <mst@redhat.com> wrote on 01/21/2016 12:08:20 AM:


> 
> Well that was just one idea, it's up to you guys.
> But while modular multi-process QEMU for security
> might happen in future, I don't see us doing this
> by moving large parts of QEMU into cuse devices,
> and talking to these through ioctls.

I guess we'll have to rely on someone else to provide a different 
interface.

   Stefan


[-- Attachment #2: Type: text/html, Size: 618 bytes --]


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
  2016-01-21 11:40           ` Dr. David Alan Gilbert
@ 2016-01-21 12:31             ` Stefan Berger
       [not found]             ` <201601211231.u0LCVGCZ021111@d01av01.pok.ibm.com>
       [not found]             ` <OF7ED031CA.CDD3196F-ON00257F41.004305BB-85257F41.0044C71A@LocalDomain>
  2 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-21 12:31 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 5344 bytes --]

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote on 01/21/2016 
06:40:35 AM:

> 
> * Stefan Berger (stefanb@us.ibm.com) wrote:
> > Stefan Berger/Watson/IBM wrote on 01/20/2016 02:51:58 PM:
> > 
> > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
10:42:09 
> > AM:
> > > 
> > > > 
> > > > On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > > > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
> > 09:58:39 
> > > > > AM:
> > > > > 
> > > > > 
> > > > > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support 
with a 
> > 
> > > > > > QEMU-external TPM
> > > > > > 
> > > > > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > > > > The following series of patches extends TPM support with an
> > > > > > > external TPM that offers a Linux CUSE (character device in 
> > userspace)
> > > > > > > interface. This TPM lets each VM access its own private 
vTPM.
> > > > > > 
> > > > > > What is the backing store for this vTPM ? Are the vTPMs all
> > > > > > multiplexed onto the host's physical TPM or is there something
> > > > > > else going on ?
> > > > > 
> > > > > The vTPM writes its state into a plain file. In case the user 
> > started the 
> > > > > vTPM, the user gets to choose the directory. In case of libvirt, 

> > libvirt 
> > > > > sets up the directory and starts the vTPM with the directory as 
a 
> > > > > parameter. The expectation for VMs (also containers) is that 
each VM 
> > can 
> > > > > use the full set of TPM commands with the vTPM and due to how 
the 
> > TPM 
> > > > > works, it cannot use the hardware TPM for that. SeaBIOS has 
> > beenextended 
> > > > > with TPM 1.2 support and initializes the vTPM in the same way it 

> > would 
> > > > > initialize a hardware TPM.
> > > > 
> > > > So if its using a plain file, then when snapshotting VMs we have 
to
> > > > do full copies of the file and keep them all around in sync with
> > > > the disk snapshots. By not having this functionality in QEMU we 
don't
> > > > immediately have a way to use qcow2 for the vTPM file backing 
store
> > > > to deal with snapshot management. The vTPM needs around 
snapshotting
> > > > feel fairly similar to the NVRAM needs, so it would be desiralbe 
to
> > > > have a ability to do a consistent thing for both.
> > > 
> > > The plain file serves as the current state of the TPM. In case of 
> > > migration, suspend, snapshotting, the vTPM state blobs are retrieved
> > > from the vTPM using ioctls and in case of a snapshot they are 
> > > written into the QCoW2. Upon resume the state blobs are set in the 
> > > vTPM. I is working as it is.
> > 
> > There is one issue in case of resume of a snapshot. If the permanent 
state 
> > of the TPM is modified during snapshotting, like ownership is taken of 
the 
> > TPM, the state, including the owner password, is written into the 
plain 
> > file. Then the VM is shut down. Once it is restarted (not a resume 
from a 
> > snapshot), the TPM's state will be relected by what was done during 
the 
> > run of that snapshot. So this is likely undesirable. Now the only way 
> > around this seems to be that one needs to know the reason for why the 
> > state blobs were pushed into the vTPM. In case of a snapshot, the 
writing 
> > of the permanent state into a file may need to be suppressed, while on 
a 
> > VM resume and resume from migration operation it needs to be written 
into 
> > the TPM's state file.
> 
> I don't understand that; are you saying that the ioctl's dont provide 
all
> the information that's included in the state file?

No. Running a snapshot does not change the state of the VM image unless 
one takes another snapshot. The vTPM has to behave the same way, 
meaning that the state of the vTPM must not be overwritten while running 
a snapshot. However, the vTPM needs to know that it's running a snapshot 
whose state is 'volatile'.

Example: 
1) A VM is run and VM image is in state VM-A and vTPM is in state vTPM-A. 
The VM is shut down and VM is in state VM-A and vTPM is in state vTPM-A.

2) The VM runs a snapshot and the VM image is in state VM-B and the vTPM is 
in state vTPM-B. The user takes ownership of the vTPM, which puts the vTPM 
into state vTPM-B2. The VM is shut down and with that all VM image state is 
discarded. The vTPM's state also needs to be discarded.

3) The VM is run again and the VM image is in state VM-A, so the vTPM must 
be in state vTPM-A from 1). However, at the moment the vTPM would be in 
state vTPM-B2 from the last run of the snapshot, since that state was 
written into the vTPM's state file.

The way around the problem in 3) stemming from 2) is to write the vTPM 
state (which is kept in a file) into a differently named file while 
running a snapshot. However, QEMU needs to tell the vTPM that it's running 
a snapshot and that the state is to be treated as volatile. A flag that 
conveys 'you're running a snapshot' while setting the device state would 
be enough. Currently, though, the function that triggers the setting of 
device state doesn't receive such a flag. So there would have to be a 
function like 'flag = qemu_doing_snapshot()', and that flag would be 
passed to the vTPM. Maybe it already exists.

    Stefan



[-- Attachment #2: Type: text/html, Size: 6569 bytes --]


* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
       [not found]             ` <201601211231.u0LCVGCZ021111@d01av01.pok.ibm.com>
@ 2016-01-21 14:53               ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-01-21 14:53 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

* Stefan Berger (stefanb@us.ibm.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote on 01/21/2016 
> 06:40:35 AM:
> 
> > 
> > * Stefan Berger (stefanb@us.ibm.com) wrote:
> > > Stefan Berger/Watson/IBM wrote on 01/20/2016 02:51:58 PM:
> > > 
> > > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
> 10:42:09 
> > > AM:
> > > > 
> > > > > 
> > > > > On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > > > > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
> > > 09:58:39 
> > > > > > AM:
> > > > > > 
> > > > > > 
> > > > > > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support 
> with a 
> > > 
> > > > > > > QEMU-external TPM
> > > > > > > 
> > > > > > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > > > > > The following series of patches extends TPM support with an
> > > > > > > > external TPM that offers a Linux CUSE (character device in 
> > > userspace)
> > > > > > > > interface. This TPM lets each VM access its own private 
> vTPM.
> > > > > > > 
> > > > > > > What is the backing store for this vTPM ? Are the vTPMs all
> > > > > > > multiplexed onto the host's physical TPM or is there something
> > > > > > > else going on ?
> > > > > > 
> > > > > > The vTPM writes its state into a plain file. In case the user 
> > > started the 
> > > > > > vTPM, the user gets to choose the directory. In case of libvirt, 
> 
> > > libvirt 
> > > > > > sets up the directory and starts the vTPM with the directory as 
> a 
> > > > > > parameter. The expectation for VMs (also containers) is that 
> each VM 
> > > can 
> > > > > > use the full set of TPM commands with the vTPM and due to how 
> the 
> > > TPM 
> > > > > > works, it cannot use the hardware TPM for that. SeaBIOS has 
> > > beenextended 
> > > > > > with TPM 1.2 support and initializes the vTPM in the same way it 
> 
> > > would 
> > > > > > initialize a hardware TPM.
> > > > > 
> > > > > So if it's using a plain file, then when snapshotting VMs we have
> > > > > to do full copies of the file and keep them all around in sync with
> > > > > the disk snapshots. By not having this functionality in QEMU we
> > > > > don't immediately have a way to use qcow2 for the vTPM file backing
> > > > > store to deal with snapshot management. The vTPM's needs around
> > > > > snapshotting feel fairly similar to the NVRAM needs, so it would be
> > > > > desirable to have the ability to do a consistent thing for both.
> > > > 
> > > > The plain file serves as the current state of the TPM. In case of 
> > > > migration, suspend, snapshotting, the vTPM state blobs are retrieved
> > > > from the vTPM using ioctls and in case of a snapshot they are 
> > > > written into the QCoW2. Upon resume the state blobs are set in the 
> > > > vTPM. It is working as it is.
> > > 
> > > There is one issue in case of resume of a snapshot. If the permanent
> > > state of the TPM is modified during snapshotting, like ownership is
> > > taken of the TPM, the state, including the owner password, is written
> > > into the plain file. Then the VM is shut down. Once it is restarted
> > > (not a resume from a snapshot), the TPM's state will reflect what was
> > > done during the run of that snapshot. So this is likely undesirable.
> > > Now the only way around this seems to be that one needs to know the
> > > reason for why the state blobs were pushed into the vTPM. In case of a
> > > snapshot, the writing of the permanent state into a file may need to
> > > be suppressed, while on a VM resume and resume from migration
> > > operation it needs to be written into the TPM's state file.
> > 
> > I don't understand that; are you saying that the ioctl's dont provide 
> all
> > the information that's included in the state file?
> 
> No. Running a snapshot does not change the state of the VM image unless 
> one takes another snapshot. The vTPM has to behave the same way, 
> meaning that the state of the vTPM must not be overwritten while in a 
> snapshot. However, the vTPM needs to know that it's running a snapshot 
> whose state is 'volatile'.
> 
> Example: 
> 1) A VM is run and VM image is in state VM-A and vTPM is in state vTPM-A. 
> The VM is shut down and VM is in state VM-A and vTPM is in state vTPM-A.
> 
> 2) The VM runs a snapshot and VM image is in state VM-B and vTPM is in 
> state B. The user takes ownership of the vTPM, which puts the vTPM into 
> state vTPM-B2. VM is shut down and with that all VM image state is 
> discarded. Also the VTPM's state needs to be discarded.
> 
> 3) The VM is run again and the VM image is in state VM-A and the vTPM must 
> be in state vTPM-A from 1). However, at the moment the vTPM would be in 
> state vTPM-B2 from the last run of the snapshot since the state was 
> written into the vTPM's state file.
> 
> The way around the problem in 3) stemming from 2) is writing the vTPM 
> state (which is kept in a file) into a differently named file while 
> running a snapshot. However, QEMU needs to tell the vTPM that it's running 
> a snapshot and the state is to be treated as volatile. A flag that conveys 
> 'you're running a snapshot' while setting the device state would be 
> enough. Though currently the function that triggers the setting of device 
> state doesn't receive such a flag. So there would have to be a function 
> like 'flag = qemu_doing_snapshot()' whose result is passed to the vTPM. Maybe 
> it already exists.

So I understand problem 3; but I don't think the solution works.
You don't know what the lifetime and the use of snapshots is going to be
when they're taken, or indeed when they start running.  You can have snapshots
taken off snapshots; you can migrate etc etc.

There are two ways I can see of solving this; but in both cases the state
has to live with the snapshot.  That means reverting to an earlier snapshot
reloads the vTPM from the vTPM state in a snapshot; one way is to
use the ioctl to grab all the state of the vTPM and save it in the snapshot
as migration data, and then when the snapshot resumes you take all the data
and stuff it back in the vTPM with another ioctl.  That's the full TPM state
(except maybe the RNG).   Unless the state is huge this should be pretty easy.

I think there's also a separate solution used for Flash memory contents 
(mostly on EFI VMs) called pflash; I don't understand the snapshotting on
that, but it might be worth checking into - I seem to remember it's based
around a small file, so it might be closer to your case.

Dave

>     Stefan
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
       [not found]     ` <201601201532.u0KFW2q2019737@d03av03.boulder.ibm.com>
  2016-01-20 15:46       ` Daniel P. Berrange
@ 2016-01-28 13:15       ` Daniel P. Berrange
  2016-01-28 14:51         ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2016-01-28 13:15 UTC (permalink / raw)
  To: Stefan Berger
  Cc: mst, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41 
> AM:
> 
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
> > > The CUSE TPM and associated tools can be found here:
> > > 
> > > https://github.com/stefanberger/swtpm
> > > 
> > > (please use the latest version)
> > > 
> > > To use the external CUSE TPM, the CUSE TPM should be started as 
> follows:
> > > 
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > 
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > TPM device ?  This makes me wonder why we need this separate
> 
> Correct. See reason in answer to previous email.
> 
> > process at all - it would make sense if there was a single
> > swtpm_cuse shared across all QEMU's, but if there's one per
> > QEMU device, it feels like it'd be much simpler to just have
> > the functionality linked in QEMU.  That avoids the problem
> 
> I tried having it linked in QEMU before. It was basically rejected.
> 
> > of having to manage all these extra processes alongside QEMU
> > which can add a fair bit of mgmt overhead.
> 
> For libvirt, yes, there is mgmt. overhead but it's quite transparent. So 
> libvirt is involved in the creation of the directory for the vTPMs, the 
> command line creation for the external process as well as the startup of 
> the process, but otherwise it's not a big issue (anymore). I have the 
> patches that do just that for an older libvirt version, along with setting 
> up SELinux labels, cgroups etc. for each VM that wants an attached vTPM.

A question that just occurred is how this will work with live migration.
If we live migrate a VM we need the file that backs the guest's vTPM
device to either be on shared storage, or it needs to be copied. With
modern QEMU we are using drive-mirror to copy all block backends over
an NBD connection. If the file backing the vTPM is invisible to QEMU,
hidden behind the swtpm_cuse ioctl(), then there's no way for us to
leverage QEMU's block mirror to copy across the TPM state file AFAICT.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-28 13:15       ` Daniel P. Berrange
@ 2016-01-28 14:51         ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-01-28 14:51 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: mst, Stefan Berger, qemu-devel, jb613w, quan.xu,
	silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 5045 bytes --]

"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/28/2016 08:15:21 
AM:


> 
> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 
10:00:41 
> > AM:
> > 
> > > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the 
CUSE 
> > > > The CUSE TPM and associated tools can be found here:
> > > > 
> > > > https://github.com/stefanberger/swtpm
> > > > 
> > > > (please use the latest version)
> > > > 
> > > > To use the external CUSE TPM, the CUSE TPM should be started as 
> > follows:
> > > > 
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > > 
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > 
> > > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > > TPM device ?  This makes my wonder why we need this separate
> > 
> > Correct. See reason in answer to previous email.
> > 
> > > process at all - it would make sense if there was a single
> > > swtpm_cuse shared across all QEMU's, but if there's one per
> > > QEMU device, it feels like it'd be much simpler to just have
> > > the functionality linked in QEMU.  That avoids the problem
> > 
> > I tried having it linked in QEMU before. It was basically rejected.
> > 
> > > of having to manage all these extra processes alongside QEMU
> > > which can add a fair bit of mgmt overhead.
> > 
> > For libvirt, yes, there is mgmt. overhead but it's quite transparent. 
So 
> > libvirt is involved in the creation of the directory for the vTPMs, 
the 
> > command line creation for the external process as well as the startup 
of 
> > the process, but otherwise it's not a big issue (anymore). I have the 
> > patches that do just for an older libvirt version that along with 
setting 
> > up SELinux labels, cgroups etc. for each VM that wants an attached 
vTPM.
> 
> A question that just occurred is how this will work with live migration.
> If we live migrate a VM we need the file that backs the guest's vTPM
> device to either be on shared storage, or it needs to be copied. With

The vTPM implements commands over the control channel to get the vTPM's 
state blobs upon migration (suspend) and set them back into the vTPM at the 
end of migration (resume). The code is here:

http://lists.nongnu.org/archive/html/qemu-devel/2016-01/msg00088.html

This function implements the retrieval of the state.

+int tpm_util_cuse_get_state_blobs(int tpm_fd,
+                                  bool decrypted_blobs,
+                                  TPMBlobBuffers *tpm_blobs)


> modern QEMU we are using drive-mirror to copy all block backends over
> an NBD connection. If the file backing the vTPM is invisible to QEMU
> hidden behind the swtpm_cuse ioctl(), then there's no way for us to
> leverage QEMUs block mirror to copy across the TPM state file AFAICT.

The vTPM's state is treated like any other device's state and is 
serialized upon machine suspend (alongside all the other VM devices) and 
de-serialized upon machine resume (with the addition that the state is 
pushed into the external vTPM device over the control channel and there 
are control channel commands to resume the vTPM with that state).

It is correct that the vTPM writes its state into a plain text file 
otherwise. This vTPM state needs to go alongside the image of the VM for 
all TPM related applications to run seamlessly under all circumstances (I 
can go into more detail here but don't want to confuse). There's currently 
one problem related to running snapshots and snapshots being 'volatile' 
that I mentioned here (volatile = state of VM filesystem is discarded upon 
shutdown of the VM running a snapshot):

 https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg04047.html

I haven't gotten around to trying to run a snapshot and migrate it to 
another machine. Let's say one were to create a new file /root/XYZ while 
running that snapshot, and that snapshot is then shut down on the 
destination machine. Will that file /root/XYZ appear in the filesystem 
upon restart of that VM? The 'normal' behavior when not migrating is 
that if one creates a new file /root/XYZ while running a snapshot, that 
file will not appear when restarting that snapshot (of course!) or when 
starting the machine 'normally'. So VM image state is 'volatile' if a 
snapshot is run and then shut down. The state of the vTPM would have to 
be treated as equally volatile or non-volatile.

Does this explanation clarify things?

Regards,
Stefan 

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    
http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             
http://virt-manager.org :|
> |: http://autobuild.org       -o-         
http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       
http://live.gnome.org/gtk-vnc :|
> 



[-- Attachment #2: Type: text/html, Size: 7508 bytes --]

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
       [not found]             ` <OF7ED031CA.CDD3196F-ON00257F41.004305BB-85257F41.0044C71A@LocalDomain>
@ 2016-02-01 17:40               ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-02-01 17:40 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Daniel P. Berrange
  Cc: mst, qemu-devel, jb613w, quan.xu, silviu.vlasceanu, hagen.lauer

[-- Attachment #1: Type: text/plain, Size: 2838 bytes --]

Stefan Berger/Watson/IBM wrote on 01/21/2016 07:31:10 AM:

> 
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote on 01/21/2016 
> 06:40:35 AM:
> > 
> > > 
> > > There is one issue in case of resume of a snapshot. If the 
> permanent state 
> > > of the TPM is modified during snapshotting, like ownership is 
> taken of the 
> > > TPM, the state, including the owner password, is written into the 
plain 
> > > file. Then the VM is shut down. Once it is restarted (not a resume 
from a 
> > > snapshot), the TPM's state will be reflected by what was done during 
the 
> > > run of that snapshot. So this is likely undesirable. Now the only 
way 
> > > around this seems to be that one needs to know the reason for why 
the 
> > > state blobs were pushed into the vTPM. In case of a snapshot, the 
writing 
> > > of the permanent state into a file may need to be suppressed, while 
on a 
> > > VM resume and resume from migration operation it needs to be written 
into 
> > > the TPM's state file.
> > 
> > I don't understand that; are you saying that the ioctl's dont provide 
all
> > the information that's included in the state file?
> 
> No. Running a snapshot does not change the state of the VM image 
> unless one takes another snapshot. The vTPM has to behave the 
> same way, meaning that the state of the vTPM must not be overwritten
> while in a snapshot. However, the vTPM needs to know that it's 
> running a snapshot whose state is 'volatile'.
> 
> Example: 
> 1) A VM is run and VM image is in state VM-A and vTPM is in state 
> vTPM-A. The VM is shut down and VM is in state VM-A and vTPM is in 
> state vTPM-A.
> 
> 2) The VM runs a snapshot and VM image is in state VM-B and vTPM is 
> in state B. The user takes ownership of the vTPM, which puts the 
> vTPM into state vTPM-B2. VM is shut down and with that all VM image 
> state is discarded. Also the VTPM's state needs to be discarded.
> 
> 3) The VM is run again and the VM image is in state VM-A and the 
> vTPM must be in state vTPM-A from 1). However, at the moment the 
> vTPM would be in state vTPM-B2 from the last run of the snapshot 
> since the state was written into the vTPM's state file.

Following tests that I have done (again, on the virt-manager level) the 
above in 3) is not correct. Instead the following seems to be what is 
happening and with that the current vTPM implementation is correct as 
well:

3) The VM is run again and the VM image is in state VM-B (!) and the vTPM 
is also in state vTPM-B from running 2).

Following the run of a VM snapshot, the next time the VM is started, 
the VM image will be in the state in which that snapshot terminated. 
Following this, the vTPM's (permanent) state can always be written into 
the same file. 

Regards,
   Stefan



[-- Attachment #2: Type: text/html, Size: 3526 bytes --]

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-01-21 11:36             ` Dr. David Alan Gilbert
@ 2016-05-31 18:58               ` BICKFORD, JEFFREY E
  2016-05-31 19:10                 ` Dr. David Alan Gilbert
  2016-06-01  1:58                 ` Xu, Quan
  0 siblings, 2 replies; 96+ messages in thread
From: BICKFORD, JEFFREY E @ 2016-05-31 18:58 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Daniel P. Berrange
  Cc: Stefan Berger, Stefan Berger, mst, qemu-devel, quan.xu,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C, SERBAN, CRISTINA

> * Daniel P. Berrange (berrange@redhat.com) wrote:
> > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
> > > >>AM:
> > > >>
> > > >>
> > > >>>process at all - it would make sense if there was a single
> > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > >>>the functionality linked in QEMU.  That avoids the problem
> > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > >I remember an impl you did many years(?) ago now, but don't recall
> > > >the results of the discussion. Can you elaborate on why it was
> > > >rejected as an approach ? It just doesn't make much sense to me
> > > >to have to create an external daemon, a CUSE device and comms
> > > >protocol, simply to be able to read/write a plain file containing
> > > >the TPM state. It's massive over-engineering IMHO and adding way
> > > >more complexity and thus scope for failure
> > > 
> > > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > > implementation is in the same range. The concern was having this code right
> > > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > > to harm QEMU. So we now put this into an external process implemented by the
> > > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > anymore, so we need a control channel, which is implemented through ioctls
> > > on the CUSE device.
> > 
> > Ok, the security separation concern does make some sense. The use of CUSE
> > still seems fairly questionable to me. CUSE makes sense if you want to
> > provide a drop-in replacement for the kernel TPM device driver, which
> > would avoid the need for a new QEMU backend. If you're not emulating an existing
> > kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> > transport between 2 userspace processes.

> While I don't really like CUSE; I can see some of the reasoning here.
> By providing the existing TPM ioctl interface I think it means you can use
> existing host-side TPM tools to initialise/query the soft-tpm, and those
> should be independent of the soft-tpm implementation.
> As for the extra interfaces you need because it's a soft-tpm to set it up,
> once you've already got that ioctl interface as above, then it seems to make
> sense to extend that to add the extra interfaces needed.  The only thing
> you have to watch for there are that the extra interfaces don't clash
> with any future kernel ioctl extensions, and that the interface defined
> is generic enough for different soft-tpm implementations.

> Dave
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


Over the past several months, AT&T Security Research has been testing the Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing results, the vTPM software works well and as expected. Support for libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a full-fledged OpenStack environment. 
 
We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted into the QEMU community as the default-standard build so this technology can be easily adopted in various open source cloud deployments.

Regards,
Jeffrey Bickford
AT&T Security Research Center
jbickford@att.com

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-05-31 18:58               ` BICKFORD, JEFFREY E
@ 2016-05-31 19:10                 ` Dr. David Alan Gilbert
  2016-06-01 22:54                   ` BICKFORD, JEFFREY E
  2016-06-13 10:56                   ` Stefan Berger
  2016-06-01  1:58                 ` Xu, Quan
  1 sibling, 2 replies; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-05-31 19:10 UTC (permalink / raw)
  To: BICKFORD, JEFFREY E
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, mst,
	qemu-devel, quan.xu, silviu.vlasceanu, hagen.lauer, SHIH,
	CHING C, SERBAN, CRISTINA

* BICKFORD, JEFFREY E (jb613w@att.com) wrote:
> > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
> > > > >>AM:
> > > > >>
> > > > >>
> > > > >>>process at all - it would make sense if there was a single
> > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > >I remember an impl you did many years(?) ago now, but don't recall
> > > > >the results of the discussion. Can you elaborate on why it was
> > > > >rejected as an approach ? It just doesn't make much sense to me
> > > > >to have to create an external daemon, a CUSE device and comms
> > > > >protocol, simply to be able to read/write a plain file containing
> > > > >the TPM state. Its massive over engineering IMHO and adding way
> > > > >more complexity and thus scope for failure
> > > > 
> > > > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > > > implementation is in the same range. The concern was having this code right
> > > > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > > > to harm QEMU. So we now put this into an external process implemented by the
> > > > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > > anymore, so we need a control channel, which is implemented through ioctls
> > > > on the CUSE device.
> > > 
> > > Ok, the security separation concern does make some sense. The use of CUSE
> > > still seems fairly questionable to me. CUSE makes sense if you want to
> > > provide a drop-in replacement for the kernel TPM device driver, which
> > > would avoid ned for a new QEMU backend. If you're not emulating an existing
> > > kernel driver ABI though, CUSE + ioctl is feels like a really awful RPC
> > > transport between 2 userspace processes.
> 
> > While I don't really like CUSE; I can see some of the reasoning here.
> > By providing the existing TPM ioctl interface I think it means you can use
> > existing host-side TPM tools to initialise/query the soft-tpm, and those
> > should be independent of the soft-tpm implementation.
> > As for the extra interfaces you need because it's a soft-tpm to set it up,
> > once you've already got that ioctl interface as above, then it seems to make
> > sense to extend that to add the extra interfaces needed.  The only thing
> > you have to watch for there are that the extra interfaces don't clash
> > with any future kernel ioctl extensions, and that the interface defined
> > is generic enough for different soft-tpm implementations.
> 
> > Dave
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> Over the past several months, AT&T Security Research has been testing the Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing results, the vTPM software works well and as expected. Support for libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a full-fledged OpenStack environment. 
>  
> We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted into the QEMU community as the default-standard build so this technology can be easily adopted in various open source cloud deployments.

Interesting; however, I see Stefan has been contributing other kernel
patches that create a different vTPM setup without the use of CUSE;
if that's the case then I guess that's the preferable solution.

Jeffrey: Can you detail a bit more about your setup, and how
you're managing the life cycle of the vTPM data?

Dave

> 
> Regards,
> Jeffrey Bickford
> AT&T Security Research Center
> jbickford@att.com
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-05-31 18:58               ` BICKFORD, JEFFREY E
  2016-05-31 19:10                 ` Dr. David Alan Gilbert
@ 2016-06-01  1:58                 ` Xu, Quan
  2016-06-13 11:02                   ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Xu, Quan @ 2016-06-01  1:58 UTC (permalink / raw)
  To: BICKFORD, JEFFREY E, Stefan Berger, Stefan Berger
  Cc: mst, qemu-devel, silviu.vlasceanu, hagen.lauer, SHIH, CHING C,
	SERBAN, CRISTINA, Dr. David Alan Gilbert, Daniel P. Berrange

On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E <jb613w@att.com> wrote:
> > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016
> > > > >>10:00:41
> > > > >>AM:
> > > > >>
> > > > >>
> > > > >>>process at all - it would make sense if there was a single
> > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > >I remember an impl you did many years(?) ago now, but don't
> > > > >recall the results of the discussion. Can you elaborate on why it
> > > > >was rejected as an approach ? It just doesn't make much sense to
> > > > >me to have to create an external daemon, a CUSE device and comms
> > > > >protocol, simply to be able to read/write a plain file containing
> > > > >the TPM state. Its massive over engineering IMHO and adding way
> > > > >more complexity and thus scope for failure
> > > >
> > > > The TPM 1.2 implementation adds 10s of thousands of lines of code.
> > > > The TPM 2 implementation is in the same range. The concern was
> > > > having this code right in the QEMU address space. It's big, it can
> > > > have bugs, so we don't want it to harm QEMU. So we now put this
> > > > into an external process implemented by the swtpm project that
> > > > builds on libtpms which provides TPM 1.2 functionality (to be
> > > > extended with TPM 2). We cannot call APIs of libtpms directly
> > > > anymore, so we need a control channel, which is implemented through
> ioctls on the CUSE device.
> > >
> > > Ok, the security separation concern does make some sense. The use of
> > > CUSE still seems fairly questionable to me. CUSE makes sense if you
> > > want to provide a drop-in replacement for the kernel TPM device
> > > driver, which would avoid ned for a new QEMU backend. If you're not
> > > emulating an existing kernel driver ABI though, CUSE + ioctl is
> > > feels like a really awful RPC transport between 2 userspace processes.
> 
> > While I don't really like CUSE; I can see some of the reasoning here.
> > By providing the existing TPM ioctl interface I think it means you can
> > use existing host-side TPM tools to initialise/query the soft-tpm, and
> > those should be independent of the soft-tpm implementation.
> > As for the extra interfaces you need because it's a soft-tpm to set it
> > up, once you've already got that ioctl interface as above, then it
> > seems to make sense to extend that to add the extra interfaces needed.
> > The only thing you have to watch for there are that the extra
> > interfaces don't clash with any future kernel ioctl extensions, and
> > that the interface defined is generic enough for different soft-tpm
> implementations.
> 
> > Dave
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> Over the past several months, AT&T Security Research has been testing the
> Virtual TPM software from IBM on the Power (ppc64) platform.

What about the x86 platform?

> Based on our
> testing results, the vTPM software works well and as expected. Support for
> libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
> and was tested in a full-fledged OpenStack environment.
>

Cool..

> We believe the vTPM functionality will improve various aspects of VM security
> in our enterprise-grade cloud environment. AT&T would like to see these
> patches accepted into the QEMU community as the default-standard build so
> this technology can be easily adopted in various open source cloud
> deployments.

Stefan: could you update us on the status of this patch set? I'd really appreciate your patches.

-Quan

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-05-31 19:10                 ` Dr. David Alan Gilbert
@ 2016-06-01 22:54                   ` BICKFORD, JEFFREY E
  2016-06-13 10:56                   ` Stefan Berger
  1 sibling, 0 replies; 96+ messages in thread
From: BICKFORD, JEFFREY E @ 2016-06-01 22:54 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, mst,
	qemu-devel, quan.xu, silviu.vlasceanu, hagen.lauer, SHIH,
	CHING C, SERBAN, CRISTINA

> * BICKFORD, JEFFREY E (jb613w@att.com) wrote:
> > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > >>"Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
> > > > > >>AM:
> > > > > >>
> > > > > >>
> > > > > >>>process at all - it would make sense if there was a single
> > > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > > >I remember an impl you did many years(?) ago now, but don't recall
> > > > > >the results of the discussion. Can you elaborate on why it was
> > > > > >rejected as an approach ? It just doesn't make much sense to me
> > > > > >to have to create an external daemon, a CUSE device and comms
> > > > > >protocol, simply to be able to read/write a plain file containing
> > > > > >the TPM state. Its massive over engineering IMHO and adding way
> > > > > >more complexity and thus scope for failure
> > > > > 
> > > > > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > > > > implementation is in the same range. The concern was having this code right
> > > > > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > > > > to harm QEMU. So we now put this into an external process implemented by the
> > > > > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > > > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > > > anymore, so we need a control channel, which is implemented through ioctls
> > > > > on the CUSE device.
> > > > 
> > > > Ok, the security separation concern does make some sense. The use of CUSE
> > > > still seems fairly questionable to me. CUSE makes sense if you want to
> > > > provide a drop-in replacement for the kernel TPM device driver, which
> > > > would avoid the need for a new QEMU backend. If you're not emulating an existing
> > > > kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> > > > transport between 2 userspace processes.
> > 
> > > While I don't really like CUSE; I can see some of the reasoning here.
> > > By providing the existing TPM ioctl interface I think it means you can use
> > > existing host-side TPM tools to initialise/query the soft-tpm, and those
> > > should be independent of the soft-tpm implementation.
> > > As for the extra interfaces you need because it's a soft-tpm to set it up,
> > > once you've already got that ioctl interface as above, then it seems to make
> > > sense to extend that to add the extra interfaces needed.  The only thing
> > > you have to watch for there are that the extra interfaces don't clash
> > > with any future kernel ioctl extensions, and that the interface defined
> > > is generic enough for different soft-tpm implementations.
> > 
> > > Dave
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> > 
> > Over the past several months, AT&T Security Research has been testing the Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing results, the vTPM software works well and as expected. Support for libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a full-fledged OpenStack environment. 
> >  
> > We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted into the QEMU community as the default-standard build so this technology can be easily adopted in various open source cloud deployments.
> 
> Interesting; however, I see Stefan has been contributing other kernel
> patches that create a different vTPM setup without the use of CUSE;
> if that's the case then I guess that's the preferable solution.
> 
> Jeffrey: Can you detail a bit more about your setup, and how
> you're managing the life cycle of the vTPM data?
> 
> Dave
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Sure. We are using the various patches Stefan submitted at the beginning of the year on an IBM Power (ppc64) machine. The machine is running the PowerKVM operating system (Linux with KVM for Power architecture) with Stefan's vTPM patches installed. This machine is also running as a single OpenStack compute node for us to test the vTPM functionality through an OpenStack deployment. Our environment is currently running OpenStack Kilo. 
 
Our main goal has been to test the overall functionality of the vTPM within a VM and compare this functionality to a physical TPM 1.2 running on a separate physical machine. Our main use case has been using the vTPM for booting a VM in a trusted manner and verifying VM runtime integrity through Linux IMA. Based on our testing, we have found that the vTPM software supplied in Stefan's patches works in the same way as a physical TPM at the virtual machine layer. We have tested running an attestation server within a guest network to attest the boot-time and run-time integrity of a set of VMs using the vTPM. 
 
Regarding the life cycle, we have tested the creation and destruction of a VM. In these cases, vTPM state is created and destroyed successfully. We have also tested creating VMs from an image and from a snapshot, in which case a new vTPM device is created for the new instance. vTPMs created based on an image or snapshot are unique and contain their own distinct public/private endorsement key pairs. Other VM functions such as pause and resume also work as normal. Log files show that the CUSE TPM is stopped and started successfully based on these commands. 
 
We have not tested VM migration yet. 
 
Please let me know if you have any other questions about our environment or testing. 
 
Thanks,
Jeff

Jeffrey Bickford
AT&T Security Research Center
jbickford@att.com

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-05-31 19:10                 ` Dr. David Alan Gilbert
  2016-06-01 22:54                   ` BICKFORD, JEFFREY E
@ 2016-06-13 10:56                   ` Stefan Berger
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-06-13 10:56 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, BICKFORD, JEFFREY E
  Cc: Daniel P. Berrange, Stefan Berger, mst, qemu-devel, quan.xu,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C, SERBAN, CRISTINA

On 05/31/2016 03:10 PM, Dr. David Alan Gilbert wrote:
> * BICKFORD, JEFFREY E (jb613w@att.com) wrote:
>>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>>> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
>>>>> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
>>>>>> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
>>>>>>> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016 10:00:41
>>>>>>> AM:
>>>>>>>
>>>>>>>
>>>>>>>> process at all - it would make sense if there was a single
>>>>>>>> swtpm_cuse shared across all QEMU's, but if there's one per
>>>>>>>> QEMU device, it feels like it'd be much simpler to just have
>>>>>>>> the functionality linked in QEMU.  That avoids the problem
>>>>>>> I tried having it linked in QEMU before. It was basically rejected.
>>>>>> I remember an impl you did many years(?) ago now, but don't recall
>>>>>> the results of the discussion. Can you elaborate on why it was
>>>>>> rejected as an approach ? It just doesn't make much sense to me
>>>>>> to have to create an external daemon, a CUSE device and comms
>>>>>> protocol, simply to be able to read/write a plain file containing
>>>>>> the TPM state. Its massive over engineering IMHO and adding way
>>>>>> more complexity and thus scope for failure
>>>>> The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
>>>>> implementation is in the same range. The concern was having this code right
>>>>> in the QEMU address space. It's big, it can have bugs, so we don't want it
>>>>> to harm QEMU. So we now put this into an external process implemented by the
>>>>> swtpm project that builds on libtpms which provides TPM 1.2 functionality
>>>>> (to be extended with TPM 2). We cannot call APIs of libtpms directly
>>>>> anymore, so we need a control channel, which is implemented through ioctls
>>>>> on the CUSE device.
>>>> Ok, the security separation concern does make some sense. The use of CUSE
>>>> still seems fairly questionable to me. CUSE makes sense if you want to
>>>> provide a drop-in replacement for the kernel TPM device driver, which
>>>> would avoid the need for a new QEMU backend. If you're not emulating an existing
>>>> kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
>>>> transport between 2 userspace processes.
>>> While I don't really like CUSE; I can see some of the reasoning here.
>>> By providing the existing TPM ioctl interface I think it means you can use
>>> existing host-side TPM tools to initialise/query the soft-tpm, and those
>>> should be independent of the soft-tpm implementation.
>>> As for the extra interfaces you need because it's a soft-tpm to set it up,
>>> once you've already got that ioctl interface as above, then it seems to make
>>> sense to extend that to add the extra interfaces needed.  The only thing
>>> you have to watch for there are that the extra interfaces don't clash
>>> with any future kernel ioctl extensions, and that the interface defined
>>> is generic enough for different soft-tpm implementations.
>>> Dave
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>
>> Over the past several months, AT&T Security Research has been testing the Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing results, the vTPM software works well and as expected. Support for libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a full-fledged OpenStack environment.
>>   
>> We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted into the QEMU community as the default-standard build so this technology can be easily adopted in various open source cloud deployments.
> Interesting; however, I see Stefan has been contributing other kernel
> patches that create a different vTPM setup without the use of CUSE;
> if that's the case then I guess that's the preferable solution.

That solution is for Linux containers. It doesn't have the control 
channel we need for virtual machines, where for example a reset is sent 
to the vTPM by QEMU when rebooting the VM. Instead we assume that the 
container management stack would reset the vTPM upon container restart.


> Jeffrey: Can you detail a bit more about your setup, and how
> you're managing the life cycle of the vTPM data?
>
> Dave
>
>> Regards,
>> Jeffrey Bickford
>> AT&T Security Research Center
>> jbickford@att.com
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-01  1:58                 ` Xu, Quan
@ 2016-06-13 11:02                   ` Stefan Berger
  2016-06-15 19:30                     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-13 11:02 UTC (permalink / raw)
  To: Xu, Quan, BICKFORD, JEFFREY E, Stefan Berger
  Cc: mst, qemu-devel, silviu.vlasceanu, hagen.lauer, SHIH, CHING C,
	SERBAN, CRISTINA, Dr. David Alan Gilbert, Daniel P. Berrange

On 05/31/2016 09:58 PM, Xu, Quan wrote:
> On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E <jb613w@att.com> wrote:
>>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>>> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
>>>>> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
>>>>>> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
>>>>>>> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016
>>>>>>> 10:00:41
>>>>>>> AM:
>>>>>>>
>>>>>>>
>>>>>>>> process at all - it would make sense if there was a single
>>>>>>>> swtpm_cuse shared across all QEMU's, but if there's one per
>>>>>>>> QEMU device, it feels like it'd be much simpler to just have
>>>>>>>> the functionality linked in QEMU.  That avoids the problem
>>>>>>> I tried having it linked in QEMU before. It was basically rejected.
>>>>>> I remember an impl you did many years(?) ago now, but don't
>>>>>> recall the results of the discussion. Can you elaborate on why it
>>>>>> was rejected as an approach ? It just doesn't make much sense to
>>>>>> me to have to create an external daemon, a CUSE device and comms
>>>>>> protocol, simply to be able to read/write a plain file containing
>>>>>> the TPM state. Its massive over engineering IMHO and adding way
>>>>>> more complexity and thus scope for failure
>>>>> The TPM 1.2 implementation adds 10s of thousands of lines of code.
>>>>> The TPM 2 implementation is in the same range. The concern was
>>>>> having this code right in the QEMU address space. It's big, it can
>>>>> have bugs, so we don't want it to harm QEMU. So we now put this
>>>>> into an external process implemented by the swtpm project that
>>>>> builds on libtpms which provides TPM 1.2 functionality (to be
>>>>> extended with TPM 2). We cannot call APIs of libtpms directly
>>>>> anymore, so we need a control channel, which is implemented through
>> ioctls on the CUSE device.
>>>> Ok, the security separation concern does make some sense. The use of
>>>> CUSE still seems fairly questionable to me. CUSE makes sense if you
>>>> want to provide a drop-in replacement for the kernel TPM device
>>>> driver, which would avoid the need for a new QEMU backend. If you're not
>>>> emulating an existing kernel driver ABI though, CUSE + ioctl
>>>> feels like a really awful RPC transport between 2 userspace processes.
>>> While I don't really like CUSE; I can see some of the reasoning here.
>>> By providing the existing TPM ioctl interface I think it means you can
>>> use existing host-side TPM tools to initialise/query the soft-tpm, and
>>> those should be independent of the soft-tpm implementation.
>>> As for the extra interfaces you need because it's a soft-tpm to set it
>>> up, once you've already got that ioctl interface as above, then it
>>> seems to make sense to extend that to add the extra interfaces needed.
>>> The only thing you have to watch for there are that the extra
>>> interfaces don't clash with any future kernel ioctl extensions, and
>>> that the interface defined is generic enough for different soft-tpm
>> implementations.
>>
>>> Dave
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>
>> Over the past several months, AT&T Security Research has been testing the
>> Virtual TPM software from IBM on the Power (ppc64) platform.
> What about x86 platform?
>
>> Based on our
>> testing results, the vTPM software works well and as expected. Support for
>> libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
>> and was tested in a full-fledged OpenStack environment.
>>
> Cool..
>
>> We believe the vTPM functionality will improve various aspects of VM security
>> in our enterprise-grade cloud environment. AT&T would like to see these
>> patches accepted into the QEMU community as the default-standard build so
>> this technology can be easily adopted in various open source cloud
>> deployments.
> Stefan: could you update status about this patch set? I'd really appreciate your patch..

What do you mean by 'update status'? It's pretty much still the same as 
before.

https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm


The swtpm implementation that I connect QEMU to now offers more 
interface choices: the existing CUSE + ioctl for data and control 
channel, or any combination of TCP and Unix sockets for the two 
channels. The libvirt-based management stack I built on top of QEMU 
with vTPM assumes QEMU uses the CUSE interface.

     Stefan


>
> -Quan
>

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-13 11:02                   ` Stefan Berger
@ 2016-06-15 19:30                     ` Dr. David Alan Gilbert
  2016-06-15 20:54                       ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-06-15 19:30 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Xu, Quan, BICKFORD, JEFFREY E, Stefan Berger, mst, qemu-devel,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C, SERBAN, CRISTINA,
	Daniel P. Berrange

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 05/31/2016 09:58 PM, Xu, Quan wrote:
> > On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E <jb613w@att.com> wrote:
> > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > > > On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > > > > "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016
> > > > > > > > 10:00:41
> > > > > > > > AM:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > process at all - it would make sense if there was a single
> > > > > > > > > swtpm_cuse shared across all QEMU's, but if there's one per
> > > > > > > > > QEMU device, it feels like it'd be much simpler to just have
> > > > > > > > > the functionality linked in QEMU.  That avoids the problem
> > > > > > > > I tried having it linked in QEMU before. It was basically rejected.
> > > > > > > I remember an impl you did many years(?) ago now, but don't
> > > > > > > recall the results of the discussion. Can you elaborate on why it
> > > > > > > was rejected as an approach ? It just doesn't make much sense to
> > > > > > > me to have to create an external daemon, a CUSE device and comms
> > > > > > > protocol, simply to be able to read/write a plain file containing
> > > > > > > the TPM state. Its massive over engineering IMHO and adding way
> > > > > > > more complexity and thus scope for failure
> > > > > > The TPM 1.2 implementation adds 10s of thousands of lines of code.
> > > > > > The TPM 2 implementation is in the same range. The concern was
> > > > > > having this code right in the QEMU address space. It's big, it can
> > > > > > have bugs, so we don't want it to harm QEMU. So we now put this
> > > > > > into an external process implemented by the swtpm project that
> > > > > > builds on libtpms which provides TPM 1.2 functionality (to be
> > > > > > extended with TPM 2). We cannot call APIs of libtpms directly
> > > > > > anymore, so we need a control channel, which is implemented through
> > > ioctls on the CUSE device.
> > > > > Ok, the security separation concern does make some sense. The use of
> > > > > CUSE still seems fairly questionable to me. CUSE makes sense if you
> > > > > want to provide a drop-in replacement for the kernel TPM device
> > > > > driver, which would avoid the need for a new QEMU backend. If you're not
> > > > > emulating an existing kernel driver ABI though, CUSE + ioctl
> > > > > feels like a really awful RPC transport between 2 userspace processes.
> > > > While I don't really like CUSE; I can see some of the reasoning here.
> > > > By providing the existing TPM ioctl interface I think it means you can
> > > > use existing host-side TPM tools to initialise/query the soft-tpm, and
> > > > those should be independent of the soft-tpm implementation.
> > > > As for the extra interfaces you need because it's a soft-tpm to set it
> > > > up, once you've already got that ioctl interface as above, then it
> > > > seems to make sense to extend that to add the extra interfaces needed.
> > > > The only thing you have to watch for there are that the extra
> > > > interfaces don't clash with any future kernel ioctl extensions, and
> > > > that the interface defined is generic enough for different soft-tpm
> > > implementations.
> > > 
> > > > Dave
> > > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > > 
> > > Over the past several months, AT&T Security Research has been testing the
> > > Virtual TPM software from IBM on the Power (ppc64) platform.
> > What about x86 platform?
> > 
> > > Based on our
> > > testing results, the vTPM software works well and as expected. Support for
> > > libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
> > > and was tested in a full-fledged OpenStack environment.
> > > 
> > Cool..
> > 
> > > We believe the vTPM functionality will improve various aspects of VM security
> > > in our enterprise-grade cloud environment. AT&T would like to see these
> > > patches accepted into the QEMU community as the default-standard build so
> > > this technology can be easily adopted in various open source cloud
> > > deployments.
> > Stefan: could you update status about this patch set? I'd really appreciate your patch..
> 
> What do you mean by 'update status'? It's pretty much still the same as
> before.
> 
> https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm
> 
> 
> The implementation of the swtpm that I use for connecting QEMU to now has
> more interface choices. There's the existing CUSE + ioctl for data and
> control channel or any combination of TCP and Unix sockets for data and
> control channel. The libvirt based management stack I built on top of QEMU
> with vTPM assumes QEMU using the CUSE interface.

So what was the multi-instance vTPM proxy driver patch set about?

Dave

> 
>     Stefan
> 
> 
> > 
> > -Quan
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-15 19:30                     ` Dr. David Alan Gilbert
@ 2016-06-15 20:54                       ` Stefan Berger
  2016-06-16  8:05                         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-15 20:54 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, SERBAN, CRISTINA, BICKFORD,
	JEFFREY E, Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH,
	CHING C

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>> On 05/31/2016 09:58 PM, Xu, Quan wrote:
>>> On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E <jb613w@att.com> wrote:
>>>>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>>>>> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
>>>>>>> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
>>>>>>>> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
>>>>>>>>> "Daniel P. Berrange" <berrange@redhat.com> wrote on 01/20/2016
>>>>>>>>> 10:00:41
>>>>>>>>> AM:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> process at all - it would make sense if there was a single
>>>>>>>>>> swtpm_cuse shared across all QEMU's, but if there's one per
>>>>>>>>>> QEMU device, it feels like it'd be much simpler to just have
>>>>>>>>>> the functionality linked in QEMU.  That avoids the problem
>>>>>>>>> I tried having it linked in QEMU before. It was basically rejected.
>>>>>>>> I remember an impl you did many years(?) ago now, but don't
>>>>>>>> recall the results of the discussion. Can you elaborate on why it
>>>>>>>> was rejected as an approach ? It just doesn't make much sense to
>>>>>>>> me to have to create an external daemon, a CUSE device and comms
>>>>>>>> protocol, simply to be able to read/write a plain file containing
>>>>>>>> the TPM state. Its massive over engineering IMHO and adding way
>>>>>>>> more complexity and thus scope for failure
>>>>>>> The TPM 1.2 implementation adds 10s of thousands of lines of code.
>>>>>>> The TPM 2 implementation is in the same range. The concern was
>>>>>>> having this code right in the QEMU address space. It's big, it can
>>>>>>> have bugs, so we don't want it to harm QEMU. So we now put this
>>>>>>> into an external process implemented by the swtpm project that
>>>>>>> builds on libtpms which provides TPM 1.2 functionality (to be
>>>>>>> extended with TPM 2). We cannot call APIs of libtpms directly
>>>>>>> anymore, so we need a control channel, which is implemented through
>>>> ioctls on the CUSE device.
>>>>>> Ok, the security separation concern does make some sense. The use of
>>>>>> CUSE still seems fairly questionable to me. CUSE makes sense if you
>>>>>> want to provide a drop-in replacement for the kernel TPM device
>>>>>> driver, which would avoid the need for a new QEMU backend. If you're not
>>>>>> emulating an existing kernel driver ABI though, CUSE + ioctl
>>>>>> feels like a really awful RPC transport between 2 userspace processes.
>>>>> While I don't really like CUSE; I can see some of the reasoning here.
>>>>> By providing the existing TPM ioctl interface I think it means you can
>>>>> use existing host-side TPM tools to initialise/query the soft-tpm, and
>>>>> those should be independent of the soft-tpm implementation.
>>>>> As for the extra interfaces you need because it's a soft-tpm to set it
>>>>> up, once you've already got that ioctl interface as above, then it
>>>>> seems to make sense to extend that to add the extra interfaces needed.
>>>>> The only thing you have to watch for there are that the extra
>>>>> interfaces don't clash with any future kernel ioctl extensions, and
>>>>> that the interface defined is generic enough for different soft-tpm
>>>> implementations.
>>>>
>>>>> Dave
>>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>> Over the past several months, AT&T Security Research has been testing the
>>>> Virtual TPM software from IBM on the Power (ppc64) platform.
>>> What about x86 platform?
>>>
>>>> Based on our
>>>> testing results, the vTPM software works well and as expected. Support for
>>>> libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
>>>> and was tested in a full-fledged OpenStack environment.
>>>>
>>> Cool..
>>>
>>>> We believe the vTPM functionality will improve various aspects of VM security
>>>> in our enterprise-grade cloud environment. AT&T would like to see these
>>>> patches accepted into the QEMU community as the default-standard build so
>>>> this technology can be easily adopted in various open source cloud
>>>> deployments.
>>> Stefan: could you update status about this patch set? I'd really appreciate your patch..
>> What do you mean by 'update status'? It's pretty much still the same as
>> before.
>>
>> https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm
>>
>>
>> The implementation of the swtpm that I use for connecting QEMU to now has
>> more interface choices. There's the existing CUSE + ioctl for data and
>> control channel or any combination of TCP and Unix sockets for data and
>> control channel. The libvirt based management stack I built on top of QEMU
>> with vTPM assumes QEMU using the CUSE interface.
> So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

     Stefan

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-15 20:54                       ` Stefan Berger
@ 2016-06-16  8:05                         ` Dr. David Alan Gilbert
  2016-06-16  8:25                           ` Daniel P. Berrange
                                             ` (2 more replies)
  0 siblings, 3 replies; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-06-16  8:05 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:

<snip>

> > So what was the multi-instance vTPM proxy driver patch set about?
> 
> That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works? My brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

Dave
P.S. I've removed Jeff from the cc because I got a bounce from
his AT&T address saying 'restricted/not authorized'

> 
>     Stefan
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16  8:05                         ` Dr. David Alan Gilbert
@ 2016-06-16  8:25                           ` Daniel P. Berrange
  2016-06-16 15:20                             ` Stefan Berger
  2017-03-01 12:25                             ` Stefan Berger
  2016-06-16 13:58                           ` SERBAN, CRISTINA
  2016-06-16 15:04                           ` Stefan Berger
  2 siblings, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2016-06-16  8:25 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, Stefan Berger, mst, qemu-devel, hagen.lauer, Xu,
	Quan, silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> 
> <snip>
> 
> > > So what was the multi-instance vTPM proxy driver patch set about?
> > 
> > That's for containers.
> 
> Why have the two mechanisms? Can you explain how the multi-instance
> proxy works; my brief reading when I saw your patch series seemed
> to suggest it could be used instead of CUSE for the non-container case.

One of the key things that was/is not appealing about this CUSE approach
is that it basically invents a new ioctl() mechanism for talking to
a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
to have any changes at all - its existing driver for talking to TPM
char devices ought to just work. All that would be required is libvirt
support to configure the vTPM instances.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16  8:05                         ` Dr. David Alan Gilbert
  2016-06-16  8:25                           ` Daniel P. Berrange
@ 2016-06-16 13:58                           ` SERBAN, CRISTINA
  2016-06-16 15:04                           ` Stefan Berger
  2 siblings, 0 replies; 96+ messages in thread
From: SERBAN, CRISTINA @ 2016-06-16 13:58 UTC (permalink / raw)
  To: 'Dr. David Alan Gilbert', 'Stefan Berger'
  Cc: 'Stefan Berger', 'mst@redhat.com',
	'qemu-devel@nongnu.org', 'Xu, Quan',
	'silviu.vlasceanu@gmail.com',
	'hagen.lauer@huawei.com',
	SHIH, CHING C

Dave -
Jeff moved from AT&T to another company at the end of last week.
Ching Shih (cc'd) and I are still here, if any questions come up about this testing.
Thanks,
Cristina


Cristina Serban, PhD, CISSP
Lead Member of Technical Staff
AT&T Security Research Center 
Middletown, NJ - USA
  


-----Original Message-----
From: Dr. David Alan Gilbert [mailto:dgilbert@redhat.com] 
Sent: Thursday, June 16, 2016 4:05 AM
To: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefan Berger <stefanb@us.ibm.com>; mst@redhat.com; qemu-devel@nongnu.org; SERBAN, CRISTINA <cs1731@att.com>; Xu, Quan <quan.xu@intel.com>; silviu.vlasceanu@gmail.com; hagen.lauer@huawei.com; SHIH, CHING C <cs1815@att.com>
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


P.S. I've removed Jeff from the cc because I got a bounce from
his AT&T address saying 'restricted/not authorized'

> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16  8:05                         ` Dr. David Alan Gilbert
  2016-06-16  8:25                           ` Daniel P. Berrange
  2016-06-16 13:58                           ` SERBAN, CRISTINA
@ 2016-06-16 15:04                           ` Stefan Berger
  2016-06-16 15:22                             ` Dr. David Alan Gilbert
  2 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-16 15:04 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> <snip>
>
>>> So what was the multi-instance vTPM proxy driver patch set about?
>> That's for containers.
> Why have the two mechanisms? Can you explain how the multi-instance
> proxy works; my brief reading when I saw your patch series seemed
> to suggest it could be used instead of CUSE for the non-container case.

The multi-instance vtpm proxy driver works through an ioctl() on 
/dev/vtpmx that spawns a new front-end and backend pair. The front-end 
is a new /dev/tpm%d device that can then be moved into the container 
(mknod + device cgroup setup). The backend is an anonymous file 
descriptor that is passed to a TPM emulator, which reads TPM requests 
coming in from that /dev/tpm%d and returns responses to it. Since it is 
implemented as a kernel driver, we can hook it into the Linux Integrity 
Measurement Architecture (IMA) and have IMA use it in place of a 
hardware TPM driver. There is ongoing work on namespacing support for 
IMA to have an independent IMA instance per container so that this can 
be used.

A TPM has not only a data channel (/dev/tpm%d) but also a control 
channel, which is primarily implemented in its hardware interface and is 
typically not fully accessible to user space. The vtpm proxy driver 
_only_ supports the data channel, through which it relays TPM commands 
and responses between user space and the TPM emulator. The control 
channel is provided by the software emulator through an additional TCP 
or UnixIO socket or, in the case of CUSE, through ioctls. The control 
channel makes it possible to reset the TPM when the container/VM is 
reset, set the locality of a command, retrieve the state of the vTPM 
(for suspend), and set the state of the vTPM (for resume), among 
several other things. The commands for the control channel are defined 
here:

https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

For a container we would require that its management stack initializes 
and resets the vTPM when the container is rebooted. (These are typically 
operations that are done through pulses on the motherboard.)

In the case of QEMU we would need more access to the control channel, 
which includes initialization and reset of the vTPM, getting and setting 
its state for suspend/resume/migration, setting the locality of 
commands, etc., so that all low-level functionality is accessible to the 
emulator (QEMU). The proxy driver does not help with this, so we should 
use the swtpm implementation, which offers either the CUSE interface 
with its control channel (through ioctls) or UnixIO and TCP sockets for 
the control channel.

     Stefan

>
> Dave
> P.S. I've removed Jeff from the cc because I got a bounce from
> his AT&T address saying 'restricted/not authorized'
>
>>      Stefan
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16  8:25                           ` Daniel P. Berrange
@ 2016-06-16 15:20                             ` Stefan Berger
  2017-03-01 12:25                             ` Stefan Berger
  1 sibling, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2016-06-16 15:20 UTC (permalink / raw)
  To: Daniel P. Berrange, Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>> <snip>
>>
>>>> So what was the multi-instance vTPM proxy driver patch set about?
>>> That's for containers.
>> Why have the two mechanisms? Can you explain how the multi-instance
>> proxy works; my brief reading when I saw your patch series seemed
>> to suggest it could be used instead of CUSE for the non-container case.
> One of the key things that was/is not appealing about this CUSE approach
> is that it basically invents a new ioctl() mechanism for talking to
> a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
> to have any changes at all - its existing driver for talking to TPM
> char devices ought to just work. All that would be required is libvirt
> support to configure the vTPM instances.

The issue here is mainly the control channel as stated in the other email.

The CUSE TPM allows users to provide the name of the device that will 
appear in /dev. Since the kernel TPM driver basically owns the 
/dev/tpm%d names, a CUSE TPM should use a different name. I don't quite 
understand why such a device should not be able to offer an ioctl 
interface for its control channel. In the case of the CUSE TPM there is 
no hardware device underneath but a software emulation of one, which 
needs an additional control channel to reach certain functionality that 
is typically hidden by the device driver. It just 
happens to have a compatible data channel that works just like /dev/tpm%d.

The ioctl interface is, in my opinion, only a problem insofar as the 
control channel commands can be larger than what the Linux CUSE driver 
supports, so the implementation had to work around this restriction. 
As stated in the other email, there's the possibility of using the TPM 
emulator with socket interfaces where the data and control channels can 
now use any combination of UnixIO and TCP sockets, so two UnixIO sockets 
(for data and control) are possible.

     Stefan


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 15:04                           ` Stefan Berger
@ 2016-06-16 15:22                             ` Dr. David Alan Gilbert
  2016-06-16 15:35                               ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-06-16 15:22 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > <snip>
> > 
> > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > That's for containers.
> > Why have the two mechanisms? Can you explain how the multi-instance
> > proxy works; my brief reading when I saw your patch series seemed
> > to suggest it could be used instead of CUSE for the non-container case.
> 
> The multi-instance vtpm proxy driver basically works through usage of an
> ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
> The front-end is a new /dev/tpm%d device that then can be moved into the
> container (mknod + device cgroup setup). The backend is an anonymous file
> descriptor that is to be passed to a TPM emulator for reading TPM requests
> coming in from that /dev/tpm%d and returning responses to. Since it is
> implemented as a kernel driver, we can hook it into the Linux Integrity
> Measurement Architecture (IMA) and have it be used by IMA in place of a
> hardware TPM driver. There's ongoing work in the area of namespacing support
> for IMA to have an independent IMA instance per container so that this can
> be used.
> 
> A TPM does not only have a data channel (/dev/tpm%d) but also a control
> channel, which is primarily implemented in its hardware interface and is
> typically not fully accessible to user space. The vtpm proxy driver _only_
> supports the data channel through which it basically relays TPM commands and
> responses from user space to the TPM emulator. The control channel is
> provided by the software emulator through an additional TCP or UnixIO socket
> or in case of CUSE through ioctls. The control channel allows to reset the
> TPM when the container/VM is being reset or set the locality of a command or
> retrieve the state of the vTPM (for suspend) and set the state of the vTPM
> (for resume) among several other things. The commands for the control
> channel are defined here:
> 
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> 
> For a container we would require that its management stack initializes and
> resets the vTPM when the container is rebooted. (These are typically
> operations that are done through pulses on the motherboard.)
> 
> In case of QEMU we would need to have more access to the control channel,
> which includes initialization and reset of the vTPM, getting and setting its
> state for suspend/resume/migration, setting the locality of commands, etc.,
> so that all low-level functionality is accessible to the emulator (QEMU).
> The proxy driver does not help with this but we should use the swtpm
> implementation that either has that CUSE interface with control channel
> (through ioctls) or provides UnixIO and TCP sockets for the control channel.

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?
Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?

Dave

>     Stefan
> 
> > 
> > Dave
> > P.S. I've removed Jeff from the cc because I got a bounce from
> > his AT&T address saying 'restricted/not authorized'
> > 
> > >      Stefan
> > > 
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 15:22                             ` Dr. David Alan Gilbert
@ 2016-06-16 15:35                               ` Stefan Berger
  2016-06-16 17:54                                 ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-16 15:35 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>> <snip>
>>>
>>>>> So what was the multi-instance vTPM proxy driver patch set about?
>>>> That's for containers.
>>> Why have the two mechanisms? Can you explain how the multi-instance
>>> proxy works; my brief reading when I saw your patch series seemed
>>> to suggest it could be used instead of CUSE for the non-container case.
>> The multi-instance vtpm proxy driver basically works through usage of an
>> ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
>> The front-end is a new /dev/tpm%d device that then can be moved into the
>> container (mknod + device cgroup setup). The backend is an anonymous file
>> descriptor that is to be passed to a TPM emulator for reading TPM requests
>> coming in from that /dev/tpm%d and returning responses to. Since it is
>> implemented as a kernel driver, we can hook it into the Linux Integrity
>> Measurement Architecture (IMA) and have it be used by IMA in place of a
>> hardware TPM driver. There's ongoing work in the area of namespacing support
>> for IMA to have an independent IMA instance per container so that this can
>> be used.
>>
>> A TPM does not only have a data channel (/dev/tpm%d) but also a control
>> channel, which is primarily implemented in its hardware interface and is
>> typically not fully accessible to user space. The vtpm proxy driver _only_
>> supports the data channel through which it basically relays TPM commands and
>> responses from user space to the TPM emulator. The control channel is
>> provided by the software emulator through an additional TCP or UnixIO socket
>> or in case of CUSE through ioctls. The control channel allows to reset the
>> TPM when the container/VM is being reset or set the locality of a command or
>> retrieve the state of the vTPM (for suspend) and set the state of the vTPM
>> (for resume) among several other things. The commands for the control
>> channel are defined here:
>>
>> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>>
>> For a container we would require that its management stack initializes and
>> resets the vTPM when the container is rebooted. (These are typically
>> operations that are done through pulses on the motherboard.)
>>
>> In case of QEMU we would need to have more access to the control channel,
>> which includes initialization and reset of the vTPM, getting and setting its
>> state for suspend/resume/migration, setting the locality of commands, etc.,
>> so that all low-level functionality is accessible to the emulator (QEMU).
>> The proxy driver does not help with this but we should use the swtpm
>> implementation that either has that CUSE interface with control channel
>> (through ioctls) or provides UnixIO and TCP sockets for the control channel.
> OK, that makes sense; does the control interface need to be handled by QEMU
> or by libvirt or both?

The control interface needs to be handled primarily by QEMU.

In the case of the libvirt implementation I am running an external 
program, swtpm_ioctl, that uses the control channel to gracefully shut 
down any running TPM emulator whose device name is the same as that of 
the TPM emulator that is to be created. So it cleans up before starting 
a new TPM emulator, just to make sure that the new TPM instance can be 
started. Detail...

> Either way, I think you're saying that with your kernel interface + a UnixIO
> socket you can avoid the CUSE stuff?

So in case of QEMU you don't need that new kernel device driver -- it's 
primarily meant for containers. For QEMU one would start the TPM 
emulator and make sure that QEMU has access to the data and control 
channels, which are now offered as

- CUSE interface with ioctl
- TCP + TCP
- UnixIO + TCP
- TCP + UnixIO
- UnixIO + UnixIO
- file descriptors passed from invoker

   Stefan


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 15:35                               ` Stefan Berger
@ 2016-06-16 17:54                                 ` Dr. David Alan Gilbert
  2016-06-16 18:43                                   ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-06-16 17:54 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > <snip>
> > > > 
> > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container case.
> > > The multi-instance vtpm proxy driver basically works through usage of an
> > > ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
> > > The front-end is a new /dev/tpm%d device that then can be moved into the
> > > container (mknod + device cgroup setup). The backend is an anonymous file
> > > descriptor that is to be passed to a TPM emulator for reading TPM requests
> > > coming in from that /dev/tpm%d and returning responses to. Since it is
> > > implemented as a kernel driver, we can hook it into the Linux Integrity
> > > Measurement Architecture (IMA) and have it be used by IMA in place of a
> > > hardware TPM driver. There's ongoing work in the area of namespacing support
> > > for IMA to have an independent IMA instance per container so that this can
> > > be used.
> > > 
> > > A TPM does not only have a data channel (/dev/tpm%d) but also a control
> > > channel, which is primarily implemented in its hardware interface and is
> > > typically not fully accessible to user space. The vtpm proxy driver _only_
> > > supports the data channel through which it basically relays TPM commands and
> > > responses from user space to the TPM emulator. The control channel is
> > > provided by the software emulator through an additional TCP or UnixIO socket
> > > or in case of CUSE through ioctls. The control channel allows to reset the
> > > TPM when the container/VM is being reset or set the locality of a command or
> > > retrieve the state of the vTPM (for suspend) and set the state of the vTPM
> > > (for resume) among several other things. The commands for the control
> > > channel are defined here:
> > > 
> > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > 
> > > For a container we would require that its management stack initializes and
> > > resets the vTPM when the container is rebooted. (These are typically
> > > operations that are done through pulses on the motherboard.)
> > > 
> > > In case of QEMU we would need to have more access to the control channel,
> > > which includes initialization and reset of the vTPM, getting and setting its
> > > state for suspend/resume/migration, setting the locality of commands, etc.,
> > > so that all low-level functionality is accessible to the emulator (QEMU).
> > > The proxy driver does not help with this but we should use the swtpm
> > > implementation that either has that CUSE interface with control channel
> > > (through ioctls) or provides UnixIO and TCP sockets for the control channel.
> > OK, that makes sense; does the control interface need to be handled by QEMU
> > or by libvirt or both?
> 
> The control interface needs to be handled primarily by QEMU.
> 
> In case of the libvirt implementation I am running an external program
> swtpm_ioctl that uses the control channel to gracefully shut down any
> existing running TPM emulator whose device name happens to have the same
> name as the device of the TPM emulator that is to be created. So it cleans
> up before starting a new TPM emulator just to make sure that that new TPM
> instance can be started. Detail...
> 
> > Either way, I think you're saying that with your kernel interface + a UnixIO
> > socket you can avoid the CUSE stuff?
> 
> So in case of QEMU you don't need that new kernel device driver -- it's
> primarily meant for containers. For QEMU one would start the TPM emulator
> and make sure that QEMU has access to the data and control channels, which
> are now offered as
> 
> - CUSE interface with ioctl
> - TCP + TCP
> - UnixIO + TCP
> - TCP + UnioIO
> - UnixIO + UnixIO
> - file descriptors passed from invoker

OK, I'm trying to remember back; I'll admit to not having
liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
side require a lot of code to add a qemu interface that wasn't
ioctl?
Doesn't using the kernel driver give you the benefit of both worlds,
i.e. the non-control side in QEMU is unchanged?

Dave

>   Stefan
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 17:54                                 ` Dr. David Alan Gilbert
@ 2016-06-16 18:43                                   ` Stefan Berger
  2016-06-16 19:24                                     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-16 18:43 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>>>> <snip>
>>>>>
>>>>>>> So what was the multi-instance vTPM proxy driver patch set about?
>>>>>> That's for containers.
>>>>> Why have the two mechanisms? Can you explain how the multi-instance
>>>>> proxy works; my brief reading when I saw your patch series seemed
>>>>> to suggest it could be used instead of CUSE for the non-container case.
>>>> The multi-instance vtpm proxy driver basically works through usage of an
>>>> ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
>>>> The front-end is a new /dev/tpm%d device that then can be moved into the
>>>> container (mknod + device cgroup setup). The backend is an anonymous file
>>>> descriptor that is to be passed to a TPM emulator for reading TPM requests
>>>> coming in from that /dev/tpm%d and returning responses to. Since it is
>>>> implemented as a kernel driver, we can hook it into the Linux Integrity
>>>> Measurement Architecture (IMA) and have it be used by IMA in place of a
>>>> hardware TPM driver. There's ongoing work in the area of namespacing support
>>>> for IMA to have an independent IMA instance per container so that this can
>>>> be used.
>>>>
>>>> A TPM does not only have a data channel (/dev/tpm%d) but also a control
>>>> channel, which is primarily implemented in its hardware interface and is
>>>> typically not fully accessible to user space. The vtpm proxy driver _only_
>>>> supports the data channel through which it basically relays TPM commands and
>>>> responses from user space to the TPM emulator. The control channel is
>>>> provided by the software emulator through an additional TCP or UnixIO socket
>>>> or in case of CUSE through ioctls. The control channel allows to reset the
>>>> TPM when the container/VM is being reset or set the locality of a command or
>>>> retrieve the state of the vTPM (for suspend) and set the state of the vTPM
>>>> (for resume) among several other things. The commands for the control
>>>> channel are defined here:
>>>>
>>>> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>>>>
>>>> For a container we would require that its management stack initializes and
>>>> resets the vTPM when the container is rebooted. (These are typically
>>>> operations that are done through pulses on the motherboard.)
>>>>
>>>> In case of QEMU we would need to have more access to the control channel,
>>>> which includes initialization and reset of the vTPM, getting and setting its
>>>> state for suspend/resume/migration, setting the locality of commands, etc.,
>>>> so that all low-level functionality is accessible to the emulator (QEMU).
>>>> The proxy driver does not help with this but we should use the swtpm
>>>> implementation that either has that CUSE interface with control channel
>>>> (through ioctls) or provides UnixIO and TCP sockets for the control channel.
>>> OK, that makes sense; does the control interface need to be handled by QEMU
>>> or by libvirt or both?
>> The control interface needs to be handled primarily by QEMU.
>>
>> In case of the libvirt implementation I am running an external program
>> swtpm_ioctl that uses the control channel to gracefully shut down any
>> existing running TPM emulator whose device name happens to have the same
>> name as the device of the TPM emulator that is to be created. So it cleans
>> up before starting a new TPM emulator just to make sure that that new TPM
>> instance can be started. Detail...
>>
>>> Either way, I think you're saying that with your kernel interface + a UnixIO
>>> socket you can avoid the CUSE stuff?
>> So in case of QEMU you don't need that new kernel device driver -- it's
>> primarily meant for containers. For QEMU one would start the TPM emulator
>> and make sure that QEMU has access to the data and control channels, which
>> are now offered as
>>
>> - CUSE interface with ioctl
>> - TCP + TCP
>> - UnixIO + TCP
>> - TCP + UnixIO
>> - UnixIO + UnixIO
>> - file descriptors passed from invoker
> OK, I'm trying to remember back; I'll admit to not having
> liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
> side require a lot of code to add a qemu interface that wasn't
> ioctl?

Adding these additional interfaces to the TPM was a bigger effort, yes.

> Doesn't using the kernel driver give you the benefit of both worlds,
> i.e. the non-control side in QEMU is unchanged.

Yes. I am not sure what you are asking, though. A control channel is 
necessary no matter what. The kernel driver talks to /dev/vtpm-<VM uuid> 
via a file descriptor and uses commands sent through ioctl for the 
control channel. Whether QEMU now uses an fd that is a UnixIO or TCP 
socket to send the commands to the TPM or an fd that uses CUSE, doesn't 
matter much on the side of QEMU. The control channel may be a bit 
different when using ioctl versus an fd (for UnixIO or TCP). I 
am not sure why we would send commands through that vTPM proxy driver in 
case of QEMU rather than talking to the TPM emulator directly.

   Stefan

>
> Dave
>
>>    Stefan
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 18:43                                   ` Stefan Berger
@ 2016-06-16 19:24                                     ` Dr. David Alan Gilbert
  2016-06-16 21:28                                       ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2016-06-16 19:24 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C, berrange

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > > <snip>
> > > > > > 
> > > > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > > > That's for containers.
> > > > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > > to suggest it could be used instead of CUSE for the non-container case.
> > > > > The multi-instance vtpm proxy driver basically works through usage of an
> > > > > ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
> > > > > The front-end is a new /dev/tpm%d device that then can be moved into the
> > > > > container (mknod + device cgroup setup). The backend is an anonymous file
> > > > > descriptor that is to be passed to a TPM emulator for reading TPM requests
> > > > > coming in from that /dev/tpm%d and returning responses to. Since it is
> > > > > implemented as a kernel driver, we can hook it into the Linux Integrity
> > > > > Measurement Architecture (IMA) and have it be used by IMA in place of a
> > > > > hardware TPM driver. There's ongoing work in the area of namespacing support
> > > > > for IMA to have an independent IMA instance per container so that this can
> > > > > be used.
> > > > > 
> > > > > A TPM does not only have a data channel (/dev/tpm%d) but also a control
> > > > > channel, which is primarily implemented in its hardware interface and is
> > > > > typically not fully accessible to user space. The vtpm proxy driver _only_
> > > > > supports the data channel through which it basically relays TPM commands and
> > > > > responses from user space to the TPM emulator. The control channel is
> > > > > provided by the software emulator through an additional TCP or UnixIO socket
> > > > > or in case of CUSE through ioctls. The control channel allows to reset the
> > > > > TPM when the container/VM is being reset or set the locality of a command or
> > > > > retrieve the state of the vTPM (for suspend) and set the state of the vTPM
> > > > > (for resume) among several other things. The commands for the control
> > > > > channel are defined here:
> > > > > 
> > > > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > > > 
> > > > > For a container we would require that its management stack initializes and
> > > > > resets the vTPM when the container is rebooted. (These are typically
> > > > > operations that are done through pulses on the motherboard.)
> > > > > 
> > > > > In case of QEMU we would need to have more access to the control channel,
> > > > > which includes initialization and reset of the vTPM, getting and setting its
> > > > > state for suspend/resume/migration, setting the locality of commands, etc.,
> > > > > so that all low-level functionality is accessible to the emulator (QEMU).
> > > > > The proxy driver does not help with this but we should use the swtpm
> > > > > implementation that either has that CUSE interface with control channel
> > > > > (through ioctls) or provides UnixIO and TCP sockets for the control channel.
> > > > OK, that makes sense; does the control interface need to be handled by QEMU
> > > > or by libvirt or both?
> > > The control interface needs to be handled primarily by QEMU.
> > > 
> > > In case of the libvirt implementation I am running an external program
> > > swtpm_ioctl that uses the control channel to gracefully shut down any
> > > existing running TPM emulator whose device name happens to have the same
> > > name as the device of the TPM emulator that is to be created. So it cleans
> > > up before starting a new TPM emulator just to make sure that that new TPM
> > > instance can be started. Detail...
> > > 
> > > > Either way, I think you're saying that with your kernel interface + a UnixIO
> > > > socket you can avoid the CUSE stuff?
> > > So in case of QEMU you don't need that new kernel device driver -- it's
> > > primarily meant for containers. For QEMU one would start the TPM emulator
> > > and make sure that QEMU has access to the data and control channels, which
> > > are now offered as
> > > 
> > > - CUSE interface with ioctl
> > > - TCP + TCP
> > > - UnixIO + TCP
> > > - TCP + UnixIO
> > > - UnixIO + UnixIO
> > > - file descriptors passed from invoker
> > OK, I'm trying to remember back; I'll admit to not having
> > liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
> > side require a lot of code to add a qemu interface that wasn't
> > ioctl?
> 
> Adding these additional interfaces to the TPM was a bigger effort, yes.

Right, so that code isn't in upstream qemu, is it?

> > Doesn't using the kernel driver give you the benefit of both worlds,
> > i.e. the non-control side in QEMU is unchanged.
> 
> Yes. I am not sure what you are asking, though. A control channel is
> necessary no matter what. The kernel driver talks to /dev/vtpm-<VM uuid> via
> a file descriptor and uses commands sent through ioctl for the control
> channel. Whether QEMU now uses an fd that is a UnixIO or TCP socket to send
> the commands to the TPM or an fd that uses CUSE, doesn't matter much on the
> side of QEMU. The control channel may be a bit different when using ioctl
> versus an fd (for UnixIO or TCP). I am not sure why we would send
> commands through that vTPM proxy driver in case of QEMU rather than talking
> to the TPM emulator directly.

Right, so what I'm thinking is:
   a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff
      no/little code is needed to be added to qemu upstream for that
   b) Then you talk to the control side via an fd/socket
      you need to add your existing code for that.

So that doesn't depend on CUSE, it doesn't depend on your particular
vTPM implementation (except for the control socket data, but then
hopefully that's pretty abstract); all good?
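To make the split concrete: the data channel in a) is nothing more than framed TPM command/response buffers written to and read from a file descriptor, which is why QEMU needs so little new code for it. A minimal sketch (the tag and command-code constants come from the TPM 2.0 spec; the device path is illustrative):

```python
import struct

TPM_ST_NO_SESSIONS = 0x8001   # TPM 2.0 tag for a session-less command
TPM_CC_GetRandom   = 0x017B   # TPM 2.0 command code for TPM2_GetRandom

def tpm2_get_random_cmd(num_bytes: int) -> bytes:
    """Build a TPM2_GetRandom request: tag(2) | totalSize(4) | commandCode(4) | bytesRequested(2)."""
    body = struct.pack(">H", num_bytes)
    size = 2 + 4 + 4 + len(body)
    return struct.pack(">HII", TPM_ST_NO_SESSIONS, size, TPM_CC_GetRandom) + body

cmd = tpm2_get_random_cmd(16)
print(cmd.hex())  # 80010000000c0000017b0010

# Against a real device node the exchange is plain write/read
# (path is hypothetical):
#   with open("/dev/vtpm0", "r+b", buffering=0) as tpm:
#       tpm.write(cmd)
#       resp = tpm.read(4096)   # tag | size | responseCode | randomBytes
```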

Dave

> 
>   Stefan
> 
> > 
> > Dave
> > 
> > >    Stefan
> > > 
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 19:24                                     ` Dr. David Alan Gilbert
@ 2016-06-16 21:28                                       ` Stefan Berger
  2017-02-28 18:31                                         ` Marc-André Lureau
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2016-06-16 21:28 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C, berrange

On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>>>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
>>>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>>>>>> <snip>
>>>>>>>
>>>>>>>>> So what was the multi-instance vTPM proxy driver patch set about?
>>>>>>>> That's for containers.
>>>>>>> Why have the two mechanisms? Can you explain how the multi-instance
>>>>>>> proxy works; my brief reading when I saw your patch series seemed
>>>>>>> to suggest it could be used instead of CUSE for the non-container case.
>>>>>> The multi-instance vtpm proxy driver basically works through usage of an
>>>>>> ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
>>>>>> The front-end is a new /dev/tpm%d device that then can be moved into the
>>>>>> container (mknod + device cgroup setup). The backend is an anonymous file
>>>>>> descriptor that is to be passed to a TPM emulator for reading TPM requests
>>>>>> coming in from that /dev/tpm%d and returning responses to. Since it is
>>>>>> implemented as a kernel driver, we can hook it into the Linux Integrity
>>>>>> Measurement Architecture (IMA) and have it be used by IMA in place of a
>>>>>> hardware TPM driver. There's ongoing work in the area of namespacing support
>>>>>> for IMA to have an independent IMA instance per container so that this can
>>>>>> be used.
>>>>>>
>>>>>> A TPM does not only have a data channel (/dev/tpm%d) but also a control
>>>>>> channel, which is primarily implemented in its hardware interface and is
>>>>>> typically not fully accessible to user space. The vtpm proxy driver _only_
>>>>>> supports the data channel through which it basically relays TPM commands and
>>>>>> responses from user space to the TPM emulator. The control channel is
>>>>>> provided by the software emulator through an additional TCP or UnixIO socket
>>>>>> or in case of CUSE through ioctls. The control channel allows to reset the
>>>>>> TPM when the container/VM is being reset or set the locality of a command or
>>>>>> retrieve the state of the vTPM (for suspend) and set the state of the vTPM
>>>>>> (for resume) among several other things. The commands for the control
>>>>>> channel are defined here:
>>>>>>
>>>>>> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>>>>>>
>>>>>> For a container we would require that its management stack initializes and
>>>>>> resets the vTPM when the container is rebooted. (These are typically
>>>>>> operations that are done through pulses on the motherboard.)
>>>>>>
>>>>>> In case of QEMU we would need to have more access to the control channel,
>>>>>> which includes initialization and reset of the vTPM, getting and setting its
>>>>>> state for suspend/resume/migration, setting the locality of commands, etc.,
>>>>>> so that all low-level functionality is accessible to the emulator (QEMU).
>>>>>> The proxy driver does not help with this but we should use the swtpm
>>>>>> implementation that either has that CUSE interface with control channel
>>>>>> (through ioctls) or provides UnixIO and TCP sockets for the control channel.
>>>>> OK, that makes sense; does the control interface need to be handled by QEMU
>>>>> or by libvirt or both?
>>>> The control interface needs to be handled primarily by QEMU.
>>>>
>>>> In case of the libvirt implementation I am running an external program
>>>> swtpm_ioctl that uses the control channel to gracefully shut down any
>>>> existing running TPM emulator whose device name happens to have the same
>>>> name as the device of the TPM emulator that is to be created. So it cleans
>>>> up before starting a new TPM emulator just to make sure that that new TPM
>>>> instance can be started. Detail...
>>>>
>>>>> Either way, I think you're saying that with your kernel interface + a UnixIO
>>>>> socket you can avoid the CUSE stuff?
>>>> So in case of QEMU you don't need that new kernel device driver -- it's
>>>> primarily meant for containers. For QEMU one would start the TPM emulator
>>>> and make sure that QEMU has access to the data and control channels, which
>>>> are now offered as
>>>>
>>>> - CUSE interface with ioctl
>>>> - TCP + TCP
>>>> - UnixIO + TCP
> >>>> - TCP + UnixIO
>>>> - UnixIO + UnixIO
>>>> - file descriptors passed from invoker
>>> OK, I'm trying to remember back; I'll admit to not having
>>> liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
>>> side require a lot of code to add a qemu interface that wasn't
>>> ioctl?
>> Adding these additional interfaces to the TPM was a bigger effort, yes.
> Right, so that code isn't in upstream qemu is it?

I was talking about the TPM emulator side that has been extended like 
this, not QEMU.

>
>>> Doesn't using the kernel driver give you the benefit of both worlds,
>>> i.e. the non-control side in QEMU is unchanged.
>> Yes. I am not sure what you are asking, though. A control channel is
>> necessary no matter what. The kernel driver talks to /dev/vtpm-<VM uuid> via
>> a file descriptor and uses commands sent through ioctl for the control
>> channel. Whether QEMU now uses an fd that is a UnixIO or TCP socket to send
>> the commands to the TPM or an fd that uses CUSE, doesn't matter much on the
>> side of QEMU. The control channel may be a bit different when using ioctl
>> versus an fd (for UnixIO or TCP). I am not sure why we would send
>> commands through that vTPM proxy driver in case of QEMU rather than talking
>> to the TPM emulator directly.
> Right, so what I'm thinking is:
>     a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff
>        no/little code is needed to be added to qemu upstream for that

If we talk to /dev/vtpm-whatever, then in my book we would talk to a 
CUSE TPM device. We have compatibility for that via fd passing from libvirt.

>     b) Then you talk to the control side via an fd/socket
>        you need to add your existing code for that.

Not sure what /dev/vtpm-whatever is. If you mean the vtpm proxy driver 
by it then I don't understand why we would need that dependency along 
with the complication of how the setup for this particular device needs 
to be done (run ioctl on /dev/vtpmx to get a front end device and 
backend device file descriptor which then has to be passed to the swtpm 
to read from and write to).

>
> So that doesn't depend on CUSE, it doesn't depend on your particular

If it doesn't depend on CUSE, it depends on a rather novel device driver 
that doesn't need to be used in the QEMU case.

> vTPM implementation (except for the control socket data, but then
> hopefully that's pretty abstract); all good?
Not sure I followed you above.

    Stefan

>
> Dave
>
>>    Stefan
>>
>>> Dave
>>>
>>>>     Stefan
>>>>
>>> --
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16 21:28                                       ` Stefan Berger
@ 2017-02-28 18:31                                         ` Marc-André Lureau
  2017-03-01 12:32                                           ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Marc-André Lureau @ 2017-02-28 18:31 UTC (permalink / raw)
  To: Stefan Berger, Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

Hi

On Fri, Jun 17, 2016 at 1:29 AM Stefan Berger <stefanb@linux.vnet.ibm.com>
wrote:

> On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> >> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> >>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> >>>> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> >>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> >>>>>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> >>>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> >>>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> >>>>>>> <snip>
> >>>>>>>
> >>>>>>>>> So what was the multi-instance vTPM proxy driver patch set about?
> >>>>>>>> That's for containers.
> >>>>>>> Why have the two mechanisms? Can you explain how the multi-instance
> >>>>>>> proxy works; my brief reading when I saw your patch series seemed
> >>>>>>> to suggest it could be used instead of CUSE for the non-container
> case.
> >>>>>> The multi-instance vtpm proxy driver basically works through usage
> of an
> >>>>>> ioctl() on /dev/vtpmx that is used to spawn a new front- and
> backend pair.
> >>>>>> The front-end is a new /dev/tpm%d device that then can be moved
> into the
> >>>>>> container (mknod + device cgroup setup). The backend is an
> anonymous file
> >>>>>> descriptor that is to be passed to a TPM emulator for reading TPM
> requests
> >>>>>> coming in from that /dev/tpm%d and returning responses to. Since it
> is
> >>>>>> implemented as a kernel driver, we can hook it into the Linux
> Integrity
> >>>>>> Measurement Architecture (IMA) and have it be used by IMA in place
> of a
> >>>>>> hardware TPM driver. There's ongoing work in the area of
> namespacing support
> >>>>>> for IMA to have an independent IMA instance per container so that
> this can
> >>>>>> be used.
> >>>>>>
> >>>>>> A TPM does not only have a data channel (/dev/tpm%d) but also a
> control
> >>>>>> channel, which is primarily implemented in its hardware interface
> and is
> >>>>>> typically not fully accessible to user space. The vtpm proxy driver
> _only_
> >>>>>> supports the data channel through which it basically relays TPM
> commands and
> >>>>>> responses from user space to the TPM emulator. The control channel
> is
> >>>>>> provided by the software emulator through an additional TCP or
> UnixIO socket
> >>>>>> or in case of CUSE through ioctls. The control channel allows to
> reset the
> >>>>>> TPM when the container/VM is being reset or set the locality of a
> command or
> >>>>>> retrieve the state of the vTPM (for suspend) and set the state of
> the vTPM
> >>>>>> (for resume) among several other things. The commands for the
> control
> >>>>>> channel are defined here:
> >>>>>>
> >>>>>>
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> >>>>>>
> >>>>>> For a container we would require that its management stack
> initializes and
> >>>>>> resets the vTPM when the container is rebooted. (These are typically
> >>>>>> operations that are done through pulses on the motherboard.)
> >>>>>>
> >>>>>> In case of QEMU we would need to have more access to the control
> channel,
> >>>>>> which includes initialization and reset of the vTPM, getting and
> setting its
> >>>>>> state for suspend/resume/migration, setting the locality of
> commands, etc.,
> >>>>>> so that all low-level functionality is accessible to the emulator
> (QEMU).
> >>>>>> The proxy driver does not help with this but we should use the swtpm
> >>>>>> implementation that either has that CUSE interface with control
> channel
> >>>>>> (through ioctls) or provides UnixIO and TCP sockets for the control
> channel.
> >>>>> OK, that makes sense; does the control interface need to be handled
> by QEMU
> >>>>> or by libvirt or both?
> >>>> The control interface needs to be handled primarily by QEMU.
> >>>>
> >>>> In case of the libvirt implementation I am running an external program
> >>>> swtpm_ioctl that uses the control channel to gracefully shut down any
> >>>> existing running TPM emulator whose device name happens to have the
> same
> >>>> name as the device of the TPM emulator that is to be created. So it
> cleans
> >>>> up before starting a new TPM emulator just to make sure that that new
> TPM
> >>>> instance can be started. Detail...
> >>>>
> >>>>> Either way, I think you're saying that with your kernel interface +
> a UnixIO
> >>>>> socket you can avoid the CUSE stuff?
> >>>> So in case of QEMU you don't need that new kernel device driver --
> it's
> >>>> primarily meant for containers. For QEMU one would start the TPM
> emulator
> >>>> and make sure that QEMU has access to the data and control channels,
> which
> >>>> are now offered as
> >>>>
> >>>> - CUSE interface with ioctl
> >>>> - TCP + TCP
> >>>> - UnixIO + TCP
> >>>> - TCP + UnixIO
> >>>> - UnixIO + UnixIO
> >>>> - file descriptors passed from invoker
> >>> OK, I'm trying to remember back; I'll admit to not having
> >>> liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
> >>> side require a lot of code to add a qemu interface that wasn't
> >>> ioctl?
> >> Adding these additional interfaces to the TPM was a bigger effort, yes.
> > Right, so that code isn't in upstream qemu is it?
>
> I was talking about the TPM emulator side that has been extended like
> this, not QEMU.
>
>
Out of curiosity, did you do it (adding socket/fd channel) for qemu or for
other reasons?


> >
> >>> Doesn't using the kernel driver give you the benefit of both worlds,
> >>> i.e. the non-control side in QEMU is unchanged.
> >> Yes. I am not sure what you are asking, though. A control channel is
> >> necessary no matter what. The kernel driver talks to /dev/vtpm-<VM
> uuid> via
> >> a file descriptor and uses commands sent through ioctl for the control
> >> channel. Whether QEMU now uses an fd that is a UnixIO or TCP socket to
> send
> >> the commands to the TPM or an fd that uses CUSE, doesn't matter much on
> the
> >> side of QEMU. The control channel may be a bit different when using
> ioctl
> >> versus an fd (for UnixIO or TCP). I am not sure why we
> send
> >> commands through that vTPM proxy driver in case of QEMU rather than
> talking
> >> to the TPM emulator directly.
> > Right, so what I'm thinking is:
> >     a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff
> >        no/little code is needed to be added to qemu upstream for that
>
> If we talk to /dev/vtpm-whatever, then in my book we would talk to a
> CUSE TPM device. We have compatibility for that via fd passing from
> libvirt.
>

/dev/vtpmx-created devices are not CUSE devices, are they?

Could you explain why containers use the TPM proxy driver to create a sw TPM,
and not CUSE? Perhaps that will clear up some aspects. I imagine that the
kernel can provide some data from the TPM proxy driver, via /sys, or even
use some functions (random etc)? A CUSE driver is opaque to the host
kernel, right?

I understand a simulated hw TPM needs the additional control channel (the
ioctl stuff), and so it can't use the TPM proxy, as it wouldn't give you
that extra channel. But containers could eventually use CUSE created
devices (if they didn't need the extra /sys or other interface), right?


> >     b) Then you talk to the control side via an fd/socket
> >        you need to add your existing code for that.
>
> Not sure what /dev/vtpm-whatever is. If you mean the vtpm proxy driver
> by it then I don't understand why we would need that dependency along
> with the complication of how the setup for this particular device needs
> to be done (run ioctl on /dev/vtpmx to get a front end device and
> backend device file descriptor which then has to be passed to the swtpm
> to read from and write to).
>

I think we would like to see it as simple as containers, but they require
different levels of operation. If all of the emulation were in qemu there
would be no need for a control channel, so the control interface depends on
what qemu and the tpm emulation process do. None of it is required for swtpm &
containers, but hw emulation needs more.

It looks like the TPM kernel interface is only data read/write, and the CUSE
IOCTLs are only for control IPC. If so then I think it's simpler, and more
portable, to go with a pure socket/fd based solution, since CUSE in this
qemu case doesn't bring much benefit afaict.

Btw, is there a need to synchronize data & control channel? (asking because
it's not obvious when you say you can have both channels using different
transport)
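To illustrate what "control IPC over a socket" amounts to: on swtpm's socket control channel each request is a big-endian 32-bit command code followed by a command-specific payload, and each response starts with a big-endian 32-bit result code. A sketch of the framing follows; the command numbers and the single-byte locality payload are assumptions for illustration, with the authoritative definitions in the tpm_ioctl.h header linked earlier in the thread.

```python
import struct

# Illustrative control-channel command numbers -- check swtpm's
# include/swtpm/tpm_ioctl.h for the authoritative values.
CMD_GET_CAPABILITY = 1
CMD_INIT           = 2
CMD_SET_LOCALITY   = 5

def ctrl_msg(cmd: int, payload: bytes = b"") -> bytes:
    """Frame one control-channel request: be32 command | payload."""
    return struct.pack(">I", cmd) + payload

def parse_result(resp: bytes) -> int:
    """Every control-channel response begins with a be32 result code (0 = success)."""
    (res,) = struct.unpack_from(">I", resp)
    return res

# e.g. ask the vTPM to switch to locality 0 (payload assumed to be one byte)
msg = ctrl_msg(CMD_SET_LOCALITY, struct.pack("B", 0))
print(msg.hex())  # 0000000500
```

Since each request is a self-contained frame and responses carry their own result codes, the control channel can ride on a different transport than the data channel without any shared framing state.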


> >
> > So that doesn't depend on CUSE, it doesn't depend on your particular
>
> If it doesn't depend on CUSE, it depends on a rather novel device driver
> that doesn't need to be used in the QEMU case.
>

> > vTPM implementation (except for the control socket data, but then
> > hopefully that's pretty abstract); all good?
> Not sure I followed you above.
>
>
Hopefully I didn't add more confusion :)
Thanks


>     Stefan
>
> >
> > Dave
> >
> >>    Stefan
> >>
> >>> Dave
> >>>
> >>>>     Stefan
> >>>>
> >>> --
> >>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>>
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
>
>
> --
Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2016-06-16  8:25                           ` Daniel P. Berrange
  2016-06-16 15:20                             ` Stefan Berger
@ 2017-03-01 12:25                             ` Stefan Berger
  2017-03-01 12:54                               ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 12:25 UTC (permalink / raw)
  To: Daniel P. Berrange, Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>> <snip>
>>
>>>> So what was the multi-instance vTPM proxy driver patch set about?
>>> That's for containers.
>> Why have the two mechanisms? Can you explain how the multi-instance
>> proxy works; my brief reading when I saw your patch series seemed
>> to suggest it could be used instead of CUSE for the non-container case.
> One of the key things that was/is not appealing about this CUSE approach
> is that it basically invents a new ioctl() mechanism for talking to
> a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
> to have any changes at all - its existing driver for talking to TPM

We still need the control channel with the vTPM to reset it upon VM 
reset, for getting and setting the state of the vTPM upon 
snapshot/suspend/resume, changing locality, etc.
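For concreteness, a non-CUSE setup along these lines could be wired up as follows. This is a sketch assuming swtpm's `socket` interface and a QEMU `emulator` TPM backend; the option names are assumptions, not taken from this thread, and should be checked against current documentation.

```shell
# Sketch only: start swtpm with a UnixIO control socket.
# The data-channel fd can be passed over the control channel itself,
# so a single socket path is all QEMU needs to know about.
mkdir -p /tmp/mytpm1
swtpm socket --tpmstate dir=/tmp/mytpm1 \
    --ctrl type=unixio,path=/tmp/mytpm1/swtpm-sock &

qemu-system-x86_64 ... \
    -chardev socket,id=chrtpm,path=/tmp/mytpm1/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0
```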

    Stefan

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-02-28 18:31                                         ` Marc-André Lureau
@ 2017-03-01 12:32                                           ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 12:32 UTC (permalink / raw)
  To: Marc-André Lureau, Dr. David Alan Gilbert
  Cc: Stefan Berger, mst, qemu-devel, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On 02/28/2017 01:31 PM, Marc-André Lureau wrote:
> Hi
>
> On Fri, Jun 17, 2016 at 1:29 AM Stefan Berger 
> <stefanb@linux.vnet.ibm.com <mailto:stefanb@linux.vnet.ibm.com>> wrote:
>
>     On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
>     > * Stefan Berger (stefanb@linux.vnet.ibm.com
>     <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>     >> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
>     >>> * Stefan Berger (stefanb@linux.vnet.ibm.com
>     <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>     >>>> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
>     >>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com
>     <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>     >>>>>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
>     >>>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com
>     <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>     >>>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>     >>>>>>> <snip>
>     >>>>>>>
>     >>>>>>>>> So what was the multi-instance vTPM proxy driver patch
>     set about?
>     >>>>>>>> That's for containers.
>     >>>>>>> Why have the two mechanisms? Can you explain how the
>     multi-instance
>     >>>>>>> proxy works; my brief reading when I saw your patch series
>     seemed
>     >>>>>>> to suggest it could be used instead of CUSE for the
>     non-container case.
>     >>>>>> The multi-instance vtpm proxy driver basically works
>     through usage of an
>     >>>>>> ioctl() on /dev/vtpmx that is used to spawn a new front-
>     and backend pair.
>     >>>>>> The front-end is a new /dev/tpm%d device that then can be
>     moved into the
>     >>>>>> container (mknod + device cgroup setup). The backend is an
>     anonymous file
>     >>>>>> descriptor that is to be passed to a TPM emulator for
>     reading TPM requests
>     >>>>>> coming in from that /dev/tpm%d and returning responses to.
>     Since it is
>     >>>>>> implemented as a kernel driver, we can hook it into the
>     Linux Integrity
>     >>>>>> Measurement Architecture (IMA) and have it be used by IMA
>     in place of a
>     >>>>>> hardware TPM driver. There's ongoing work in the area of
>     namespacing support
>     >>>>>> for IMA to have an independent IMA instance per container
>     so that this can
>     >>>>>> be used.
>     >>>>>>
>     >>>>>> A TPM does not only have a data channel (/dev/tpm%d) but
>     also a control
>     >>>>>> channel, which is primarily implemented in its hardware
>     interface and is
>     >>>>>> typically not fully accessible to user space. The vtpm
>     proxy driver _only_
>     >>>>>> supports the data channel through which it basically relays
>     TPM commands and
>     >>>>>> responses from user space to the TPM emulator. The control
>     channel is
>     >>>>>> provided by the software emulator through an additional TCP
>     or UnixIO socket
>     >>>>>> or in case of CUSE through ioctls. The control channel
>     allows to reset the
>     >>>>>> TPM when the container/VM is being reset or set the
>     locality of a command or
>     >>>>>> retrieve the state of the vTPM (for suspend) and set the
>     state of the vTPM
>     >>>>>> (for resume) among several other things. The commands for
>     the control
>     >>>>>> channel are defined here:
>     >>>>>>
>     >>>>>>
>     https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>     >>>>>>
>     >>>>>> For a container we would require that its management stack
>     initializes and
>     >>>>>> resets the vTPM when the container is rebooted. (These are
>     typically
>     >>>>>> operations that are done through pulses on the motherboard.)
>     >>>>>>
>     >>>>>> In case of QEMU we would need to have more access to the
>     control channel,
>     >>>>>> which includes initialization and reset of the vTPM,
>     getting and setting its
>     >>>>>> state for suspend/resume/migration, setting the locality of
>     commands, etc.,
>     >>>>>> so that all low-level functionality is accessible to the
>     emulator (QEMU).
>     >>>>>> The proxy driver does not help with this but we should use
>     the swtpm
>     >>>>>> implementation that either has that CUSE interface with
>     control channel
>     >>>>>> (through ioctls) or provides UnixIO and TCP sockets for the
>     control channel.
>     >>>>> OK, that makes sense; does the control interface need to be
>     handled by QEMU
>     >>>>> or by libvirt or both?
>     >>>> The control interface needs to be handled primarily by QEMU.
>     >>>>
>     >>>> In case of the libvirt implementation I am running an
>     external program
>     >>>> swtpm_ioctl that uses the control channel to gracefully shut
>     down any
>     >>>> existing running TPM emulator whose device name happens to
>     have the same
>     >>>> name as the device of the TPM emulator that is to be created.
>     So it cleans
>     >>>> up before starting a new TPM emulator just to make sure that
>     that new TPM
>     >>>> instance can be started. Detail...
>     >>>>
>     >>>>> Either way, I think you're saying that with your kernel
>     interface + a UnixIO
>     >>>>> socket you can avoid the CUSE stuff?
>     >>>> So in case of QEMU you don't need that new kernel device
>     driver -- it's
>     >>>> primarily meant for containers. For QEMU one would start the
>     TPM emulator
>     >>>> and make sure that QEMU has access to the data and control
>     channels, which
>     >>>> are now offered as
>     >>>>
>     >>>> - CUSE interface with ioctl
>     >>>> - TCP + TCP
>     >>>> - UnixIO + TCP
>     >>>> - TCP + UnixIO
>     >>>> - UnixIO + UnixIO
>     >>>> - file descriptors passed from invoker
>     >>> OK, I'm trying to remember back; I'll admit to not having
>     >>> liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
>     >>> side require a lot of code to add a qemu interface that wasn't
>     >>> ioctl?
>     >> Adding these additional interfaces to the TPM was a bigger
>     effort, yes.
>     > Right, so that code isn't in upstream qemu is it?
>
>     I was talking about the TPM emulator side that has been extended like
>     this, not QEMU.
>
>
> Out of curiosity, did you do it (adding socket/fd channel) for qemu or 
> for other reasons?
>
>     >
>     >>> Doesn't using the kernel driver give you the benefit of both
>     worlds,
>     >>> i.e. the non-control side in QEMU is unchanged.
>     >> Yes. I am not sure what you are asking, though. A control
>     channel is
>     >> necessary no matter what. The kernel driver talks to
>     /dev/vtpm-<VM uuid> via
>     >> a file descriptor and uses commands sent through ioctl for the
>     control
>     >> channel. Whether QEMU now uses an fd that is a UnixIO or TCP
>     socket to send
>     >> the commands to the TPM or an fd that uses CUSE, doesn't matter
>     much on the
>     >> side of QEMU. The control channel may be a bit different when
>     using ioctl
>     >> versus an fd (for UnixIO or TCP). I am not sure why we
>     would send
>     >> commands through that vTPM proxy driver in case of QEMU rather
>     than talking
>     >> to the TPM emulator directly.
>     > Right, so what I'm thinking is:
>     >     a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff
>     >        no/little code is needed to be added to qemu upstream for
>     that
>
>     If we talk to /dev/vtpm-whatever, then in my book we would talk to a
>     CUSE TPM device. We have compatibility for that via fd passing
>     from libvirt.
>
>
>  /dev/vtpmx created devices are not CUSE devices, are they?
>
> Could you explain why containers use the TPM proxy driver to create sw 
> TPM, and not CUSE? Perhaps that will clear some aspects.. I imagine 
> that the kernel can provide some data from the TPM proxy driver, via 
> /sys, or even use some functions (random etc)? A CUSE driver is opaque 
> to the host kernel, right?

The TPM proxy driver hooks into the existing Linux TPM driver core and 
with that makes the vTPM available to other kernel services, such as trusted 
and encrypted keys, and possibly a namespaced IMA where the container 
would run its own instance of IMA, which can then extend the PCRs of the 
emulated TPM (vTPM). For QEMU it's sufficient to make an emulated TPM 
available.


>
> I understand simulated hw TPM needs the additional control channel 
> (the ioctl stuff), and so they can't use the TPM proxy, as it wouldn't 
> give you that extra channel. But containers could eventually use CUSE 
> created devices (if they didn't need the extra /sys or other 
> interface), right?

For containers I think we would want to make more kernel services 
available to each container and for that we need a driver that hooks 
itself into the core TPM code and makes a 'chip' available.

http://lxr.free-electrons.com/source/drivers/char/tpm/tpm-chip.c#L88


     Stefan

>
>
>     >     b) Then you talk to the control side via an fd/socket
>     >        you need to add your existing code for that.
>
>     Not sure what /dev/vtpm-whatever is. If you mean the vtpm proxy driver
>     by it then I don't understand why we would need that dependency along
>     with the complication of how the setup for this particular device
>     needs
>     to be done (run ioctl on /dev/vtpmx to get a front end device and
>     backend device file descriptor which then has to be passed to the
>     swtpm
>     to read from and write to).
>
>
> I think we would like to see it as simple as containers, but they 
> require different level of operations. If all of emulation would be in 
> qemu there would be no need for control channel, so the control 
> interface depends on what qemu and the tpm emulation process do. None 
> of it is required for swtpm & containers, but hw emulation needs more.
>
> It looks like the TPM kernel interface is only data read/write, and the CUSE 
> IOCTLs are only for control IPC. If so then I think it's simpler, and 
> more portable, to go with a pure socket/fd based solution, since CUSE 
> in this qemu case doesn't bring much benefits afaict.
>
> Btw, is there a need to synchronize data & control channel? (asking 
> because it's not obvious when you say you can have both channels using 
> different transport)
>
>
>     >
>     > So that doesn't depend on CUSE, it doesn't depend on your particular
>
>     If it doesn't depend on CUSE, it depends on a rather novel device
>     driver
>     that doesn't need to be used in the QEMU case.
>
>
>     > vTPM implementation (except for the control socket data, but then
>     > hopefully that's pretty abstract); all good?
>     Not sure I followed you above.
>
>
> Hopefully I didn't add more confusion :)
> Thanks
>
>         Stefan
>
>     >
>     > Dave
>     >
>     >>    Stefan
>     >>
>     >>> Dave
>     >>>
>     >>>>     Stefan
>     >>>>
>     >>> --
>     >>> Dr. David Alan Gilbert / dgilbert@redhat.com
>     <mailto:dgilbert@redhat.com> / Manchester, UK
>     >>>
>     > --
>     > Dr. David Alan Gilbert / dgilbert@redhat.com
>     <mailto:dgilbert@redhat.com> / Manchester, UK
>     >
>
>
> -- 
> Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 12:25                             ` Stefan Berger
@ 2017-03-01 12:54                               ` Daniel P. Berrange
  2017-03-01 13:25                                 ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 12:54 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Dr. David Alan Gilbert, Stefan Berger, mst, qemu-devel, SERBAN,
	CRISTINA, Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
> > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > <snip>
> > > 
> > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > That's for containers.
> > > Why have the two mechanisms? Can you explain how the multi-instance
> > > proxy works; my brief reading when I saw your patch series seemed
> > > to suggest it could be used instead of CUSE for the non-container case.
> > One of the key things that was/is not appealing about this CUSE approach
> > is that it basically invents a new ioctl() mechanism for talking to
> > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
> > to have any changes at all - its existing driver for talking to TPM
> 
> We still need the control channel with the vTPM to reset it upon VM reset,
> for getting and setting the state of the vTPM upon snapshot/suspend/resume,
> changing locality, etc.

You ultimately need the same mechanisms if using in-kernel vTPM with
containers as containers can support snapshot/suspend/resume/etc too.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 12:54                               ` Daniel P. Berrange
@ 2017-03-01 13:25                                 ` Stefan Berger
  2017-03-01 14:17                                   ` Marc-André Lureau
  2017-03-01 15:18                                   ` Daniel P. Berrange
  0 siblings, 2 replies; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 13:25 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: SERBAN, CRISTINA, SHIH, CHING C, Dr. David Alan Gilbert,
	hagen.lauer, mst, qemu-devel, Xu, Quan, silviu.vlasceanu,
	Stefan Berger

"Daniel P. Berrange" <berrange@redhat.com> wrote on 03/01/2017 07:54:14 
AM:

> From: "Daniel P. Berrange" <berrange@redhat.com>
> To: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Stefan Berger/
> Watson/IBM@IBMUS, "mst@redhat.com" <mst@redhat.com>, "qemu-
> devel@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA" 
> <cs1731@att.com>, "Xu, Quan" <quan.xu@intel.com>, 
> "silviu.vlasceanu@gmail.com" <silviu.vlasceanu@gmail.com>, 
> "hagen.lauer@huawei.com" <hagen.lauer@huawei.com>, "SHIH, CHING C" 
> <cs1815@att.com>
> Date: 03/01/2017 08:03 AM
> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
TPM
> 
> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert 
wrote:
> > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > <snip>
> > > > 
> > > > > > So what was the multi-instance vTPM proxy driver patch set 
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the 
multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container 
case.
> > > One of the key things that was/is not appealing about this CUSE 
approach
> > > is that it basically invents a new ioctl() mechanism for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't 
need
> > > to have any changes at all - its existing driver for talking to TPM
> > 
> > We still need the control channel with the vTPM to reset it upon VM 
reset,
> > for getting and setting the state of the vTPM upon 
snapshot/suspend/resume,
> > changing locality, etc.
> 
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is 
essentially the same as the CUSE TPM used for QEMU. It has the same control 
channel through sockets. So on that level we would have support for the 
operations, but it is not integrated with anything that would support 
container migration.

   Stefan


> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    
http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             
http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    
http://search.cpan.org/~danberr/ :|
> 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 13:25                                 ` Stefan Berger
@ 2017-03-01 14:17                                   ` Marc-André Lureau
  2017-03-01 14:50                                     ` Stefan Berger
  2017-03-01 15:18                                   ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Marc-André Lureau @ 2017-03-01 14:17 UTC (permalink / raw)
  To: Stefan Berger, Daniel P. Berrange
  Cc: mst, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	hagen.lauer, Xu, Quan, silviu.vlasceanu, SERBAN, CRISTINA, SHIH,
	CHING C

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stefanb@us.ibm.com> wrote:

> "Daniel P. Berrange" <berrange@redhat.com> wrote on 03/01/2017 07:54:14
> AM:
>
> > From: "Daniel P. Berrange" <berrange@redhat.com>
> > To: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Stefan Berger/
> > Watson/IBM@IBMUS, "mst@redhat.com" <mst@redhat.com>, "qemu-
> > devel@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA"
> > <cs1731@att.com>, "Xu, Quan" <quan.xu@intel.com>,
> > "silviu.vlasceanu@gmail.com" <silviu.vlasceanu@gmail.com>,
> > "hagen.lauer@huawei.com" <hagen.lauer@huawei.com>, "SHIH, CHING C"
> > <cs1815@att.com>
> > Date: 03/01/2017 08:03 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE
> TPM
> >
> > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
> wrote:
> > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > <snip>
> > > > >
> > > > > > > So what was the multi-instance vTPM proxy driver patch set
> about?
> > > > > > That's for containers.
> > > > > Why have the two mechanisms? Can you explain how the
> multi-instance
> > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > to suggest it could be used instead of CUSE for the non-container
> case.
> > > > One of the key things that was/is not appealing about this CUSE
> approach
> > > > is that it basically invents a new ioctl() mechanism for talking to
> > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
> need
> > > > to have any changes at all - its existing driver for talking to TPM
> > >
> > > We still need the control channel with the vTPM to reset it upon VM
> reset,
> > > for getting and setting the state of the vTPM upon
> snapshot/suspend/resume,
> > > changing locality, etc.
> >
> > You ultimately need the same mechanisms if using in-kernel vTPM with
> > containers as containers can support snapshot/suspend/resume/etc too.
>
> The vTPM running on the backend side of the vTPM proxy driver is
> essentially the same as the CUSE TPM used for QEMU. It has the same control
> channel through sockets. So on that level we would have support for the
> operations but not integrated with anything that would support container
> migration.
>
>
Ah that might explain why you added the socket control channel, but there
is no user yet? (or some private product perhaps). Could you tell if
control and data channels need to be synchronized in any way?

Getting back to the original out-of-process design: qemu links with many
libraries already, perhaps a less controversial approach would be to have a
linked in solution before proposing out-of-process? This would be easier to
deal with for management layers etc. This wouldn't be the most robust
solution, but could get us somewhere at least for easier testing and
development.

thanks


-- 
Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 14:17                                   ` Marc-André Lureau
@ 2017-03-01 14:50                                     ` Stefan Berger
  2017-03-01 15:24                                       ` Marc-André Lureau
  2017-03-01 16:22                                       ` Michael S. Tsirkin
  0 siblings, 2 replies; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 14:50 UTC (permalink / raw)
  To: Marc-André Lureau, Stefan Berger, Daniel P. Berrange
  Cc: mst, qemu-devel, Dr. David Alan Gilbert, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 03/01/2017 09:17 AM, Marc-André Lureau wrote:
> Hi
>
> On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stefanb@us.ibm.com 
> <mailto:stefanb@us.ibm.com>> wrote:
>
>     "Daniel P. Berrange" <berrange@redhat.com
>     <mailto:berrange@redhat.com>> wrote on 03/01/2017 07:54:14
>     AM:
>     >
>     > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
>     > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
>     > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
>     wrote:
>     > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com
>     <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>     > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>     > > > > <snip>
>     > > > >
>     > > > > > > So what was the multi-instance vTPM proxy driver patch set
>     about?
>     > > > > > That's for containers.
>     > > > > Why have the two mechanisms? Can you explain how the
>     multi-instance
>     > > > > proxy works; my brief reading when I saw your patch series
>     seemed
>     > > > > to suggest it could be used instead of CUSE for the
>     non-container
>     case.
>     > > > One of the key things that was/is not appealing about this CUSE
>     approach
>     > > > is that it basically invents a new ioctl() mechanism for
>     talking to
>     > > > a TPM chardev. With in-kernel vTPM support, QEMU probably
>     doesn't
>     need
>     > > > to have any changes at all - its existing driver for talking
>     to TPM
>     > >
>     > > We still need the control channel with the vTPM to reset it
>     upon VM
>     reset,
>     > > for getting and setting the state of the vTPM upon
>     snapshot/suspend/resume,
>     > > changing locality, etc.
>     >
>     > You ultimately need the same mechanisms if using in-kernel vTPM with
>     > containers as containers can support snapshot/suspend/resume/etc
>     too.
>
>     The vTPM running on the backend side of the vTPM proxy driver is
>     essentially the same as the CUSE TPM used for QEMU. It has the same
>     control
>     channel through sockets. So on that level we would have support
>     for the
>     operations but not integrated with anything that would support
>     container
>     migration.
>
>
> Ah that might explain why you added the socket control channel, but 
> there is no user yet? (or some private product perhaps). Could you 
> tell if control and data channels need to be synchronized in any way?


In the general case, synchronization would have to happen, yes. So a 
lock that is held while the TPM processes data would have to lock out 
control channel commands that operate on the TPM data. That may be 
missing. With QEMU as the client, not much concurrency would be 
expected, given the way QEMU interacts with it.

A detail: A corner case is live-migration with the TPM emulation being 
busy processing a command, like creation of a key. In that case QEMU 
would keep on running and only start streaming device state to the 
recipient side after the TPM command processing finishes and has 
returned the result. QEMU wouldn't want to get stuck in a lock between 
data and control channel, so would have other means of determining when 
the backend processing is done.
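
The locking described above can be sketched roughly like this (illustrative
Python, not QEMU or swtpm code; all class and method names are invented):

```python
# Hypothetical sketch: one lock serializes data-channel command processing
# against control-channel operations that mutate the same TPM state.
import threading

class VTPMBackend:
    def __init__(self):
        self._state_lock = threading.Lock()
        self.pcrs = [b"\x00" * 20] * 24   # toy PCR bank

    def data_command(self, cmd: bytes) -> bytes:
        # Data channel: runs with the state lock held, possibly for a
        # long time (e.g. key creation).
        with self._state_lock:
            return b"response-to-" + cmd

    def control_reset(self):
        # Control channel: must wait until no data command is in flight
        # before mutating state (e.g. resetting PCRs on VM reset).
        with self._state_lock:
            self.pcrs = [b"\x00" * 20] * 24

backend = VTPMBackend()
resp = backend.data_command(b"TPM_Extend")
backend.control_reset()
```

The live-migration corner case then falls out naturally: QEMU would avoid
blocking on this lock itself and instead wait for the data-channel response
before streaming device state.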

>
> Getting back to the original out-of-process design: qemu links with 
> many libraries already, perhaps a less controversial approach would be 
> to have a linked in solution before proposing out-of-process? This 
> would be easier to deal with for

I had already proposed a linked-in version before I went to the 
out-of-process design. Anthony's concerns back then were that the 
code was not trusted and that a segfault in it could bring down all 
of QEMU. That we have test suites running over it didn't work as an 
argument. Some of the test suites are private, though.

> management layers etc. This wouldn't be the most robust solution, but 
> could get us somewhere at least for easier testing and development.

Hm. In terms of an external process it's basically 'there', so I don't 
relate to the 'easier testing and development.' The various versions 
with QEMU + CUSE TPM driver patches applied are here.

https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm

I have an older version of libvirt that has the necessary patches 
applied to start QEMU with the external TPM. There's also virt-manager 
support.

If CUSE is the wrong interface, then there's a discussion about this 
here. Alternatively, UnixIO could be used for the data and control channels.

https://github.com/stefanberger/swtpm/issues/4

    Stefan


>
> thanks
>
>
> -- 
> Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 13:25                                 ` Stefan Berger
  2017-03-01 14:17                                   ` Marc-André Lureau
@ 2017-03-01 15:18                                   ` Daniel P. Berrange
  2017-03-01 15:40                                     ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 15:18 UTC (permalink / raw)
  To: Stefan Berger
  Cc: SERBAN, CRISTINA, SHIH, CHING C, Dr. David Alan Gilbert,
	hagen.lauer, mst, qemu-devel, Xu, Quan, silviu.vlasceanu,
	Stefan Berger

On Wed, Mar 01, 2017 at 08:25:43AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berrange@redhat.com> wrote on 03/01/2017 07:54:14 
> AM:
> 
> > From: "Daniel P. Berrange" <berrange@redhat.com>
> > To: Stefan Berger <stefanb@linux.vnet.ibm.com>
> > Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Stefan Berger/
> > Watson/IBM@IBMUS, "mst@redhat.com" <mst@redhat.com>, "qemu-
> > devel@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA" 
> > <cs1731@att.com>, "Xu, Quan" <quan.xu@intel.com>, 
> > "silviu.vlasceanu@gmail.com" <silviu.vlasceanu@gmail.com>, 
> > "hagen.lauer@huawei.com" <hagen.lauer@huawei.com>, "SHIH, CHING C" 
> > <cs1815@att.com>
> > Date: 03/01/2017 08:03 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
> TPM
> > 
> > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert 
> wrote:
> > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > <snip>
> > > > > 
> > > > > > > So what was the multi-instance vTPM proxy driver patch set 
> about?
> > > > > > That's for containers.
> > > > > Why have the two mechanisms? Can you explain how the 
> multi-instance
> > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > to suggest it could be used instead of CUSE for the non-container 
> case.
> > > > One of the key things that was/is not appealing about this CUSE 
> approach
> > > > is that it basically invents a new ioctl() mechanism for talking to
> > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't 
> need
> > > > to have any changes at all - its existing driver for talking to TPM
> > > 
> > > We still need the control channel with the vTPM to reset it upon VM 
> reset,
> > > for getting and setting the state of the vTPM upon 
> snapshot/suspend/resume,
> > > changing locality, etc.
> > 
> > You ultimately need the same mechanisms if using in-kernel vTPM with
> > containers as containers can support snapshot/suspend/resume/etc too.
> 
> The vTPM running on the backend side of the vTPM proxy driver is 
> essentially the same as the CUSE TPM used for QEMU. It has the same control
> channel through sockets. So on that level we would have support for the 
> operations but not integrated with anything that would support container 
> migration.

This goes back to the question Dave mentions above. Ignoring the control
channel aspect temporarily, can the CUSE TPM support the exact same ioctl
interface as the existing kernel TPM device? It feels like this should
be possible, and if so, then this virtual TPM feature can be considered to
have two separate pieces.

First enabling basic CUSE TPM device support would not require QEMU changes,
as we could just use the existing tpm-passthrough driver against the CUSE
device, albeit with the limitations around migration, snapshot etc.

Second we could consider the question of supporting a control channel as
a separate topic. IIUC, QEMU essentially needs a way to trigger various
operations in the underlying TPM implementation, when certain lifecycle
operations are performed on the VM. I could see this being done as a
simple network protocol over a UNIX socket. So, you could then add a
new 'chardev' property to the tpm-passthrough device, which gives the
ID of a character device that provides the control channel.

This way QEMU does not need to have any special code to deal with CUSE
directly. QEMU could be used with a real TPM device, a vTPM device or
a CUSE TPM device, with the same driver. With both the vTPM and the
CUSE TPM device, QEMU would have the ability to use a out of band
control channel when migration/snapshot/etc take place.

This cleanly isolates QEMU from the particular design & implementation
that is used by the current swtpm code.
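
A minimal sketch of the kind of "simple network protocol over a UNIX socket"
proposed here might look as follows. The command codes and framing are
invented for illustration; they are not the actual swtpm control protocol:

```python
# Illustrative control-channel framing: 4-byte big-endian command code,
# then a command-specific payload; the peer answers with a result code.
import socket
import struct

CMD_INIT, CMD_SHUTDOWN, CMD_GET_TPMESTABLISHED = 1, 2, 3  # hypothetical codes

def send_ctrl(sock, cmd, payload=b""):
    sock.sendall(struct.pack(">I", cmd) + payload)

def recv_ctrl(sock):
    # Short reads are ignored for brevity; a real client would loop.
    return struct.unpack(">I", sock.recv(4))[0]

# A socketpair stands in for the vTPM's Unix control socket.
qemu_end, vtpm_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
send_ctrl(qemu_end, CMD_INIT)            # QEMU requests a device reset
assert recv_ctrl(vtpm_end) == CMD_INIT   # backend sees the request
vtpm_end.sendall(struct.pack(">I", 0))   # result code 0 = success
result = recv_ctrl(qemu_end)
```

QEMU would trigger such commands from its VM lifecycle hooks (reset,
suspend, resume), independent of the data-channel driver.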

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 14:50                                     ` Stefan Berger
@ 2017-03-01 15:24                                       ` Marc-André Lureau
  2017-03-01 15:58                                         ` Stefan Berger
  2017-03-01 16:22                                       ` Michael S. Tsirkin
  1 sibling, 1 reply; 96+ messages in thread
From: Marc-André Lureau @ 2017-03-01 15:24 UTC (permalink / raw)
  To: Stefan Berger, Stefan Berger, Daniel P. Berrange
  Cc: mst, qemu-devel, Dr. David Alan Gilbert, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

Hi

On Wed, Mar 1, 2017 at 6:50 PM Stefan Berger <stefanb@linux.vnet.ibm.com>
wrote:

On 03/01/2017 09:17 AM, Marc-André Lureau wrote:

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stefanb@us.ibm.com> wrote:

"Daniel P. Berrange" <berrange@redhat.com> wrote on 03/01/2017 07:54:14
AM:
>

> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
wrote:
> > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > <snip>
> > > >
> > > > > > So what was the multi-instance vTPM proxy driver patch set
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the
multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container
case.
> > > One of the key things that was/is not appealing about this CUSE
approach
> > > is that it basically invents a new ioctl() mechanism for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
need
> > > to have any changes at all - its existing driver for talking to TPM
> >
> > We still need the control channel with the vTPM to reset it upon VM
reset,
> > for getting and setting the state of the vTPM upon
snapshot/suspend/resume,
> > changing locality, etc.
>
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the same control
channel through sockets. So on that level we would have support for the
operations but not integrated with anything that would support container
migration.


Ah that might explain why you added the socket control channel, but there
is no user yet? (or some private product perhaps). Could you tell if
control and data channels need to be synchronized in any way?



In the general case, synchronization would have to happen, yes. So a lock
that is held while the TPM processes data would have to lock out control
channel commands that operate on the TPM data. That may be missing. In case
of QEMU being the client, not much concurrency would be expected there just
by the way QEMU interacts with it.


Could the data channel be muxed in with the control channel? (That is, only
use one control socket.)
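
One conceivable way such muxing could work, purely as an illustration
(the channel ids and framing are made up, not any existing protocol):

```python
# Prefix each message with a channel id so data and control commands can
# share a single Unix socket.
import socket
import struct

CHAN_DATA, CHAN_CTRL = 0, 1  # hypothetical channel ids

def send_msg(sock, chan, body):
    # Frame: 1-byte channel id, 4-byte big-endian length, then the body.
    sock.sendall(struct.pack(">BI", chan, len(body)) + body)

def recv_msg(sock):
    chan, length = struct.unpack(">BI", sock.recv(5))
    return chan, sock.recv(length)

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
send_msg(a, CHAN_DATA, b"TPM_GetCapability")
send_msg(a, CHAN_CTRL, b"RESET")
m1 = recv_msg(b)
m2 = recv_msg(b)
```

The trade-off is that a long-running data command would then delay control
messages queued behind it on the same socket.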

A detail: A corner case is live-migration with the TPM emulation being busy
processing a command, like creation of a key. In that case QEMU would keep
on running and only start streaming device state to the recipient side
after the TPM command processing finishes and has returned the result. QEMU
wouldn't want to get stuck in a lock between data and control channel, so
would have other means of determining when the backend processing is done.



Getting back to the original out-of-process design: qemu links with many
libraries already, perhaps a less controversial approach would be to have a
linked in solution before proposing out-of-process? This would be easier to
deal with for


I had already proposed a linked-in version before I went to the
out-of-process design. Anthony's concerns back then were related to the
code not being trusted and a segfault in the code could bring down all of
QEMU. That we have test suites running over it didn't work as an argument.
Some of the test suite are private, though.


I think Anthony's argument is valid for anything running in qemu :) So I
don't see why TPM would be an exception now.

Could you say how much is covered by the public test suite?

About tests, is there any test for qemu TIS?


management layers etc. This wouldn't be the most robust solution, but could
get us somewhere at least for easier testing and development.


Hm. In terms of an external process it's basically 'there', so I don't relate
to the 'easier testing and development.' The various versions with QEMU +
CUSE TPM driver patches applied are here.

https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm



Some people may want to use a simulated TPM with qemu without the need for
security, just to do development/testing.

Dealing with external processes also makes qemu development and testing
more difficult.

Changing the IPC interface is more complicated than having a linked-in
solution. Testing is easier if you can just start/kill one qemu process.

I can't say if it's really needed to ease progress, but at least it would
avoid that CUSE/IPC discussion for now.

I have an older version of libvirt that has the necessary patches applied
to start QEMU with the external TPM. There's also virt-manager support.


Ok, I think it would be worth listing all the up-to-date trees at
http://www.qemu-project.org/Features/TPM (btw, that page is 5 years old; it
would be nice if you could refresh it, I bet some changes happened)


If CUSE is the wrong interface, then there's a discussion about this here.
Alternatively UnixIO for data and control channel could be used.

https://github.com/stefanberger/swtpm/issues/4


If there is no strong argument for CUSE, I would go without it

(I'd also suggest an approach similar to the vhost-user backend I proposed in
http://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg01014.html;
it spawns a backend and passes an extra socketpair fd to it)
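
That vhost-user-style startup could be sketched as follows (the backend
invocation is hypothetical; a fork stands in for spawning the real backend
binary, and the echo loop stands in for its control handling):

```python
# Create a socketpair, hand one end to the spawned backend as an inherited
# fd, and keep the other end as the frontend's control channel.
import os
import socket

parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:
    # Child: a real launcher would exec the backend here, e.g.
    #   os.execvp("some-tpm-backend",
    #             ["some-tpm-backend", f"--fd={child_sock.fileno()}"])
    # (binary name and flag are made up). We just echo one message and exit.
    parent_sock.close()
    child_sock.sendall(child_sock.recv(16))
    os._exit(0)

child_sock.close()
parent_sock.sendall(b"PING")
reply = parent_sock.recv(16)
os.waitpid(pid, 0)
```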

   Stefan



thanks


-- 
Marc-André Lureau


-- 
Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 15:18                                   ` Daniel P. Berrange
@ 2017-03-01 15:40                                     ` Stefan Berger
  2017-03-01 16:13                                       ` Daniel P. Berrange
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 15:40 UTC (permalink / raw)
  To: Daniel P. Berrange, Stefan Berger
  Cc: mst, qemu-devel, Dr. David Alan Gilbert, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 03/01/2017 10:18 AM, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 08:25:43AM -0500, Stefan Berger wrote:
>> "Daniel P. Berrange" <berrange@redhat.com> wrote on 03/01/2017 07:54:14
>> AM:
>>
>>> From: "Daniel P. Berrange" <berrange@redhat.com>
>>> To: Stefan Berger <stefanb@linux.vnet.ibm.com>
>>> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Stefan Berger/
>>> Watson/IBM@IBMUS, "mst@redhat.com" <mst@redhat.com>, "qemu-
>>> devel@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA"
>>> <cs1731@att.com>, "Xu, Quan" <quan.xu@intel.com>,
>>> "silviu.vlasceanu@gmail.com" <silviu.vlasceanu@gmail.com>,
>>> "hagen.lauer@huawei.com" <hagen.lauer@huawei.com>, "SHIH, CHING C"
>>> <cs1815@att.com>
>>> Date: 03/01/2017 08:03 AM
>>> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE
>> TPM
>>> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
>>>> On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
>>>>> On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
>> wrote:
>>>>>> * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
>>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>>>>> <snip>
>>>>>>
>>>>>>>> So what was the multi-instance vTPM proxy driver patch set
>> about?
>>>>>>> That's for containers.
>>>>>> Why have the two mechanisms? Can you explain how the
>> multi-instance
>>>>>> proxy works; my brief reading when I saw your patch series seemed
>>>>>> to suggest it could be used instead of CUSE for the non-container
>> case.
>>>>> One of the key things that was/is not appealing about this CUSE
>> approach
>>>>> is that it basically invents a new ioctl() mechanism for talking to
>>>>> a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
>> need
>>>>> to have any changes at all - its existing driver for talking to TPM
>>>> We still need the control channel with the vTPM to reset it upon VM
>> reset,
>>>> for getting and setting the state of the vTPM upon
>> snapshot/suspend/resume,
>>>> changing locality, etc.
>>> You ultimately need the same mechanisms if using in-kernel vTPM with
>>> containers as containers can support snapshot/suspend/resume/etc too.
>> The vTPM running on the backend side of the vTPM proxy driver is
>> essentially the same as the CUSE TPM used for QEMU. It has the same control
>> channel through sockets. So on that level we would have support for the
>> operations but not integrated with anything that would support container
>> migration.
> This goes back to the question Dave mentions above ? Ignoring the control
> channel aspect temporarily, can the CUSE TPM support the exact same ioctl
> interface as the existing kernel TPM device ? It feels like this should
> be possible, and if so, then this virtual TPM feature can be considered to
> have two separate pieces.

The existing kernel device has no ioctl interface. If it had one, it 
wouldn't be the same, since the control channel implemented on the 
ioctl interface is related to low-level commands such as resetting the 
device when the platform resets, etc.

>
> First enabling basic CUSE TPM device support would not require QEMU changes,
> as we could just use the existing tpm-passthrough driver against the CUSE
> device, albeit with the limitations around migration, snapshot etc.

... and device reset upon VM reset. You want to have at least that, since 
otherwise the PCRs will not be in the correct state once the firmware 
with TPM support starts extending them. They need to be reset, and the 
only way to do that is through some control channel command.


>
> Second we could consider the question of supporting a control channel as
> a separate topic. IIUC, QEMU essentially needs a way to trigger various
> operations in the underlying TPM implementation, when certain lifecycle
> operations are performed on the VM. I could see this being done as a
> simple network protocol over a UNIX socket. So, you could then add a
> new 'chardev' property to the tpm-passthrough device, which gives the
> ID of a character device that provides the control channel.

Why would that other control channel need to be a device rather than an 
ioctl on the device? Or maybe access the emulated TPM entirely through 
UnixIO?


>
> This way QEMU does not need to have any special code to deal with CUSE
> directly. QEMU could be used with a real TPM device, a vTPM device or
> a CUSE TPM device, with the same driver. With both the vTPM and the
> CUSE TPM device, QEMU would have the ability to use a out of band
> control channel when migration/snapshot/etc take place.
>
> This cleanly isolates QEMU from the particular design & implementation
> that is currently used by the current swtpm code.

Someone needs to define the control channel commands. My definition is here:

https://github.com/stefanberger/qemu-tpm/commit/27d6cd856d5a14061955df7a93ee490697a7a174#diff-5cc0e46d3ec33a3f4262db773c193dfe


This won't go away even if we changed the transport for the commands. 
ioctls seem to be one way of achieving this with a character device. 
The socket-based control channels of 'swtpm' use the same commands.
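
The point that the command set is independent of the transport can be
illustrated with a sketch (the command names and numbers below are invented
for illustration, not the real tpm_ioctl.h definitions):

```python
# The same encode/decode logic can sit behind either transport: handed to
# an ioctl() on a character device, or written to a Unix socket.
import struct

PTM_INIT, PTM_SHUTDOWN, PTM_GET_STATEBLOB = 1, 2, 3  # hypothetical codes

def encode(cmd, payload=b""):
    # Fixed framing: 4-byte big-endian command code plus payload.
    return struct.pack(">I", cmd) + payload

def decode(buf):
    (cmd,) = struct.unpack(">I", buf[:4])
    return cmd, buf[4:]

msg = encode(PTM_GET_STATEBLOB, b"permanent")
cmd, payload = decode(msg)
```

Only the delivery mechanism changes between the CUSE ioctl case and the
socket case; the command definitions stay the same.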

    Stefan

>
> Regards,
> Daniel

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 15:24                                       ` Marc-André Lureau
@ 2017-03-01 15:58                                         ` Stefan Berger
  0 siblings, 0 replies; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 15:58 UTC (permalink / raw)
  To: Marc-André Lureau, Stefan Berger, Daniel P. Berrange
  Cc: mst, qemu-devel, Dr. David Alan Gilbert, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 03/01/2017 10:24 AM, Marc-André Lureau wrote:
> Hi
>
> On Wed, Mar 1, 2017 at 6:50 PM Stefan Berger 
> <stefanb@linux.vnet.ibm.com <mailto:stefanb@linux.vnet.ibm.com>> wrote:
>
>     On 03/01/2017 09:17 AM, Marc-André Lureau wrote:
>>     Hi
>>
>>     On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stefanb@us.ibm.com
>>     <mailto:stefanb@us.ibm.com>> wrote:
>>
>>         "Daniel P. Berrange" <berrange@redhat.com
>>         <mailto:berrange@redhat.com>> wrote on 03/01/2017 07:54:14
>>         AM:
>>         >
>>
>>         > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
>>         > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
>>         > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David
>>         Alan Gilbert
>>         wrote:
>>         > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com
>>         <mailto:stefanb@linux.vnet.ibm.com>) wrote:
>>         > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>         > > > > <snip>
>>         > > > >
>>         > > > > > > So what was the multi-instance vTPM proxy driver
>>         patch set
>>         about?
>>         > > > > > That's for containers.
>>         > > > > Why have the two mechanisms? Can you explain how the
>>         multi-instance
>>         > > > > proxy works; my brief reading when I saw your patch
>>         series seemed
>>         > > > > to suggest it could be used instead of CUSE for the
>>         non-container
>>         case.
>>         > > > One of the key things that was/is not appealing about
>>         this CUSE
>>         approach
>>         > > > is that it basically invents a new ioctl() mechanism
>>         for talking to
>>         > > > a TPM chardev. With in-kernel vTPM support, QEMU
>>         probably doesn't
>>         need
>>         > > > to have any changes at all - its existing driver for
>>         talking to TPM
>>         > >
>>         > > We still need the control channel with the vTPM to reset
>>         it upon VM
>>         reset,
>>         > > for getting and setting the state of the vTPM upon
>>         snapshot/suspend/resume,
>>         > > changing locality, etc.
>>         >
>>         > You ultimately need the same mechanisms if using in-kernel
>>         vTPM with
>>         > containers as containers can support
>>         snapshot/suspend/resume/etc too.
>>
>>         The vTPM running on the backend side of the vTPM proxy driver is
>>         essentially the same as the CUSE TPM used for QEMU. It has the
>>         same control
>>         channel through sockets. So on that level we would have
>>         support for the
>>         operations but not integrated with anything that would
>>         support container
>>         migration.
>>
>>
>>     Ah that might explain why you added the socket control channel,
>>     but there is no user yet? (or some private product perhaps).
>>     Could you tell if control and data channels need to be
>>     synchronized in any ways?
>
>
>     In the general case, synchronization would have to happen, yes. So
>     a lock that is held while the TPM processes data would have to
>     lock out control channel commands that operate on the TPM data.
>     That may be missing. In case of QEMU being the client, not much
>     concurrency would be expected there just by the way QEMU interacts
>     with it.
>
>
> Could the data channel be muxed in with the control channel? (that is, 
> only use one control socket)


You could run the data channel as part of the control channel or vice 
versa. I think the problem is that TCG hasn't defined anything in this 
area and two people in different rooms will come up with two different 
designs.



>
>     A detail: A corner case is live-migration with the TPM emulation
>     being busy processing a command, like creation of a key. In that
>     case QEMU would keep on running and only start streaming device
>     state to the recipient side after the TPM command processing
>     finishes and has returned the result. QEMU wouldn't want to get
>     stuck in a lock between data and control channel, so would have
>     other means of determining when the backend processing is done.
>
>
>>
>>     Getting back to the original out-of-process design: qemu links
>>     with many libraries already, perhaps a less controversial
>>     approach would be to have a linked in solution before proposing
>>     out-of-process? This would be easier to deal with for
>
>     I had already proposed a linked-in version before I went to the
>     out-of-process design. Anthony's concerns back then were related
>     to the code not being trusted and a segfault in the code could
>     bring down all of QEMU. That we have test suites running over it
>     didn't work as an argument. Some of the test suite are private,
>     though.
>
>
> I think Anthony argument is valid for anything running in qemu :) So I 
> don't see why TPM would be an exception now.
>
> Could you say how much is covered by the public test suite?

I don't know anything in terms of percentage of code coverage. But I 
think in terms of coverage of commands of a TPM 1.2 we were probably 
 >95%. Now there's also TPM 2 and for that I don't know.

>
> About tests, is there any test for qemu TIS?

For the TIS I had some very limited tests in SeaBIOS, which of course 
are not upstreamed. Though the primary goal there was to test live 
migration while doing PCR Extends.


>
>
>>     management layers etc. This wouldn't be the most robust solution,
>>     but could get us somewhere at least for easier testing and
>>     development.
>
>     Hm. In terms of external process it's basically 'there', so I
>     don't relate to the 'easier testing and development.' The various
>     versions with QEMU + CUSE TPM driver patches applied are here.
>
>     https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm
>
>
>
> Some people may want to use simulated TPM with qemu without the need 
> for security, just to do development/testing.
>

For that they have a solution with the above tree and the swtpm and 
libtpms projects.

> Dealing with external processes makes also qemu development and 
> testing more difficult.

Well, internal didn't fly previously.

>
> Changing the IPC interface is more complicated than having a linked-in 
> solution. Testing is easier if you can just start/kill one qemu process.
>
> I can't say if it's really needed to ease progress, but at least it 
> would avoid that CUSE/IPC discussion for now.
>
>     I have an older version of libvirt that has the necessary patches
>     applied to start QEMU with the external TPM. There's also
>     virt-manager support.
>
>
> Ok, I think it would be worth to list all the up to date trees in 
> http://www.qemu-project.org/Features/TPM (btw, that page is 5y old, 
> would be nice if you could refresh it, I bet some changes happened)
>
>     If CUSE is the wrong interface, then there's a discussion about
>     this here. Alternatively UnixIO for data and control channel could
>     be used.
>
>     https://github.com/stefanberger/swtpm/issues/4
>
>
> If there is no strong argument for CUSE, I would go without it
>
> (I'd also suggest a similar approach to vhost-user backend I proposed 
> in http://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg01014.html,
> it spawns a backend and pass an extra socketpair fd to it)
>


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 15:40                                     ` Stefan Berger
@ 2017-03-01 16:13                                       ` Daniel P. Berrange
  0 siblings, 0 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 16:13 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Stefan Berger, mst, qemu-devel, Dr. David Alan Gilbert,
	hagen.lauer, Xu, Quan, silviu.vlasceanu, SERBAN, CRISTINA, SHIH,
	CHING C

On Wed, Mar 01, 2017 at 10:40:13AM -0500, Stefan Berger wrote:
> On 03/01/2017 10:18 AM, Daniel P. Berrange wrote:
> > This goes back to the question Dave mentions above ? Ignoring the control
> > channel aspect temporarily, can the CUSE TPM support the exact same ioctl
> > interface as the existing kernel TPM device ? It feels like this should
> > be possible, and if so, then this virtal TPM feature can be considered to
> > have two separate pieces.
> 
> The existing kernel device has no ioctl interface. If it had one, it
> wouldn't be the same, since the control channel implemented on the ioctl
> interface is related to low level commands such as resetting the device when
> the platform resets, etc.

Ok, well what I meant is the CUSE TPM device should implement the same
API/ABI as the existing kernel TPM and vTPM devices.

> > First enabling basic CUSE TPM device support would not require QEMU changes,
> > as we could just use the existing tpm-passthrough driver against the CUSE
> > device, albeit with the limitations around migration, snapshot etc.
> 
> ... and device reset upon VM reset. You want to have at least that since
> otherwise the PCRs will not be in the correct state once the firmware with
> TPM support starts extending them. They need to be reset and the only way
> to do that is through some control channel command.

How does that work with real TPM device passthrough ?  Is it simply broken
if the VM is reset ?

> > Second we could consider the question of supporting a control channel as
> > a separate topic. IIUC, QEMU essentially needs a way to trigger various
> > operations in the underlying TPM implementation, when certain lifecycle
> > operations are performed on the VM. I could see this being done as a
> > simple network protocol over a UNIX socket. So, you could then add a
> > new 'chardev' property to the tpm-passthrough device, which gives the
> > ID of a character device that provides the control channel.
> 
> Why would that other control channel need to be a device rather than an
> ioctl on the device? Or maybe entirely access the emulated TPM through
> UnixIO ?

I'm not suggesting another device - I'm saying use a QEMU chardev
> backend API - which lets you use multiple transports, one of which
is UNIX sockets.

> > This way QEMU does not need to have any special code to deal with CUSE
> > directly. QEMU could be used with a real TPM device, a vTPM device or
> > a CUSE TPM device, with the same driver. With both the vTPM and the
> > CUSE TPM device, QEMU would have the ability to use a out of band
> > control channel when migration/snapshot/etc take place.
> > 
> > This cleanly isolates QEMU from the particular design & implementation
> > that is currently used by the current swtpm code.
> 
> Someone needs to define the control channel commands. My definition is here:
> 
> https://github.com/stefanberger/qemu-tpm/commit/27d6cd856d5a14061955df7a93ee490697a7a174#diff-5cc0e46d3ec33a3f4262db773c193dfe
> 
> 
>  This won't go away even if we changed the transport for the commands.
> ioctl's seem to be one way of achieving this with a character device. The
> socket based control channels of 'swtpm' use the same commands.

Essentially the control channel is an RPC interface to an external
process managing virtual TPM. By common agreement, ioctl() is one
of the worst parts of the UNIX ABI, and so I'm loath to see us
build an RPC interface between QEMU & an external process using ioctl.
The exception would be if this was for interoperability with an
existing standard, but this isn't the case here - the ioctl proposal
is a green-field design. I can't help thinking it'd be better from
QEMU's POV to just use QAPI for the RPC.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 14:50                                     ` Stefan Berger
  2017-03-01 15:24                                       ` Marc-André Lureau
@ 2017-03-01 16:22                                       ` Michael S. Tsirkin
  2017-03-01 16:31                                         ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 16:22 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Marc-André Lureau, Stefan Berger, Daniel P. Berrange,
	qemu-devel, Dr. David Alan Gilbert, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> I had already proposed a linked-in version before I went to the out-of-process
> design. Anthony's concerns back then were related to the code not being trusted
> and a segfault in the code could bring down all of QEMU. That we have test
> suites running over it didn't work as an argument. Some of the test suite are
> private, though.

Given how bad the alternative is maybe we should go back to that one.
Same argument can be made for any device and we aren't making
them out of process right now.

IMO it's less the in-process question (modularization
of QEMU has been on the agenda since years and I don't
think anyone is against it) it's more a code control/community question.

It doesn't look like userspace swtpm bits have a large community of
developers around it, and the only user appears to be QEMU, so depending
on that externally does not make sense, we should just have them
in-tree. This way we don't need to worry about versioning etc.

-- 
MST


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 16:22                                       ` Michael S. Tsirkin
@ 2017-03-01 16:31                                         ` Daniel P. Berrange
  2017-03-01 16:57                                           ` Dr. David Alan Gilbert
  2017-03-01 17:02                                           ` Michael S. Tsirkin
  0 siblings, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 16:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, Marc-André Lureau, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, hagen.lauer, Xu, Quan, silviu.vlasceanu,
	SERBAN, CRISTINA, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > I had already proposed a linked-in version before I went to the out-of-process
> > design. Anthony's concerns back then were related to the code not being trusted
> > and a segfault in the code could bring down all of QEMU. That we have test
> > suites running over it didn't work as an argument. Some of the test suite are
> > private, though.
> 
> Given how bad the alternative is maybe we should go back to that one.
> Same argument can be made for any device and we aren't making
> them out of process right now.
> 
> IIMO it's less the in-process question (modularization
> of QEMU has been on the agenda since years and I don't
> think anyone is against it) it's more a code control/community question.

I rather disagree. Modularization of QEMU has seen few results
because it is generally a hard problem to solve when you have a
complex pre-existing codebase.  I don't think code control has
been a factor in this - as long as QEMU can clearly define its
ABI/API between core & the modular pieces, it doesn't matter
who owns the module. We've seen this with vhost-user which is
essentially outsourcing network device backend impls to a 3rd
party project. QEMU's defined the vhost-user ABI/API and delegated
impl to something else.

With the vTPM stuff here, we've not got a pre-existing feature
we need to deal with, so the biggest blocker wrt modularization does
not exist. Given that I think having the vTPM impl modularized is
highly desirable, as long as we can define a sane ABI/API between
QEMU and the external piece.  So I think anthony's point about not
putting a vTPM impl in-process is still valid, and since Stefan's
already done much of the work to achieve a modular design we should
not go back to an in-process design now.

> It doesn't look like userspace swtpm bits have a large community of
> developers around it, and the only user appears to be QEMU, so depending
> on that externally does not make sense, we should just have them
> in-tree. This way we don't need to worry about versioning etc.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 16:31                                         ` Daniel P. Berrange
@ 2017-03-01 16:57                                           ` Dr. David Alan Gilbert
  2017-03-01 17:02                                           ` Michael S. Tsirkin
  1 sibling, 0 replies; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-01 16:57 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Michael S. Tsirkin, Stefan Berger, Marc-André Lureau,
	Stefan Berger, qemu-devel, hagen.lauer, Xu, Quan,
	silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

* Daniel P. Berrange (berrange@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > I had already proposed a linked-in version before I went to the out-of-process
> > > design. Anthony's concerns back then were related to the code not being trusted
> > > and a segfault in the code could bring down all of QEMU. That we have test
> > > suites running over it didn't work as an argument. Some of the test suite are
> > > private, though.
> > 
> > Given how bad the alternative is maybe we should go back to that one.
> > Same argument can be made for any device and we aren't making
> > them out of process right now.
> > 
> > IIMO it's less the in-process question (modularization
> > of QEMU has been on the agenda since years and I don't
> > think anyone is against it) it's more a code control/community question.
> 
> I rather disagree. Modularization of QEMU has seen few results
> because it is generally a hard problem to solve when you have a
> complex pre-existing codebase.  I don't think code control has
> been a factor in this - as long as QEMU can clearly define its
> ABI/API between core & the modular pieces, it doesn't matter
> who owns the module. We've seen this with vhost-user which is
> essentially outsourcing network device backend impls to a 3rd
> party project. QEMU's defined the vhost-user ABI/API and delegated
> impl to something else.
> 
> With the vTPM stuff here, we've not got a pre-existing feature
> we need to deal with, so the biggest blocker wrt modularization does
> not exist. Given that I think having the vTPM impl modularized is
> highly desirable, as long as we can define a sane ABI/API between
> QEMU and the external piece.  So I think anthony's point about not
> putting a vTPM impl in-process is still valid, and since Stefan's
> already done much of the work to achieve a modular design we should
> not go back to an in-process design now.

Yes, I agree.  Also it means there's potential to do things like only
allow the vTPM process to access the underlying key storage using SELinux.

Dave

> > It doesn't look like userspace swtpm bits have a large community of
> > developers around it, and the only user appears to be QEMU, so depending
> > on that externally does not make sense, we should just have them
> > in-tree. This way we don't need to worry about versioning etc.
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 16:31                                         ` Daniel P. Berrange
  2017-03-01 16:57                                           ` Dr. David Alan Gilbert
@ 2017-03-01 17:02                                           ` Michael S. Tsirkin
  2017-03-01 17:12                                             ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 17:02 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Stefan Berger, Marc-André Lureau, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, hagen.lauer, Xu, Quan, silviu.vlasceanu,
	SERBAN, CRISTINA, SHIH, CHING C

On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > I had already proposed a linked-in version before I went to the out-of-process
> > > design. Anthony's concerns back then were related to the code not being trusted
> > > and a segfault in the code could bring down all of QEMU. That we have test
> > > suites running over it didn't work as an argument. Some of the test suite are
> > > private, though.
> > 
> > Given how bad the alternative is maybe we should go back to that one.
> > Same argument can be made for any device and we aren't making
> > them out of process right now.
> > 
> > IIMO it's less the in-process question (modularization
> > of QEMU has been on the agenda since years and I don't
> > think anyone is against it) it's more a code control/community question.
> 
> I rather disagree. Modularization of QEMU has seen few results
> because it is generally a hard problem to solve when you have a
> complex pre-existing codebase.  I don't think code control has
> been a factor in this - as long as QEMU can clearly define its
> ABI/API between core & the modular pieces, it doesn't matter
> who owns the module. We've seen this with vhost-user which is
> essentially outsourcing network device backend impls to a 3rd
> party project.

And it was done precisely for community reasons.  The dpdk/VPP community is
quite large and well funded, but they just can't all grok QEMU.  They
work for hardware vendors and do baremetal things.  With the split we
can focus on virtualization and they can focus on moving packets around.


> QEMU's defined the vhost-user ABI/API and delegated
> impl to something else.

The vhost ABI isn't easy to maintain at all though. So I would not
commit to that lightly without a good reason.

It will be way more painful if the ABI is dictated by a 3rd party
library.

> With the vTPM stuff here, we've not got a pre-existing feature
> we need to deal with, so the biggest blocker wrt modularization does
> not exist. Given that I think having the vTPM impl modularized is
> highly desirable, as long as we can define a sane ABI/API between
> QEMU and the external piece.  So I think anthony's point about not
> putting a vTPM impl in-process is still valid, and since Stefan's
> already done much of the work to achieve a modular design we should
> not go back to an in-process design now.

Fine by me. But with the given project I'm inclined to say
- let's keep it in tree so we don't get to maintain an ABI
- CUSE seems like a wrong interface - not portable, too
  hard to set up ...

> > It doesn't look like userspace swtpm bits have a large community of
> > developers around it, and the only user appears to be QEMU, so depending
> > on that externally does not make sense, we should just have them
> > in-tree. This way we don't need to worry about versioning etc.
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:02                                           ` Michael S. Tsirkin
@ 2017-03-01 17:12                                             ` Stefan Berger
  2017-03-01 17:16                                               ` Michael S. Tsirkin
  2017-03-01 17:36                                               ` Daniel P. Berrange
  0 siblings, 2 replies; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 17:12 UTC (permalink / raw)
  To: Michael S. Tsirkin, Daniel P. Berrange
  Cc: Stefan Berger, qemu-devel, Dr. David Alan Gilbert, SERBAN,
	CRISTINA, Marc-André Lureau, Xu, Quan, silviu.vlasceanu,
	hagen.lauer, SHIH, CHING C

On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
>> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
>>> On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
>>>> I had already proposed a linked-in version before I went to the out-of-process
>>>> design. Anthony's concerns back then were related to the code not being trusted
>>>> and a segfault in the code could bring down all of QEMU. That we have test
>>>> suites running over it didn't work as an argument. Some of the test suite are
>>>> private, though.
>>> Given how bad the alternative is maybe we should go back to that one.
>>> Same argument can be made for any device and we aren't making
>>> them out of process right now.
>>>
>>> IIMO it's less the in-process question (modularization
>>> of QEMU has been on the agenda since years and I don't
>>> think anyone is against it) it's more a code control/community question.
>> I rather disagree. Modularization of QEMU has seen few results
>> because it is generally a hard problem to solve when you have a
>> complex pre-existing codebase.  I don't think code control has
>> been a factor in this - as long as QEMU can clearly define its
>> ABI/API between core & the modular pieces, it doesn't matter
>> who owns the module. We've seen this with vhost-user which is
>> essentially outsourcing network device backend impls to a 3rd
>> party project.
> And it was done precisely for community reasons.  dpdk/VPP community is
> quite large and fell funded but they just can't all grok QEMU.  They
> work for hardware vendors and do baremetal things.  With the split we
> can focus on virtualization and they can focus on moving packets around.
>
>
>> QEMU's defined the vhost-user ABI/API and delegated
>> impl to something else.
> The vhost ABI isn't easy to maintain at all though. So I would not
> commit to that lightly without a good reason.
>
> It will be way more painful if the ABI is dictated by a 3rd party
> library.

Who should define it?


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:12                                             ` Stefan Berger
@ 2017-03-01 17:16                                               ` Michael S. Tsirkin
  2017-03-01 17:20                                                 ` Daniel P. Berrange
  2017-03-01 17:25                                                 ` Stefan Berger
  2017-03-01 17:36                                               ` Daniel P. Berrange
  1 sibling, 2 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 17:16 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Daniel P. Berrange, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Marc-André Lureau,
	Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > private, though.
> > > > Given how bad the alternative is maybe we should go back to that one.
> > > > Same argument can be made for any device and we aren't making
> > > > them out of process right now.
> > > > 
> > > > IIMO it's less the in-process question (modularization
> > > > of QEMU has been on the agenda since years and I don't
> > > > think anyone is against it) it's more a code control/community question.
> > > I rather disagree. Modularization of QEMU has seen few results
> > > because it is generally a hard problem to solve when you have a
> > > complex pre-existing codebase.  I don't think code control has
> > > been a factor in this - as long as QEMU can clearly define its
> > > ABI/API between core & the modular pieces, it doesn't matter
> > > who owns the module. We've seen this with vhost-user which is
> > > essentially outsourcing network device backend impls to a 3rd
> > > party project.
> > And it was done precisely for community reasons.  dpdk/VPP community is
> > quite large and fell funded but they just can't all grok QEMU.  They
> > work for hardware vendors and do baremetal things.  With the split we
> > can focus on virtualization and they can focus on moving packets around.
> > 
> > 
> > > QEMU's defined the vhost-user ABI/API and delegated
> > > impl to something else.
> > The vhost ABI isn't easy to maintain at all though. So I would not
> > commit to that lightly without a good reason.
> > 
> > It will be way more painful if the ABI is dictated by a 3rd party
> > library.
> 
> Who should define it?
> 

No one. Put it in the same source tree as QEMU and forget ABI stability
issues.

-- 
MST


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:16                                               ` Michael S. Tsirkin
@ 2017-03-01 17:20                                                 ` Daniel P. Berrange
  2017-03-01 18:03                                                   ` Michael S. Tsirkin
  2017-03-01 17:25                                                 ` Stefan Berger
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 17:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 07:16:01PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > private, though.
> > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > Same argument can be made for any device and we aren't making
> > > > > them out of process right now.
> > > > > 
> > > > > IIMO it's less the in-process question (modularization
> > > > > of QEMU has been on the agenda since years and I don't
> > > > > think anyone is against it) it's more a code control/community question.
> > > > I rather disagree. Modularization of QEMU has seen few results
> > > > because it is generally a hard problem to solve when you have a
> > > > complex pre-existing codebase.  I don't think code control has
> > > > been a factor in this - as long as QEMU can clearly define its
> > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > who owns the module. We've seen this with vhost-user which is
> > > > essentially outsourcing network device backend impls to a 3rd
> > > > party project.
> > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > quite large and fell funded but they just can't all grok QEMU.  They
> > > work for hardware vendors and do baremetal things.  With the split we
> > > can focus on virtualization and they can focus on moving packets around.
> > > 
> > > 
> > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > impl to something else.
> > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > commit to that lightly without a good reason.
> > > 
> > > It will be way more painful if the ABI is dictated by a 3rd party
> > > library.
> > 
> > Who should define it?
> 
> No one. Put it in same source tree with QEMU and forget ABI stability
> issues.

That doesn't work very well in practice, as you have to make sure the
vTPM process that is running provides exactly the same ABI as the QEMU
process that's connecting to it. You could have a single vTPM process
on the host serving many QEMU processes, each of which could be a
different QEMU version due to upgraded RPMs/Debs.
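
One common way around that mismatch (a hedged sketch only, not something
either project is known to implement) is an explicit version negotiation
during the initial handshake, so a newer QEMU can keep talking to an older
vTPM process and vice versa:

```python
# Illustrative sketch: at connect time each side advertises the protocol
# versions it supports, and both settle on the highest common one.

def negotiate_version(client_versions, server_versions):
    """Return the highest protocol version both sides support, else None."""
    common = set(client_versions) & set(server_versions)
    return max(common) if common else None

# An older QEMU (v1-v2) against a newer vTPM daemon (v2-v4) settles on v2;
# with no overlap at all, the connection is simply refused.
```

With something like this in the handshake, the single-daemon-many-QEMUs
deployment only needs each side to keep supporting old protocol versions,
not a bit-identical ABI.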

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:16                                               ` Michael S. Tsirkin
  2017-03-01 17:20                                                 ` Daniel P. Berrange
@ 2017-03-01 17:25                                                 ` Stefan Berger
  2017-03-01 17:38                                                   ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 17:25 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, qemu-devel, Dr. David Alan Gilbert, hagen.lauer,
	Marc-André Lureau, Xu, Quan, silviu.vlasceanu, SERBAN,
	CRISTINA, SHIH, CHING C

On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
>> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
>>> On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
>>>> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
>>>>> On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
>>>>>> I had already proposed a linked-in version before I went to the out-of-process
>>>>>> design. Anthony's concerns back then were related to the code not being trusted
>>>>>> and a segfault in the code could bring down all of QEMU. That we have test
>>>>>> suites running over it didn't work as an argument. Some of the test suite are
>>>>>> private, though.
>>>>> Given how bad the alternative is maybe we should go back to that one.
>>>>> Same argument can be made for any device and we aren't making
>>>>> them out of process right now.
>>>>>
>>>>> IIMO it's less the in-process question (modularization
>>>>> of QEMU has been on the agenda since years and I don't
>>>>> think anyone is against it) it's more a code control/community question.
>>>> I rather disagree. Modularization of QEMU has seen few results
>>>> because it is generally a hard problem to solve when you have a
>>>> complex pre-existing codebase.  I don't think code control has
>>>> been a factor in this - as long as QEMU can clearly define its
>>>> ABI/API between core & the modular pieces, it doesn't matter
>>>> who owns the module. We've seen this with vhost-user which is
>>>> essentially outsourcing network device backend impls to a 3rd
>>>> party project.
>>> And it was done precisely for community reasons.  dpdk/VPP community is
>>> quite large and fell funded but they just can't all grok QEMU.  They
>>> work for hardware vendors and do baremetal things.  With the split we
>>> can focus on virtualization and they can focus on moving packets around.
>>>
>>>
>>>> QEMU's defined the vhost-user ABI/API and delegated
>>>> impl to something else.
>>> The vhost ABI isn't easy to maintain at all though. So I would not
>>> commit to that lightly without a good reason.
>>>
>>> It will be way more painful if the ABI is dictated by a 3rd party
>>> library.
>> Who should define it?
>>
> No one. Put it in same source tree with QEMU and forget ABI stability
> issues.

You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
tree? These are several thousand lines of code each; would we break 
them apart into logical chunks and review them?

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:12                                             ` Stefan Berger
  2017-03-01 17:16                                               ` Michael S. Tsirkin
@ 2017-03-01 17:36                                               ` Daniel P. Berrange
  1 sibling, 0 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 17:36 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Michael S. Tsirkin, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Marc-André Lureau,
	Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > private, though.
> > > > Given how bad the alternative is maybe we should go back to that one.
> > > > Same argument can be made for any device and we aren't making
> > > > them out of process right now.
> > > > 
> > > > IIMO it's less the in-process question (modularization
> > > > of QEMU has been on the agenda since years and I don't
> > > > think anyone is against it) it's more a code control/community question.
> > > I rather disagree. Modularization of QEMU has seen few results
> > > because it is generally a hard problem to solve when you have a
> > > complex pre-existing codebase.  I don't think code control has
> > > been a factor in this - as long as QEMU can clearly define its
> > > ABI/API between core & the modular pieces, it doesn't matter
> > > who owns the module. We've seen this with vhost-user which is
> > > essentially outsourcing network device backend impls to a 3rd
> > > party project.
> > And it was done precisely for community reasons.  dpdk/VPP community is
> > quite large and fell funded but they just can't all grok QEMU.  They
> > work for hardware vendors and do baremetal things.  With the split we
> > can focus on virtualization and they can focus on moving packets around.
> > 
> > 
> > > QEMU's defined the vhost-user ABI/API and delegated
> > > impl to something else.
> > The vhost ABI isn't easy to maintain at all though. So I would not
> > commit to that lightly without a good reason.
> > 
> > It will be way more painful if the ABI is dictated by a 3rd party
> > library.
> 
> Who should define it?

I'm unsure of the best answer here right now. If swtpm is targeted as
being a generic component for use by arbitrary consumers, that'd tend
towards suggesting swtpm should "own" the protocol definition. On the other
hand, if we desire for QEMU to be able to replace swtpm with an alternate
impl, that might suggest QEMU define the protocol. Possibly it just does
not matter if there's an owner, as long as both the swtpm & QEMU maintainers
collaborate & agree on the details.

From a purely selfish QEMU maintainer POV, I'd tend to suggest a QAPI
based protocol, since we have two impls of that already (the monitor and
guest agent) & thus it's well understood by QEMU maintainers. Writing
a custom binary protocol might sound easy, but things can get complex
pretty quickly.
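
As a rough illustration of what a QAPI-style protocol buys you (the command
name below is invented for this sketch, not a real QAPI command), each
message is a self-describing, line-delimited JSON object, so extending it
later doesn't break the framing:

```python
import json

def encode_command(name, **arguments):
    """Build a QAPI/QMP-style request: {"execute": ..., "arguments": ...}."""
    msg = {"execute": name}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"

def decode_reply(line):
    """Unpack a reply; QMP-style errors carry an "error" object instead."""
    reply = json.loads(line)
    if "error" in reply:
        raise RuntimeError(reply["error"].get("desc", "unknown error"))
    return reply["return"]
```

Compare that with a custom binary protocol, where every added field means
renegotiating struct layouts, endianness and padding on both sides.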

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:25                                                 ` Stefan Berger
@ 2017-03-01 17:38                                                   ` Daniel P. Berrange
  2017-03-01 17:58                                                     ` Michael S. Tsirkin
  0 siblings, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 17:38 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Michael S. Tsirkin, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Marc-André Lureau,
	Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > private, though.
> > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > Same argument can be made for any device and we aren't making
> > > > > > them out of process right now.
> > > > > > 
> > > > > > IIMO it's less the in-process question (modularization
> > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > think anyone is against it) it's more a code control/community question.
> > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > because it is generally a hard problem to solve when you have a
> > > > > complex pre-existing codebase.  I don't think code control has
> > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > who owns the module. We've seen this with vhost-user which is
> > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > party project.
> > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > work for hardware vendors and do baremetal things.  With the split we
> > > > can focus on virtualization and they can focus on moving packets around.
> > > > 
> > > > 
> > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > impl to something else.
> > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > commit to that lightly without a good reason.
> > > > 
> > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > library.
> > > Who should define it?
> > > 
> > No one. Put it in same source tree with QEMU and forget ABI stability
> > issues.
> 
> You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> These are multiple thousands of lines of code each and we'll break them
> apart into logical chunks and review them?

No, lets not make that mistake again - we only just got rid of the
libcacard smartcard library code from QEMU git. 

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:38                                                   ` Daniel P. Berrange
@ 2017-03-01 17:58                                                     ` Michael S. Tsirkin
  2017-03-01 18:06                                                       ` Dr. David Alan Gilbert
  2017-03-01 18:11                                                       ` Daniel P. Berrange
  0 siblings, 2 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 17:58 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > private, though.
> > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > them out of process right now.
> > > > > > > 
> > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > because it is generally a hard problem to solve when you have a
> > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > party project.
> > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > 
> > > > > 
> > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > impl to something else.
> > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > commit to that lightly without a good reason.
> > > > > 
> > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > library.
> > > > Who should define it?
> > > > 
> > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > issues.
> > 
> > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > These are multiple thousands of lines of code each and we'll break them
> > apart into logical chunks and review them?
> 
> No, lets not make that mistake again - we only just got rid of the
> libcacard smartcard library code from QEMU git. 
> 
> Regards,
> Daniel

I don't mean that as an external library, but as an integral part of QEMU,
adhering to our coding style etc. - why not?

I don't know what the other options are.  How is depending on the ABI of
a utility that has no other users and is not packaged by most distros
any good? You are calling out to a CUSE device, but who's reviewing that
code?

vl.c weighs in at 4500 lines of code. Several thousand lines is
not small, but not unmanageable.

Anyway, it all boils down to a lack of reviewers. I know I am not merging
the current implementation because I could not figure out what the qemu
bits do without looking at the implementation. I don't want to jump
between so many trees and coding styles; bios/qemu/linux/dpdk are
painful enough to manage. If some other maintainer volunteers, or if
Peter wants to merge it directly from Stefan, I won't object.


> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:20                                                 ` Daniel P. Berrange
@ 2017-03-01 18:03                                                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 18:03 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 05:20:13PM +0000, Daniel P. Berrange wrote:
> > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > impl to something else.
> > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > commit to that lightly without a good reason.
> > > > 
> > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > library.
> > > 
> > > Who should define it?
> > 
> > No one. Put it in same source tree with QEMU and forget ABI stability
> > issues.
> 
> That doesn't work very well in practice as you have to make sure the
> vTPM process that is running, provides exactly the same ABI as the QEMU
> process that's connecting to it. You could have a single vTPM process
> on the host serving many QEMU processes, each of which could be a
> different QEMU version, due to upgraded RPMs/Debs.
> 
> Regards,
> Daniel

I might be wrong, but last time I looked, each QEMU instance had to use
its own CUSE device. So the pain seems entirely self-inflicted: you
could have a process per QEMU instance, and start and stop it from within
QEMU.
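
That lifecycle (a sketch under the assumption that the helper binary ships
from the same build as the VMM; the child command below is a harmless
stand-in, not a real vTPM daemon) is just fork/exec on VM start plus
teardown on VM shutdown:

```python
import subprocess
import sys

class HelperProcess:
    """Start a per-VM helper at VM creation, stop it at VM shutdown."""

    def __init__(self, argv):
        # Both binaries come from the same build tree, so no stable
        # cross-version ABI has to be maintained between them.
        self.proc = subprocess.Popen(argv)

    def alive(self):
        return self.proc.poll() is None

    def stop(self):
        self.proc.terminate()
        self.proc.wait(timeout=10)

# Stand-in child process: idles until terminated, like a daemon would.
helper = HelperProcess([sys.executable, "-c", "import time; time.sleep(60)"])
```

The trade-off is one extra process per VM, in exchange for never having to
match an independently upgraded daemon against an older QEMU.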


> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:58                                                     ` Michael S. Tsirkin
@ 2017-03-01 18:06                                                       ` Dr. David Alan Gilbert
  2017-03-01 18:09                                                         ` Michael S. Tsirkin
  2017-03-01 18:11                                                       ` Daniel P. Berrange
  1 sibling, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-01 18:06 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, qemu-devel,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > private, though.
> > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > them out of process right now.
> > > > > > > > 
> > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > party project.
> > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > 
> > > > > > 
> > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > impl to something else.
> > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > commit to that lightly without a good reason.
> > > > > > 
> > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > library.
> > > > > Who should define it?
> > > > > 
> > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > issues.
> > > 
> > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > These are multiple thousands of lines of code each and we'll break them
> > > apart into logical chunks and review them?
> > 
> > No, lets not make that mistake again - we only just got rid of the
> > libcacard smartcard library code from QEMU git. 
> > 
> > Regards,
> > Daniel
> 
> I don't mean that as an external library. As an integral part of QEMU
> adhering to our coding style etc - why not?
> 
> I don't know what are the other options.  How is depending on an ABI
> with a utility with no other users and not packaged by most distros
> good? You are calling out to a CUSE device but who's reviewing that
> code?
> 
> vl.c weights in a 4500 lines of code. several thousand lines is
> not small but not unmanageable.


That's 4500 lines of fairly generic code; not like the TPM, where the number
of people who really understand its details is pretty small.

It's better on most counts to have it as a separate process.

Dave

> Anyway, it all boils down to lack of reviewers. I know I am not merging
> the current implementation because I could not figure out what do qemu
> bits do without looking at the implementation. I don't want to jump
> between so many trees and coding styles. bios/qemu/linux/dpdk are
> painful enough to manage. If some other maintainer volunteers, or if
> Peter wants to merge it directly from Stefan, I won't object.
> 
> > -- 
> > |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org              -o-             http://virt-manager.org :|
> > |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:06                                                       ` Dr. David Alan Gilbert
@ 2017-03-01 18:09                                                         ` Michael S. Tsirkin
  2017-03-01 18:18                                                           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 18:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, qemu-devel,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:06:02PM +0000, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > > private, though.
> > > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > them out of process right now.
> > > > > > > > > 
> > > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > party project.
> > > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > > 
> > > > > > > 
> > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > impl to something else.
> > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > commit to that lightly without a good reason.
> > > > > > > 
> > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > library.
> > > > > > Who should define it?
> > > > > > 
> > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > issues.
> > > > 
> > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > > These are multiple thousands of lines of code each and we'll break them
> > > > apart into logical chunks and review them?
> > > 
> > > No, lets not make that mistake again - we only just got rid of the
> > > libcacard smartcard library code from QEMU git. 
> > > 
> > > Regards,
> > > Daniel
> > 
> > I don't mean that as an external library. As an integral part of QEMU
> > adhering to our coding style etc - why not?
> > 
> > I don't know what are the other options.  How is depending on an ABI
> > with a utility with no other users and not packaged by most distros
> > good? You are calling out to a CUSE device but who's reviewing that
> > code?
> > 
> > vl.c weights in a 4500 lines of code. several thousand lines is
> > not small but not unmanageable.
> 
> 
> That's 4500 lines of fairly generic code; not like the TPM where the number
> of people who really understand the details of it is pretty slim.
> 
> It's better on most counts to have it as a separate process.
> 
> Dave

A separate process we start and stop automatically I don't mind. A
separate tree with a distinct coding style that no one will ever even
look at? Not so much.

> > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > the current implementation because I could not figure out what do qemu
> > bits do without looking at the implementation. I don't want to jump
> > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > painful enough to manage. If some other maintainer volunteers, or if
> > Peter wants to merge it directly from Stefan, I won't object.
> > 
> > > -- 
> > > |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> > > |: http://libvirt.org              -o-             http://virt-manager.org :|
> > > |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 17:58                                                     ` Michael S. Tsirkin
  2017-03-01 18:06                                                       ` Dr. David Alan Gilbert
@ 2017-03-01 18:11                                                       ` Daniel P. Berrange
  2017-03-01 18:20                                                         ` Michael S. Tsirkin
  1 sibling, 1 reply; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 18:11 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 07:58:36PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > private, though.
> > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > them out of process right now.
> > > > > > > > 
> > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > party project.
> > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > 
> > > > > > 
> > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > impl to something else.
> > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > commit to that lightly without a good reason.
> > > > > > 
> > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > library.
> > > > > Who should define it?
> > > > > 
> > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > issues.
> > > 
> > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > These are multiple thousands of lines of code each and we'll break them
> > > apart into logical chunks and review them?
> > 
> > No, lets not make that mistake again - we only just got rid of the
> > libcacard smartcard library code from QEMU git. 
> 
> I don't mean that as an external library. As an integral part of QEMU
> adhering to our coding style etc - why not?

Changing swtpm to the QEMU coding style is a pointless exercise - just
busy work for no functional end benefit. You're also tying the code
into the QEMU release cycle, again for no tangible benefit. Conceptually
swtpm does not depend on, or require, QEMU to be useful - it can have
other non-QEMU consumers - bundling with QEMU is not helpful there.

> I don't know what are the other options.  How is depending on an ABI
> with a utility with no other users and not packaged by most distros
> good? You are calling out to a CUSE device but who's reviewing that
> code?

If anyone is motivated enough to review the code, they can do it whether
it is in QEMU git or its own git. Pulling the whole of swtpm into QEMU git
isn't magically going to get useful reviews done on the code. The QEMU
maintainers already have far more code to review than available review
bandwidth, and lack domain knowledge in TPM concepts.

> Anyway, it all boils down to lack of reviewers. I know I am not merging
> the current implementation because I could not figure out what do qemu
> bits do without looking at the implementation. I don't want to jump
> between so many trees and coding styles. bios/qemu/linux/dpdk are
> painful enough to manage. If some other maintainer volunteers, or if
> Peter wants to merge it directly from Stefan, I won't object.


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:09                                                         ` Michael S. Tsirkin
@ 2017-03-01 18:18                                                           ` Dr. David Alan Gilbert
  2017-03-01 18:30                                                             ` Michael S. Tsirkin
  0 siblings, 1 reply; 96+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-01 18:18 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, qemu-devel,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 06:06:02PM +0000, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (mst@redhat.com) wrote:
> > > On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > > > private, though.
> > > > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > > them out of process right now.
> > > > > > > > > > 
> > > > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > > party project.
> > > > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > > impl to something else.
> > > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > > commit to that lightly without a good reason.
> > > > > > > > 
> > > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > > library.
> > > > > > > Who should define it?
> > > > > > > 
> > > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > > issues.
> > > > > 
> > > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > > > These are multiple thousands of lines of code each and we'll break them
> > > > > apart into logical chunks and review them?
> > > > 
> > > > No, lets not make that mistake again - we only just got rid of the
> > > > libcacard smartcard library code from QEMU git. 
> > > > 
> > > > Regards,
> > > > Daniel
> > > 
> > > I don't mean that as an external library. As an integral part of QEMU
> > > adhering to our coding style etc - why not?
> > > 
> > > I don't know what are the other options.  How is depending on an ABI
> > > with a utility with no other users and not packaged by most distros
> > > good? You are calling out to a CUSE device but who's reviewing that
> > > code?
> > > 
> > > vl.c weights in a 4500 lines of code. several thousand lines is
> > > not small but not unmanageable.
> > 
> > 
> > That's 4500 lines of fairly generic code; not like the TPM where the number
> > of people who really understand the details of it is pretty slim.
> > 
> > It's better on most counts to have it as a separate process.
> > 
> > Dave
> 
> Separate process we start and stop automatically I don't mind. A
> separate tree with a distinct coding style where no one will ever even
> look at it? Not so much.

That code is used elsewhere anyway, so asking them to change the coding style
isn't very nice.
Even if they change the coding style, it doesn't mean you're suddenly going to
understand how a TPM works in detail and be able to review it.

Anyway, having it in a separate process locked down by SELinux means that even
if it does go horribly wrong it won't break qemu.

Dave

> > > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > > the current implementation because I could not figure out what do qemu
> > > bits do without looking at the implementation. I don't want to jump
> > > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > > painful enough to manage. If some other maintainer volunteers, or if
> > > Peter wants to merge it directly from Stefan, I won't object.
> > > 
> > > > -- 
> > > > |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> > > > |: http://libvirt.org              -o-             http://virt-manager.org :|
> > > > |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:11                                                       ` Daniel P. Berrange
@ 2017-03-01 18:20                                                         ` Michael S. Tsirkin
  2017-03-01 18:32                                                           ` Marc-André Lureau
  0 siblings, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 18:20 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:11:17PM +0000, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 07:58:36PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > > private, though.
> > > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > them out of process right now.
> > > > > > > > > 
> > > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > party project.
> > > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > > 
> > > > > > > 
> > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > impl to something else.
> > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > commit to that lightly without a good reason.
> > > > > > > 
> > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > library.
> > > > > > Who should define it?
> > > > > > 
> > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > issues.
> > > > 
> > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > > These are multiple thousands of lines of code each and we'll break them
> > > > apart into logical chunks and review them?
> > > 
> > > No, lets not make that mistake again - we only just got rid of the
> > > libcacard smartcard library code from QEMU git. 
> > 
> > I don't mean that as an external library. As an integral part of QEMU
> > adhering to our coding style etc - why not?
> 
> Changing swtpm to the QEMU coding style is a pointless exercise - just
> busy work for no functional end benefit.

I'm not sure what you are saying here. I don't appreciate extra hurdles
to review; it's hard enough as it is. If others don't care, good for
them.

> > You're also tying the code
> into the QEMU release cycle, again for no tangible benefit.

No need for ABI stability would be the benefit.

> Conceptually
> swtpm does not depend on, or require, QEMU to be useful - it can have
> other non-QEMU consumers - bundling with QEMU is not helpful there.

Maybe it could but it isn't.

> 
> > I don't know what are the other options.  How is depending on an ABI
> > with a utility with no other users and not packaged by most distros
> > good? You are calling out to a CUSE device but who's reviewing that
> > code?
> 
> If anyone is motivated enough to review the code, they can do it whether
> > it is in QEMU git or its own git. Pulling the whole of swtpm into QEMU git
> isn't magically going to get useful reviews done on the code. The QEMU
> maintainers already have far more code to review than available review
> bandwidth, and lack domain knowledge in TPM concepts.

I was the only one merging TPM code so far. I don't call myself an
expert.  If someone steps up to do the work, is trusted by Peter to
maintain it for X years and doesn't care about the extra hurdles, more
power to them.

> > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > the current implementation because I could not figure out what do qemu
> > bits do without looking at the implementation. I don't want to jump
> > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > painful enough to manage. If some other maintainer volunteers, or if
> > Peter wants to merge it directly from Stefan, I won't object.
> 
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:18                                                           ` Dr. David Alan Gilbert
@ 2017-03-01 18:30                                                             ` Michael S. Tsirkin
  2017-03-01 19:24                                                               ` Stefan Berger
  0 siblings, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 18:30 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Daniel P. Berrange, Stefan Berger, Stefan Berger, qemu-devel,
	SERBAN, CRISTINA, Marc-André Lureau, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:18:01PM +0000, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Wed, Mar 01, 2017 at 06:06:02PM +0000, Dr. David Alan Gilbert wrote:
> > > * Michael S. Tsirkin (mst@redhat.com) wrote:
> > > > On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
> > > > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
> > > > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > > > I had already proposed a linked-in version before I went to the out-of-process
> > > > > > > > > > > > design. Anthony's concerns back then were related to the code not being trusted
> > > > > > > > > > > > and a segfault in the code could bring down all of QEMU. That we have test
> > > > > > > > > > > > suites running over it didn't work as an argument. Some of the test suite are
> > > > > > > > > > > > private, though.
> > > > > > > > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > > > them out of process right now.
> > > > > > > > > > > 
> > > > > > > > > > > IIMO it's less the in-process question (modularization
> > > > > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > > > > think anyone is against it) it's more a code control/community question.
> > > > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > > > party project.
> > > > > > > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > > > > > > quite large and fell funded but they just can't all grok QEMU.  They
> > > > > > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > > > > > can focus on virtualization and they can focus on moving packets around.
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > > > impl to something else.
> > > > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > > > commit to that lightly without a good reason.
> > > > > > > > > 
> > > > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > > > library.
> > > > > > > > Who should define it?
> > > > > > > > 
> > > > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > > > issues.
> > > > > > 
> > > > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > > > > > These are multiple thousands of lines of code each and we'll break them
> > > > > > apart into logical chunks and review them?
> > > > > 
> > > > > No, lets not make that mistake again - we only just got rid of the
> > > > > libcacard smartcard library code from QEMU git. 
> > > > > 
> > > > > Regards,
> > > > > Daniel
> > > > 
> > > > I don't mean that as an external library. As an integral part of QEMU
> > > > adhering to our coding style etc - why not?
> > > > 
> > > > I don't know what are the other options.  How is depending on an ABI
> > > > with a utility with no other users and not packaged by most distros
> > > > good? You are calling out to a CUSE device but who's reviewing that
> > > > code?
> > > > 
> > > > vl.c weights in a 4500 lines of code. several thousand lines is
> > > > not small but not unmanageable.
> > > 
> > > 
> > > That's 4500 lines of fairly generic code; not like the TPM where the number
> > > of people who really understand the details of it is pretty slim.
> > > 
> > > It's better on most counts to have it as a separate process.
> > > 
> > > Dave
> > 
> > Separate process we start and stop automatically I don't mind. A
> > separate tree with a distinct coding style where no one will ever even
> > look at it? Not so much.
> 
> That code is used elsewhere anyway,

Who uses it? Who packages it? Fedora doesn't ...

> so asking them to change the coding style
> isn't very nice.
> Even if they change the coding style it doesn't mean you're suddenly going to
> understand how a TPM works in detail and be able to review it.

I did in the past but I didn't keep abreast of the recent developments.

> Anyway, having it in a separate process locked down by SELinux means that even
> if it does go horribly wrong it won't break qemu.
> 
> Dave

Since qemu does blocking ioctls on it and doesn't validate results
too much, it sure can break QEMU - anything from DoS to random
code execution. That's why we want to keep it in tree and
start it ourselves - I don't want CVEs claiming that not validating
some parameter we get from it is a remote code execution.
It should be just a library that, yes, we can keep out of
process for extra security, but no, we can't just put random
stuff in there and never care.
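The validation concern can be sketched in C (a minimal illustration; the structure layout, field names, and the size cap are hypothetical and are not taken from tpm_ioctl.h):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical response as it might come back from a blocking ioctl on
 * the external TPM process.  From QEMU's point of view the length field
 * is peer-controlled and must not be trusted. */
struct ptm_res_hdr {
    uint32_t result_len;      /* length claimed by the external process */
    uint8_t  data[4096];
};

enum { PTM_MAX_RESULT = 4096 };   /* illustrative cap, not a real constant */

/* Copy the payload only if the untrusted length is sane; otherwise
 * reject it instead of reading past the buffer. */
static int copy_tpm_result(const struct ptm_res_hdr *res,
                           uint8_t *out, size_t out_len)
{
    if (res->result_len > PTM_MAX_RESULT || res->result_len > out_len) {
        return -1;   /* peer claimed an impossible or oversized length */
    }
    memcpy(out, res->data, res->result_len);
    return 0;
}
```

Without checks of this kind on every field crossing the ioctl boundary, a compromised external process can indeed turn into memory corruption inside QEMU, which is exactly the class of CVE being described.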


> > > > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > > > the current implementation because I could not figure out what do qemu
> > > > bits do without looking at the implementation. I don't want to jump
> > > > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > > > painful enough to manage. If some other maintainer volunteers, or if
> > > > Peter wants to merge it directly from Stefan, I won't object.
> > > > 
> > > > > -- 
> > > > > |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> > > > > |: http://libvirt.org              -o-             http://virt-manager.org :|
> > > > > |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:20                                                         ` Michael S. Tsirkin
@ 2017-03-01 18:32                                                           ` Marc-André Lureau
  2017-03-01 18:56                                                             ` Daniel P. Berrange
  0 siblings, 1 reply; 96+ messages in thread
From: Marc-André Lureau @ 2017-03-01 18:32 UTC (permalink / raw)
  To: Michael S. Tsirkin, Daniel P. Berrange
  Cc: Stefan Berger, Stefan Berger, qemu-devel, Dr. David Alan Gilbert,
	SERBAN, CRISTINA, Xu, Quan, silviu.vlasceanu, hagen.lauer, SHIH,
	CHING C

Hi

On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:

>
> > > You're also tying the code
> > into the QEMU release cycle, again for no tangible benefit.
>
> No need for ABI stability would be the benefit.
>

We are talking about the control channel ABI (the data channel is using TCG
defined command streams afaict - don't remember what it is called)


>
> > Conceptually
> > swtpm does not depend on, or require, QEMU to be useful - it can have
> > other non-QEMU consumers - bundling with QEMU is not helpful there.
>
> Maybe it could but it isn't.
>

Right, it would be reasonable to have qemu provide its own private "swtpm"
(linking with libtpms, doing most of the job), that way it wouldn't have to
rely on a stable ABI (as long as the process isn't shared across different
qemu versions, which should be quite easy to achieve)

>
> >
> > > I don't know what are the other options.  How is depending on an ABI
> > > with a utility with no other users and not packaged by most distros
> > > good? You are calling out to a CUSE device but who's reviewing that
> > > code?
> >
> > If anyone is motivated enough to review the code, they can do it whether
> > it is in QEMU git or its own git. Pulling the whole of swtpm into QEMU git
> > isn't magically going to get useful reviews done on the code. The QEMU
> > maintainers already have far more code to review than available review
> > bandwidth, and lack domain knowledge in TPM concepts.
>
> I was the only one merging TPM code so far. I don't call myself an
> expert.  If someone steps up to do the work, is trusted by Peter to
> maintain it for X years and doesn't care about the extra hurdles, more
> power to them.
>

Why not give Stefan maintainership of TPM?

>
> > > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > > the current implementation because I could not figure out what do qemu
> > > bits do without looking at the implementation. I don't want to jump
> > > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > > painful enough to manage. If some other maintainer volunteers, or if
> > > Peter wants to merge it directly from Stefan, I won't object.
> >
>

ok


> >
> > Regards,
> > Daniel
> > --
> > |: http://berrange.com      -o-
> http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org              -o-
> http://virt-manager.org :|
> > |: http://entangle-photo.org       -o-
> http://search.cpan.org/~danberr/ :|
>
-- 
Marc-André Lureau


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:32                                                           ` Marc-André Lureau
@ 2017-03-01 18:56                                                             ` Daniel P. Berrange
  2017-03-01 19:18                                                               ` Marc-André Lureau
  2017-03-01 22:22                                                               ` Michael S. Tsirkin
  0 siblings, 2 replies; 96+ messages in thread
From: Daniel P. Berrange @ 2017-03-01 18:56 UTC (permalink / raw)
  To: Marc-André Lureau
  Cc: Michael S. Tsirkin, Stefan Berger, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:32:19PM +0000, Marc-André Lureau wrote:
> Hi
> 
> On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> 
> >
> > > You're also tying the code
> > > into the QEMU release cycle, again for no tangible benefit.
> >
> > No need for ABI stability would be the benefit.
> >
> 
> We are talking about the control channel ABI (the data channel is using TCG
> defined command streams afaict - don't remember what it is called)
> 
> 
> >
> > > Conceptually
> > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> >
> > Maybe it could but it isn't.
> >
> 
> > Right, it would be reasonable to have qemu provide its own private "swtpm"
> (linking with libtpms, doing most of the job), that way it wouldn't have to
> rely on a stable ABI (as long as the process isn't shared across different
> qemu versions, which should be quite easy to achieve)

I think we need to expect to have a stable ABI no matter what. During
upgrade cycles, it is desirable to be able to upgrade the swtpm process
associated with a running VM. Whether this is done by restarting the
process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
FD open, you still end up with newer swtpm talking to an older QEMU. Or
conversely you might have setup swtpm processes to populate a number of
CUSE devices, and then later launch QEMU binaries to connect to them - at
which point there's no guarantee the QEMU version hasn't been upgraded -
or the user could have requested a custom QEMU binary to virt-install,
etc.
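One way to get that compatibility without freezing everything is explicit negotiation at connect time - a sketch in C (the handshake and the flag names are hypothetical; nothing here is from the patch series or from swtpm itself):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical connect-time negotiation between QEMU and an external
 * swtpm-like process: each peer advertises a feature bitmap and both
 * sides then use only the intersection, so a control-channel ABI can
 * grow while either binary is upgraded independently.
 */
#define CAP_SUSPEND_RESUME  (1u << 0)   /* state save/restore supported */
#define CAP_MIGRATION       (1u << 1)   /* migration blobs supported    */
#define CAP_CANCEL_CMD      (1u << 2)   /* in-flight command cancel     */

/* Agree on the mutually supported feature set. */
static uint32_t negotiate_caps(uint32_t local_caps, uint32_t remote_caps)
{
    return local_caps & remote_caps;
}
```

An older QEMU paired with a newer swtpm (or the reverse) then simply never issues a control request the other side did not advertise.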


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:56                                                             ` Daniel P. Berrange
@ 2017-03-01 19:18                                                               ` Marc-André Lureau
  2017-03-01 22:22                                                               ` Michael S. Tsirkin
  1 sibling, 0 replies; 96+ messages in thread
From: Marc-André Lureau @ 2017-03-01 19:18 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Michael S. Tsirkin, Stefan Berger, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 1, 2017 at 10:56 PM Daniel P. Berrange <berrange@redhat.com>
wrote:

> On Wed, Mar 01, 2017 at 06:32:19PM +0000, Marc-André Lureau wrote:
> > Hi
> >
> > On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin <mst@redhat.com>
> wrote:
> >
> > >
> > > > You're also tieing the code
> > > > into the QEMU release cycle, again for no tangible benefit.
> > >
> > > No need for ABI stability would be the benefit.
> > >
> >
> > We are talking about the control channel ABI (the data channel is using
> TCG
> > defined command streams afaict - don't remember what it is called)
> >
> >
> > >
> > > > Conceptually
> > > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> > >
> > > Maybe it could but it isn't.
> > >
> >
> > Right, it would be reasonable to have qemu provide its own private
> "swtpm"
> > (linking with libtpms, doing most of the job), that way it wouldn't have
> to
> > rely on a stable ABI (as long as the process isn't shared across
> different
> > qemu versions, which should be quite easy to achieve)
>
> I think we need to expect to have a stable ABI no matter what. During
> upgrade cycles, it is desirable to be able to upgrade the swtpm process
> associated with a running VM. Whether this is done by restarting the
> process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
> FD open, you still end up with newer swtpm talking to an older QEMU. Or
>

I am not sure why this is required. You could require that both qemu &
helper process are restarted in this case so they stay in sync, no?


> conversely you might have setup swtpm processes to populate a number of
> CUSE devices, and then later launch QEMU binaries to connect to them - at
>

I would rather avoid CUSE device with this private qemu helper process.


> which point there's no guarantee the QEMU version hasn't been upgraded -
> or the user could have requested a custom QEMU binary to virt-install,
> etc.
>

The point is to tie the qemu binary tightly to the helper process. If they
are incompatible, your installation is broken and it should fail to start.
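The "fail to start on mismatch" idea could be sketched roughly like this (all names here are illustrative, not actual QEMU or swtpm API): both binaries come from the same tree, so they can embed the same version token and refuse to talk to anything else.

```c
#include <string.h>

/* Hypothetical sketch: QEMU and its private helper are built from the
 * same source tree, so both embed the same compile-time token and a
 * mismatch aborts startup instead of risking silent incompatibility. */
#define HELPER_PROTO_TOKEN "qemu-swtpm-2.9.0"

/* First message the helper would send on its control socket. */
static const char *helper_hello(void)
{
    return HELPER_PROTO_TOKEN;
}

/* QEMU side: reject the helper unless the token matches exactly. */
static int check_helper(const char *hello)
{
    return strcmp(hello, HELPER_PROTO_TOKEN) == 0 ? 0 : -1;
}
```

With an exact-match check there is no version-negotiation matrix to maintain, which is the point of avoiding a stable ABI.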

-- 
Marc-André Lureau

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:30                                                             ` Michael S. Tsirkin
@ 2017-03-01 19:24                                                               ` Stefan Berger
  2017-03-01 23:36                                                                 ` Michael S. Tsirkin
  0 siblings, 1 reply; 96+ messages in thread
From: Stefan Berger @ 2017-03-01 19:24 UTC (permalink / raw)
  To: Michael S. Tsirkin, Dr. David Alan Gilbert
  Cc: Stefan Berger, qemu-devel, hagen.lauer, Marc-André Lureau,
	Xu, Quan, silviu.vlasceanu, SERBAN, CRISTINA, SHIH, CHING C

On 03/01/2017 01:30 PM, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 06:18:01PM +0000, Dr. David Alan Gilbert wrote:
>> * Michael S. Tsirkin (mst@redhat.com) wrote:
>>> On Wed, Mar 01, 2017 at 06:06:02PM +0000, Dr. David Alan Gilbert wrote:
>>>> * Michael S. Tsirkin (mst@redhat.com) wrote:
>>>>> On Wed, Mar 01, 2017 at 05:38:23PM +0000, Daniel P. Berrange wrote:
>>>>>> On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
>>>>>>> On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
>>>>>>>> On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
>>>>>>>>> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
>>>>>>>>>> On Wed, Mar 01, 2017 at 04:31:04PM +0000, Daniel P. Berrange wrote:
>>>>>>>>>>> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
>>>>>>>>>>>> On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
>>>>>>>>>>>>> I had already proposed a linked-in version before I went to the out-of-process
>>>>>>>>>>>>> design. Anthony's concerns back then were related to the code not being trusted
>>>>>>>>>>>>> and a segfault in the code could bring down all of QEMU. That we have test
>>>>>>>>>>>>> suites running over it didn't work as an argument. Some of the test suites are
>>>>>>>>>>>>> private, though.
>>>>>>>>>>>> Given how bad the alternative is maybe we should go back to that one.
>>>>>>>>>>>> Same argument can be made for any device and we aren't making
>>>>>>>>>>>> them out of process right now.
>>>>>>>>>>>>
>>>>>>>>>>>> IIMO it's less the in-process question (modularization
>>>>>>>>>>>> of QEMU has been on the agenda since years and I don't
>>>>>>>>>>>> think anyone is against it) it's more a code control/community question.
>>>>>>>>>>> I rather disagree. Modularization of QEMU has seen few results
>>>>>>>>>>> because it is generally a hard problem to solve when you have a
>>>>>>>>>>> complex pre-existing codebase.  I don't think code control has
>>>>>>>>>>> been a factor in this - as long as QEMU can clearly define its
>>>>>>>>>>> ABI/API between core & the modular pieces, it doesn't matter
>>>>>>>>>>> who owns the module. We've seen this with vhost-user which is
>>>>>>>>>>> essentially outsourcing network device backend impls to a 3rd
>>>>>>>>>>> party project.
>>>>>>>>>> And it was done precisely for community reasons.  dpdk/VPP community is
>>>>>>>>>> quite large and well funded but they just can't all grok QEMU.  They
>>>>>>>>>> work for hardware vendors and do baremetal things.  With the split we
>>>>>>>>>> can focus on virtualization and they can focus on moving packets around.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> QEMU's defined the vhost-user ABI/API and delegated
>>>>>>>>>>> impl to something else.
>>>>>>>>>> The vhost ABI isn't easy to maintain at all though. So I would not
>>>>>>>>>> commit to that lightly without a good reason.
>>>>>>>>>>
>>>>>>>>>> It will be way more painful if the ABI is dictated by a 3rd party
>>>>>>>>>> library.
>>>>>>>>> Who should define it?
>>>>>>>>>
>>>>>>>> No one. Put it in same source tree with QEMU and forget ABI stability
>>>>>>>> issues.
>>>>>>> You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
>>>>>>> These are multiple thousands of lines of code each and we'll break them
>>>>>>> apart into logical chunks and review them?
>>>>>> No, lets not make that mistake again - we only just got rid of the
>>>>>> libcacard smartcard library code from QEMU git.
>>>>>>
>>>>>> Regards,
>>>>>> Daniel
>>>>> I don't mean that as an external library. As an integral part of QEMU
>>>>> adhering to our coding style etc - why not?
>>>>>
>>>>> I don't know what are the other options.  How is depending on an ABI
>>>>> with a utility with no other users and not packaged by most distros
>>>>> good? You are calling out to a CUSE device but who's reviewing that
>>>>> code?
>>>>>
>>>>> vl.c weighs in at 4500 lines of code. Several thousand lines is
>>>>> not small but not unmanageable.
>>>>
>>>> That's 4500 lines of fairly generic code; not like the TPM where the number
>>>> of people who really understand the details of it is pretty slim.
>>>>
>>>> It's better on most counts to have it as a separate process.
>>>>
>>>> Dave
>>> Separate process we start and stop automatically I don't mind. A
>>> separate tree with a distinct coding style where no one will ever even
>>> look at it? Not so much.
>> That code is used elsewhere anyway,
> Who uses it? Who packages it? Fedora doesn't ...
>
>> so asking them to change the coding style
>> isn't very nice.
>> Even if they change the coding style it doesn't mean you're suddenly going to
>> understand how a TPM works in detail and be able to review it.
> I did in the past but I didn't keep abreast of the recent developments.
>
>> Anyway, having it in a separate process locked down by SELinux means that even
>> if it does go horribly wrong it won't break qemu.
>>
>> Dave
> Since qemu does blocking ioctls on it and doesn't validate results
> too much it sure can break QEMU - anything from DOS to random
> code execution. That's why we want to keep it in tree and
> start it ourselves - I don't want CVEs claiming not validating
> some parameter we get from it is a remote code execution.
> It should be just a library that yes, we can keep out of
> process for extra security but no, we can't just put random
> stuff in there and never care.

So then the question is whether the implementation is hopelessly broken 
or whether we can defend against buffer overflows so that remote code 
execution from a malicious TPM emulator cannot actually happen. I thought 
I was properly checking the allocated buffer for size and that we won't 
receive more than the expected number of bytes, but maybe it needs an 
additional check for unreasonable input.

Example of such code is here:

https://github.com/stefanberger/qemu-tpm/commit/27d332dc3b2c6bfd0fcd38e69f5c899651f3a5d8#diff-c9d7e2e1d4b17b93ca5580ec2d0d204aR188
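A minimal sketch of the kind of bounds check being described, with illustrative names only (the linked commit is the real code): never trust the length the emulator reports, and reject anything shorter than a TPM response header (10 bytes) or larger than the preallocated buffer before copying.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch, not the actual QEMU code: validate an
 * emulator-supplied response length before touching the buffer. */
#define TPM_RESP_MAX 4096

typedef struct {
    uint8_t buf[TPM_RESP_MAX];
    size_t len;
} TPMResponse;

/* Returns 0 on success, -1 if the length is unreasonable.
 * A TPM response header alone (tag + paramSize + returnCode)
 * is 10 bytes, so anything shorter is malformed. */
static int tpm_copy_response(TPMResponse *resp, const uint8_t *data, size_t n)
{
    if (n < 10 || n > sizeof(resp->buf)) {
        return -1;
    }
    memcpy(resp->buf, data, n);
    resp->len = n;
    return 0;
}
```

The check caps what a malicious emulator can make QEMU copy, turning a potential overflow into a handled error.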


FYI:
TPM 1.2 in libtpms:

$ wc *.c *.h | grep total
   86130  352307 3227530 total


TPM 2 in TPM 2 preview branch of libtpms:

$ wc *.c *.h | grep total
   65318  319043 2651231 total

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 18:56                                                             ` Daniel P. Berrange
  2017-03-01 19:18                                                               ` Marc-André Lureau
@ 2017-03-01 22:22                                                               ` Michael S. Tsirkin
  1 sibling, 0 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 22:22 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Marc-André Lureau, Stefan Berger, Stefan Berger, qemu-devel,
	Dr. David Alan Gilbert, SERBAN, CRISTINA, Xu, Quan,
	silviu.vlasceanu, hagen.lauer, SHIH, CHING C

On Wed, Mar 01, 2017 at 06:56:17PM +0000, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 06:32:19PM +0000, Marc-André Lureau wrote:
> > Hi
> > 
> > On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > 
> > >
> > > > You're also tieing the code
> > > > into the QEMU release cycle, again for no tangible benefit.
> > >
> > > No need for ABI stability would be the benefit.
> > >
> > 
> > We are talking about the control channel ABI (the data channel is using TCG
> > defined command streams afaict - don't remember what it is called)
> > 
> > 
> > >
> > > > Conceptually
> > > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> > >
> > > Maybe it could but it isn't.
> > >
> > 
> > Right, it would be reasonable to have qemu provide its own private "swtpm"
> > (linking with libtpms, doing most of the job), that way it wouldn't have to
> > rely on a stable ABI (as long as the process isn't shared across different
> > qemu versions, which should be quite easy to achieve)
> 
> I think we need to expect to have a stable ABI no matter what. During
> upgrade cycles, it is desirable to be able to upgrade the swtpm process
> associated with a running VM.

Why? It should be part of the same rpm as QEMU,
upgrading QEMU requires VM restart and so should this.

We really really do not want a stable ABI if we can get
away with not having one.

> Whether this is done by restarting the
> process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
> FD open, you still end up with newer swtpm talking to an older QEMU. Or
> conversely you might have setup swtpm processes to populate a number of
> CUSE devices, and then later launch QEMU binaries to connect to them - at
> which point there's no guarantee the QEMU version hasn't been upgraded -
> or the user could have requested a custom QEMU binary to virt-install,
> etc.

Sounds like feature creep to me. Separate processes for parts of QEMU
for extra security make sense. A stable ABI between parts does not.

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 19:24                                                               ` Stefan Berger
@ 2017-03-01 23:36                                                                 ` Michael S. Tsirkin
  2017-03-01 23:42                                                                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 23:36 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Dr. David Alan Gilbert, Stefan Berger, qemu-devel, hagen.lauer,
	Marc-André Lureau, Xu, Quan, silviu.vlasceanu, SERBAN,
	CRISTINA, SHIH, CHING C

On Wed, Mar 01, 2017 at 02:24:20PM -0500, Stefan Berger wrote:
> > > Anyway, having it in a separate process locked down by SELinux means that even
> > > if it does go horribly wrong it won't break qemu.
> > > 
> > > Dave
> > Since qemu does blocking ioctls on it and doesn't validate results
> > too much it sure can break QEMU - anything from DOS to random
> > code execution. That's why we want to keep it in tree and
> > start it ourselves - I don't want CVEs claiming not validating
> > some parameter we get from it is a remote code execution.
> > It should be just a library that yes, we can keep out of
> > process for extra security but no, we can't just put random
> > stuff in there and never care.
> 
> So then the question is whether the implementation is hopelessly broken or
> whether we can defend against buffer overflows so that remote code execution
> from a malicious TPM emulator cannot actually happen. I thought I was properly
> checking the allocated buffer for size and that we won't receive more than
> the expected number of bytes, but maybe it needs an additional check for
> unreasonable input.
> 
> Example of such code is here:
> 
> https://github.com/stefanberger/qemu-tpm/commit/27d332dc3b2c6bfd0fcd38e69f5c899651f3a5d8#diff-c9d7e2e1d4b17b93ca5580ec2d0d204aR188
> 
> 
> FYI:
> TPM 1.2 in libtpms:
> 
> $ wc *.c *.h | grep total
>   86130  352307 3227530 total
> 
> 
> TPM 2 in TPM 2 preview branch of libtpms:
> 
> $ wc *.c *.h | grep total
>   65318  319043 2651231 total

libtpms seems to be packaged and used outside QEMU; I'm not saying we need
to have that in tree. I thought we were discussing the swtpm CUSE thing.

> 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
  2017-03-01 23:36                                                                 ` Michael S. Tsirkin
@ 2017-03-01 23:42                                                                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 96+ messages in thread
From: Michael S. Tsirkin @ 2017-03-01 23:42 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Dr. David Alan Gilbert, Stefan Berger, qemu-devel, hagen.lauer,
	Marc-André Lureau, Xu, Quan, silviu.vlasceanu, SERBAN,
	CRISTINA, SHIH, CHING C

On Thu, Mar 02, 2017 at 01:36:25AM +0200, Michael S. Tsirkin wrote:
> libtpms seems to be packaged and used outside QEMU; I'm not saying we need
> to have that in tree. I thought we were discussing the swtpm CUSE thing.

In any case, I'd like to stress that my comments aren't absolute.  I
merely described what it would take for me to be able to review these
patches properly, but others more motivated might be able to do it with
the current cuse architecture. Should someone else review and merge
these patches, I won't comment.

-- 
MST

^ permalink raw reply	[flat|nested] 96+ messages in thread

end of thread, other threads:[~2017-03-01 23:42 UTC | newest]

Thread overview: 96+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-04 15:23 [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Stefan Berger
2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM Stefan Berger
2016-01-20 15:00   ` Daniel P. Berrange
2016-01-20 15:31     ` Stefan Berger
     [not found]     ` <201601201532.u0KFW2q2019737@d03av03.boulder.ibm.com>
2016-01-20 15:46       ` Daniel P. Berrange
2016-01-20 15:54         ` Stefan Berger
2016-01-20 16:03           ` Michael S. Tsirkin
2016-01-20 16:13             ` Stefan Berger
2016-01-20 16:22           ` Daniel P. Berrange
2016-01-21 11:36             ` Dr. David Alan Gilbert
2016-05-31 18:58               ` BICKFORD, JEFFREY E
2016-05-31 19:10                 ` Dr. David Alan Gilbert
2016-06-01 22:54                   ` BICKFORD, JEFFREY E
2016-06-13 10:56                   ` Stefan Berger
2016-06-01  1:58                 ` Xu, Quan
2016-06-13 11:02                   ` Stefan Berger
2016-06-15 19:30                     ` Dr. David Alan Gilbert
2016-06-15 20:54                       ` Stefan Berger
2016-06-16  8:05                         ` Dr. David Alan Gilbert
2016-06-16  8:25                           ` Daniel P. Berrange
2016-06-16 15:20                             ` Stefan Berger
2017-03-01 12:25                             ` Stefan Berger
2017-03-01 12:54                               ` Daniel P. Berrange
2017-03-01 13:25                                 ` Stefan Berger
2017-03-01 14:17                                   ` Marc-André Lureau
2017-03-01 14:50                                     ` Stefan Berger
2017-03-01 15:24                                       ` Marc-André Lureau
2017-03-01 15:58                                         ` Stefan Berger
2017-03-01 16:22                                       ` Michael S. Tsirkin
2017-03-01 16:31                                         ` Daniel P. Berrange
2017-03-01 16:57                                           ` Dr. David Alan Gilbert
2017-03-01 17:02                                           ` Michael S. Tsirkin
2017-03-01 17:12                                             ` Stefan Berger
2017-03-01 17:16                                               ` Michael S. Tsirkin
2017-03-01 17:20                                                 ` Daniel P. Berrange
2017-03-01 18:03                                                   ` Michael S. Tsirkin
2017-03-01 17:25                                                 ` Stefan Berger
2017-03-01 17:38                                                   ` Daniel P. Berrange
2017-03-01 17:58                                                     ` Michael S. Tsirkin
2017-03-01 18:06                                                       ` Dr. David Alan Gilbert
2017-03-01 18:09                                                         ` Michael S. Tsirkin
2017-03-01 18:18                                                           ` Dr. David Alan Gilbert
2017-03-01 18:30                                                             ` Michael S. Tsirkin
2017-03-01 19:24                                                               ` Stefan Berger
2017-03-01 23:36                                                                 ` Michael S. Tsirkin
2017-03-01 23:42                                                                   ` Michael S. Tsirkin
2017-03-01 18:11                                                       ` Daniel P. Berrange
2017-03-01 18:20                                                         ` Michael S. Tsirkin
2017-03-01 18:32                                                           ` Marc-André Lureau
2017-03-01 18:56                                                             ` Daniel P. Berrange
2017-03-01 19:18                                                               ` Marc-André Lureau
2017-03-01 22:22                                                               ` Michael S. Tsirkin
2017-03-01 17:36                                               ` Daniel P. Berrange
2017-03-01 15:18                                   ` Daniel P. Berrange
2017-03-01 15:40                                     ` Stefan Berger
2017-03-01 16:13                                       ` Daniel P. Berrange
2016-06-16 13:58                           ` SERBAN, CRISTINA
2016-06-16 15:04                           ` Stefan Berger
2016-06-16 15:22                             ` Dr. David Alan Gilbert
2016-06-16 15:35                               ` Stefan Berger
2016-06-16 17:54                                 ` Dr. David Alan Gilbert
2016-06-16 18:43                                   ` Stefan Berger
2016-06-16 19:24                                     ` Dr. David Alan Gilbert
2016-06-16 21:28                                       ` Stefan Berger
2017-02-28 18:31                                         ` Marc-André Lureau
2017-03-01 12:32                                           ` Stefan Berger
2016-01-28 13:15       ` Daniel P. Berrange
2016-01-28 14:51         ` Stefan Berger
2016-01-20 15:20   ` Michael S. Tsirkin
2016-01-20 15:36     ` Stefan Berger
     [not found]     ` <201601201536.u0KFanwG004844@d01av04.pok.ibm.com>
2016-01-20 15:58       ` Michael S. Tsirkin
2016-01-20 16:06         ` Stefan Berger
2016-01-20 18:54           ` Michael S. Tsirkin
2016-01-20 21:25             ` Stefan Berger
2016-01-21  5:08               ` Michael S. Tsirkin
2016-01-21  5:41                 ` Xu, Quan
2016-01-21  9:19                   ` Michael S. Tsirkin
2016-01-21 12:09                 ` Stefan Berger
2016-01-20 16:15         ` Daniel P. Berrange
2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 2/4] Introduce condition to notify waiters of completed command Stefan Berger
2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 3/4] Introduce condition in TPM backend for notification Stefan Berger
2016-01-04 15:23 ` [Qemu-devel] [PATCH v5 4/4] Add support for VM suspend/resume for TPM TIS Stefan Berger
2016-01-05  1:26 ` [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM Xu, Quan
2016-01-05  3:36   ` Stefan Berger
2016-01-20  1:40 ` Xu, Quan
2016-01-20  9:23   ` Hagen Lauer
2016-01-20  9:41     ` Xu, Quan
2016-01-20 14:58 ` Daniel P. Berrange
2016-01-20 15:23   ` Stefan Berger
     [not found]   ` <201601201523.u0KFNwOH000398@d01av04.pok.ibm.com>
2016-01-20 15:42     ` Daniel P. Berrange
2016-01-20 19:51       ` Stefan Berger
     [not found]       ` <OF1010A111.39918A93-ON00257F40.006CA5ED-85257F40.006D2225@LocalDomain>
2016-01-20 20:16         ` Stefan Berger
2016-01-21 11:40           ` Dr. David Alan Gilbert
2016-01-21 12:31             ` Stefan Berger
     [not found]             ` <201601211231.u0LCVGCZ021111@d01av01.pok.ibm.com>
2016-01-21 14:53               ` Dr. David Alan Gilbert
     [not found]             ` <OF7ED031CA.CDD3196F-ON00257F41.004305BB-85257F41.0044C71A@LocalDomain>
2016-02-01 17:40               ` Stefan Berger
