* [PATCH] tpm: Add driver for TPM over virtio
@ 2019-02-22  2:14 David Tolnay
  2019-02-22  5:51   ` Michael S. Tsirkin
                   ` (3 more replies)
  0 siblings, 4 replies; 57+ messages in thread
From: David Tolnay @ 2019-02-22  2:14 UTC (permalink / raw)
  To: Peter Huewe, Jarkko Sakkinen
  Cc: Jason Gunthorpe, linux-integrity, Michael S. Tsirkin, Jason Wang,
	virtualization, dgreid, apronin

Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
kernel side of TPM over virtio.

Use case: TPM support is needed for performing trusted work from within
a virtual machine launched by Chrome OS.

Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
implementation of the virtio TPM device can be found in these two source
files:

- https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
- https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs

and is currently backed by the libtpm2 TPM simulator:

- https://chromium.googlesource.com/chromiumos/third_party/tpm2/

Reviewed-on: https://chromium-review.googlesource.com/1387655
Reviewed-by: Andrey Pronin <apronin@chromium.org>
Tested-by: David Tolnay <dtolnay@gmail.com>
Signed-off-by: David Tolnay <dtolnay@gmail.com>
---
UNRESOLVED:
The driver assumes virtio device number VIRTIO_ID_TPM=31. If there is
interest in accepting a virtio TPM driver into the kernel, the Chrome OS
team will coordinate with the OASIS virtio technical committee to secure
an agreed upon device number and include it in a later patchset.

 drivers/char/tpm/Kconfig      |   9 +
 drivers/char/tpm/Makefile     |   1 +
 drivers/char/tpm/tpm_virtio.c | 460 ++++++++++++++++++++++++++++++++++
 3 files changed, 470 insertions(+)
 create mode 100644 drivers/char/tpm/tpm_virtio.c

diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
index 536e55d3919f..8997060e248e 100644
--- a/drivers/char/tpm/Kconfig
+++ b/drivers/char/tpm/Kconfig
@@ -164,6 +164,15 @@ config TCG_VTPM_PROXY
 	  /dev/vtpmX and a server-side file descriptor on which the vTPM
 	  can receive commands.
 
+config TCG_VIRTIO_VTPM
+	tristate "Virtio vTPM"
+	depends on TCG_TPM
+	help
+	  This driver provides the guest kernel side of TPM over Virtio. If
+	  you are building Linux to run inside of a hypervisor that supports
+	  TPM over Virtio, say Yes and the virtualized TPM will be
+	  accessible from the guest.
+
 
 source "drivers/char/tpm/st33zp24/Kconfig"
 endif # TCG_TPM
diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
index a01c4cab902a..4f5d1071257a 100644
--- a/drivers/char/tpm/Makefile
+++ b/drivers/char/tpm/Makefile
@@ -33,3 +33,4 @@ obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
 obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
 obj-$(CONFIG_TCG_CRB) += tpm_crb.o
 obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
+obj-$(CONFIG_TCG_VIRTIO_VTPM) += tpm_virtio.o
diff --git a/drivers/char/tpm/tpm_virtio.c b/drivers/char/tpm/tpm_virtio.c
new file mode 100644
index 000000000000..f3239d983f18
--- /dev/null
+++ b/drivers/char/tpm/tpm_virtio.c
@@ -0,0 +1,460 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google Inc.
+ *
+ * Author: David Tolnay <dtolnay@gmail.com>
+ *
+ * ---
+ *
+ * Device driver for TPM over virtio.
+ *
+ * This driver employs a single virtio queue to handle both send and recv. TPM
+ * commands are sent over virtio to the hypervisor during a TPM send operation
+ * and responses are received over the same queue during a recv operation.
+ *
+ * The driver contains a single buffer that is the only buffer we ever place on
+ * the virtio queue. Commands are copied from the caller's command buffer into
+ * the driver's buffer before handing off to virtio, and responses are received
+ * into the driver's buffer then copied into the caller's response buffer. This
+ * allows us to be resilient to timeouts. When a send or recv operation times
+ * out, the caller is free to destroy their buffer; we don't want the hypervisor
+ * continuing to perform reads or writes against that destroyed buffer.
+ *
+ * This driver does not support concurrent send and recv operations. Mutual
+ * exclusion is upheld by the tpm_mutex lock held in tpm-interface.c around the
+ * calls to chip->ops->send and chip->ops->recv.
+ *
+ * The intended hypervisor-side implementation is as follows.
+ *
+ *     while true:
+ *         await next virtio buffer.
+ *         expect first descriptor in chain to be guest-to-host.
+ *         read tpm command from that buffer.
+ *         synchronously perform TPM work determined by the command.
+ *         expect second descriptor in chain to be host-to-guest.
+ *         write TPM response into that buffer.
+ *         place buffer on virtio used queue indicating how many bytes written.
+ */
+
+#include <linux/virtio_config.h>
+
+#include "tpm.h"
+
+/*
+ * Timeout duration when waiting on the hypervisor to complete its end of the
+ * TPM operation. This timeout is relatively high because certain TPM operations
+ * can take dozens of seconds.
+ */
+#define TPM_VIRTIO_TIMEOUT (120 * HZ)
+
+struct vtpm_device {
+	/*
+	 * Data structure for integration with the common code of the TPM driver
+	 * in tpm-chip.c.
+	 */
+	struct tpm_chip *chip;
+
+	/*
+	 * Virtio queue for sending TPM commands out of the virtual machine and
+	 * receiving TPM responses back from the hypervisor.
+	 */
+	struct virtqueue *vq;
+
+	/*
+	 * Completion that is notified when a virtio operation has been
+	 * fulfilled by the hypervisor.
+	 */
+	struct completion complete;
+
+	/*
+	 * Whether driver currently holds ownership of the virtqueue buffer.
+	 * When false, the hypervisor is in the process of reading or writing
+	 * the buffer and the driver must not touch it.
+	 */
+	bool driver_has_buffer;
+
+	/*
+	 * Whether during the most recent TPM operation, a virtqueue_kick failed
+	 * or a wait timed out.
+	 *
+	 * The next send or recv operation will attempt a kick upon seeing this
+	 * status. That should clear up the queue in the case that the
+	 * hypervisor became temporarily nonresponsive, such as by resource
+	 * exhaustion on the host. The extra kick enables recovery from kicks
+	 * going unnoticed by the hypervisor as well as recovery from virtio
+	 * callbacks going unnoticed by the guest kernel.
+	 */
+	bool needs_kick;
+
+	/* Number of bytes available to read from the virtqueue buffer. */
+	unsigned int readable;
+
+	/*
+	 * Buffer in which all virtio transfers take place. Buffer size is the
+	 * maximum legal TPM command or response message size.
+	 */
+	u8 virtqueue_buffer[TPM_BUFSIZE];
+};
+
+/*
+ * Wait for ownership of the virtqueue buffer.
+ *
+ * The why-string should begin with "waiting to..." or "waiting for..." with no
+ * trailing newline. It will appear in log output.
+ *
+ * Returns zero for success, otherwise negative error.
+ */
+static int vtpm_wait_for_buffer(struct vtpm_device *dev, const char *why)
+{
+	int ret;
+	bool did_kick;
+	struct tpm_chip *chip = dev->chip;
+	unsigned long deadline = jiffies + TPM_VIRTIO_TIMEOUT;
+
+	/* Kick queue if needed. */
+	if (dev->needs_kick) {
+		did_kick = virtqueue_kick(dev->vq);
+		if (!did_kick) {
+			dev_notice(&chip->dev, "kick failed; will retry\n");
+			return -EBUSY;
+		}
+		dev->needs_kick = false;
+	}
+
+	while (!dev->driver_has_buffer) {
+		unsigned long now = jiffies;
+
+		/* Check timeout, otherwise `deadline - now` may underflow. */
+		if (time_after_eq(now, deadline)) {
+			dev_warn(&chip->dev, "timed out %s\n", why);
+			dev->needs_kick = true;
+			return -ETIMEDOUT;
+		}
+
+		/*
+		 * Wait to be signaled by virtio callback.
+		 *
+		 * Positive ret is jiffies remaining until timeout when the
+		 * completion occurred, which means successful completion. Zero
+		 * ret is timeout. Negative ret is error.
+		 */
+		ret = wait_for_completion_killable_timeout(
+				&dev->complete, deadline - now);
+
+		/* Log if completion did not occur. */
+		if (ret == -ERESTARTSYS) {
+			/* Not a warning if it was simply interrupted. */
+			dev_dbg(&chip->dev, "interrupted %s\n", why);
+		} else if (ret == 0) {
+			dev_warn(&chip->dev, "timed out %s\n", why);
+			ret = -ETIMEDOUT;
+		} else if (ret < 0) {
+			dev_warn(&chip->dev, "failed while %s: error %d\n",
+					why, -ret);
+		}
+
+		/*
+		 * Return error if completion did not occur. Schedule kick to be
+		 * retried at the start of the next send/recv to help unblock
+		 * the queue.
+		 */
+		if (ret < 0) {
+			dev->needs_kick = true;
+			return ret;
+		}
+
+		/* Completion occurred. Expect response buffer back. */
+		if (virtqueue_get_buf(dev->vq, &dev->readable)) {
+			dev->driver_has_buffer = true;
+
+			if (dev->readable > TPM_BUFSIZE) {
+				dev_crit(&chip->dev,
+						"hypervisor bug: response exceeds max size, %u > %u\n",
+						dev->readable,
+						(unsigned int) TPM_BUFSIZE);
+				dev->readable = TPM_BUFSIZE;
+				return -EPROTO;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int vtpm_op_send(struct tpm_chip *chip, u8 *caller_buf, size_t len)
+{
+	int ret;
+	bool did_kick;
+	struct scatterlist sg_outbuf, sg_inbuf;
+	struct scatterlist *sgs[2] = { &sg_outbuf, &sg_inbuf };
+	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
+	u8 *virtqueue_buf = dev->virtqueue_buffer;
+
+	dev_dbg(&chip->dev, "%s %zu bytes\n", __func__, len);
+
+	if (len > TPM_BUFSIZE) {
+		dev_err(&chip->dev,
+				"command is too long, %zu > %zu\n",
+				len, (size_t) TPM_BUFSIZE);
+		return -EINVAL;
+	}
+
+	/*
+	 * Wait until hypervisor relinquishes ownership of the virtqueue buffer.
+	 *
+	 * This may block if the previous recv operation timed out in the guest
+	 * kernel but is still being processed by the hypervisor. Also may block
+	 * if send operations are performed back-to-back, such as if something
+	 * in the caller failed in between a send and recv.
+	 *
+	 * During normal operation absent of any errors or timeouts, this does
+	 * not block.
+	 */
+	ret = vtpm_wait_for_buffer(dev, "waiting to begin send");
+	if (ret)
+		return ret;
+
+	/* Driver owns virtqueue buffer and may now write into it. */
+	memcpy(virtqueue_buf, caller_buf, len);
+
+	/*
+	 * Enqueue the virtqueue buffer once as outgoing virtio data (written by
+	 * the virtual machine and read by the hypervisor) and again as incoming
+	 * data (written by the hypervisor and read by the virtual machine).
+	 * This step moves ownership of the virtqueue buffer from driver to
+	 * hypervisor.
+	 *
+	 * Note that we don't know here how big of a buffer the caller will use
+	 * with their later call to recv. We allow the hypervisor to write up to
+	 * the TPM max message size. If the caller ends up using a smaller
+	 * buffer with recv that is too small to hold the entire response, the
+	 * recv will return an error. This conceivably breaks TPM
+	 * implementations that want to produce a different verbosity of
+	 * response depending on the receiver's buffer size.
+	 */
+	sg_init_one(&sg_outbuf, virtqueue_buf, len);
+	sg_init_one(&sg_inbuf, virtqueue_buf, TPM_BUFSIZE);
+	ret = virtqueue_add_sgs(dev->vq, sgs, 1, 1, virtqueue_buf, GFP_KERNEL);
+	if (ret) {
+		dev_err(&chip->dev, "failed virtqueue_add_sgs\n");
+		return ret;
+	}
+
+	/* Kick the other end of the virtqueue after having added a buffer. */
+	did_kick = virtqueue_kick(dev->vq);
+	if (!did_kick) {
+		dev->needs_kick = true;
+		dev_notice(&chip->dev, "kick failed; will retry\n");
+
+		/*
+		 * We return 0 anyway because what the caller doesn't know can't
+		 * hurt them. They can call recv and it will retry the kick. If
+		 * that works, everything is fine.
+		 *
+		 * If the retry in recv fails too, they will get -EBUSY from
+		 * recv.
+		 */
+	}
+
+	/*
+	 * Hypervisor is now processing the TPM command asynchronously. It will
+	 * read the command from the output buffer and write the response into
+	 * the input buffer (which are the same buffer). When done, it will send
+	 * back the buffers over virtio and the driver's virtio callback will
+	 * complete dev->complete so that we know the response is ready to be
+	 * read.
+	 *
+	 * It is important to have copied data out of the caller's buffer into
+	 * the driver's virtqueue buffer because the caller is free to destroy
+	 * their buffer when this call returns. We can't avoid copying by
+	 * waiting here for the hypervisor to finish reading, because if that
+	 * wait times out, we return and the caller may destroy their buffer
+	 * while the hypervisor is continuing to read from it.
+	 */
+	dev->driver_has_buffer = false;
+	return 0;
+}
+
+static int vtpm_op_recv(struct tpm_chip *chip, u8 *caller_buf, size_t len)
+{
+	int ret;
+	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
+	u8 *virtqueue_buf = dev->virtqueue_buffer;
+
+	dev_dbg(&chip->dev, "%s\n", __func__);
+
+	/*
+	 * Wait until the virtqueue buffer is owned by the driver.
+	 *
+	 * This will usually block while the hypervisor finishes processing the
+	 * most recent TPM command.
+	 */
+	ret = vtpm_wait_for_buffer(dev, "waiting for TPM response");
+	if (ret)
+		return ret;
+
+	dev_dbg(&chip->dev, "received %u bytes\n", dev->readable);
+
+	if (dev->readable > len) {
+		dev_notice(&chip->dev,
+				"TPM response is bigger than receiver's buffer: %u > %zu\n",
+				dev->readable, len);
+		return -EINVAL;
+	}
+
+	/* Copy response over to the caller. */
+	memcpy(caller_buf, virtqueue_buf, dev->readable);
+
+	return dev->readable;
+}
+
+static void vtpm_op_cancel(struct tpm_chip *chip)
+{
+	/*
+	 * Cancel is not called in this driver's use of tpm-interface.c. It may
+	 * be triggered through tpm-sysfs but that can be implemented as needed.
+	 * Be aware that tpm-sysfs performs cancellation without holding the
+	 * tpm_mutex that protects our send and recv operations, so a future
+	 * implementation will need to consider thread safety of concurrent
+	 * send/recv and cancel.
+	 */
+	dev_notice(&chip->dev, "cancellation is not implemented\n");
+}
+
+static u8 vtpm_op_status(struct tpm_chip *chip)
+{
+	/*
+	 * Status is for TPM drivers that want tpm-interface.c to poll for
+	 * completion before calling recv. Usually this is when the hardware
+	 * needs to be polled i.e. there is no other way for recv to block on
+	 * the TPM command completion.
+	 *
+	 * Polling goes until `(status & complete_mask) == complete_val`. This
+	 * driver defines both complete_mask and complete_val as 0 and blocks on
+	 * our own completion object in recv instead.
+	 */
+	return 0;
+}
+
+static const struct tpm_class_ops vtpm_ops = {
+	.flags = TPM_OPS_AUTO_STARTUP,
+	.send = vtpm_op_send,
+	.recv = vtpm_op_recv,
+	.cancel = vtpm_op_cancel,
+	.status = vtpm_op_status,
+	.req_complete_mask = 0,
+	.req_complete_val = 0,
+};
+
+static void vtpm_virtio_complete(struct virtqueue *vq)
+{
+	struct virtio_device *vdev = vq->vdev;
+	struct vtpm_device *dev = vdev->priv;
+
+	complete(&dev->complete);
+}
+
+static int vtpm_probe(struct virtio_device *vdev)
+{
+	int err;
+	struct vtpm_device *dev;
+	struct virtqueue *vq;
+	struct tpm_chip *chip;
+
+	dev_dbg(&vdev->dev, "%s\n", __func__);
+
+	dev = kzalloc(sizeof(struct vtpm_device), GFP_KERNEL);
+	if (!dev) {
+		err = -ENOMEM;
+		dev_err(&vdev->dev, "failed kzalloc\n");
+		goto err_dev_alloc;
+	}
+	vdev->priv = dev;
+
+	vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
+	if (IS_ERR(vq)) {
+		err = PTR_ERR(vq);
+		dev_err(&vdev->dev, "failed virtio_find_single_vq\n");
+		goto err_virtio_find;
+	}
+	dev->vq = vq;
+
+	chip = tpm_chip_alloc(&vdev->dev, &vtpm_ops);
+	if (IS_ERR(chip)) {
+		err = PTR_ERR(chip);
+		dev_err(&vdev->dev, "failed tpm_chip_alloc\n");
+		goto err_chip_alloc;
+	}
+	dev_set_drvdata(&chip->dev, dev);
+	chip->flags |= TPM_CHIP_FLAG_TPM2;
+	dev->chip = chip;
+
+	init_completion(&dev->complete);
+	dev->driver_has_buffer = true;
+	dev->needs_kick = false;
+	dev->readable = 0;
+
+	/*
+	 * Required in order to enable vq use in probe function for auto
+	 * startup.
+	 */
+	virtio_device_ready(vdev);
+
+	err = tpm_chip_register(dev->chip);
+	if (err) {
+		dev_err(&vdev->dev, "failed tpm_chip_register\n");
+		goto err_chip_register;
+	}
+
+	return 0;
+
+err_chip_register:
+	put_device(&dev->chip->dev);
+err_chip_alloc:
+	vdev->config->del_vqs(vdev);
+err_virtio_find:
+	kfree(dev);
+err_dev_alloc:
+	return err;
+}
+
+static void vtpm_remove(struct virtio_device *vdev)
+{
+	struct vtpm_device *dev = vdev->priv;
+
+	/* Undo tpm_chip_register. */
+	tpm_chip_unregister(dev->chip);
+
+	/* Undo tpm_chip_alloc. */
+	put_device(&dev->chip->dev);
+
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	kfree(dev);
+}
+
+#define VIRTIO_ID_TPM 31
+
+static struct virtio_device_id id_table[] = {
+	{
+		.device = VIRTIO_ID_TPM,
+		.vendor = VIRTIO_DEV_ANY_ID,
+	},
+	{},
+};
+
+static struct virtio_driver vtpm_driver = {
+	.driver.name = KBUILD_MODNAME,
+	.driver.owner = THIS_MODULE,
+	.id_table = id_table,
+	.probe = vtpm_probe,
+	.remove = vtpm_remove,
+};
+
+module_virtio_driver(vtpm_driver);
+
+MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");
+MODULE_DESCRIPTION("Virtio vTPM Driver");
+MODULE_VERSION("1.0");
+MODULE_LICENSE("GPL");
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  2:14 [PATCH] tpm: Add driver for TPM over virtio David Tolnay
@ 2019-02-22  5:51   ` Michael S. Tsirkin
  2019-02-22 10:26 ` Jarkko Sakkinen
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22  5:51 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> kernel side of TPM over virtio.
> 
> Use case: TPM support is needed for performing trusted work from within
> a virtual machine launched by Chrome OS.
> 
> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> implementation of the virtio TPM device can be found in these two source
> files:
> 
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> 
> and is currently backed by the libtpm2 TPM simulator:
> 
> - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
> 
> Reviewed-on: https://chromium-review.googlesource.com/1387655
> Reviewed-by: Andrey Pronin <apronin@chromium.org>
> Tested-by: David Tolnay <dtolnay@gmail.com>
> Signed-off-by: David Tolnay <dtolnay@gmail.com>
> ---
> UNRESOLVED:
> The driver assumes virtio device number VIRTIO_ID_TPM=31. If there is
> interest in accepting a virtio TPM driver into the kernel, the Chrome OS
> team will coordinate with the OASIS virtio technical committee to secure
> an agreed upon device number and include it in a later patchset.

I am not a tpm expert but I don't see why not.


>  drivers/char/tpm/Kconfig      |   9 +
>  drivers/char/tpm/Makefile     |   1 +
>  drivers/char/tpm/tpm_virtio.c | 460 ++++++++++++++++++++++++++++++++++

Maintainer entry?
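(i.e. a stanza in the MAINTAINERS file covering the new source file;
purely an illustrative sketch, assuming the submitter as maintainer and
linux-integrity as the list:

	VIRTIO TPM DRIVER
	M:	David Tolnay <dtolnay@gmail.com>
	L:	linux-integrity@vger.kernel.org
	S:	Maintained
	F:	drivers/char/tpm/tpm_virtio.c
)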

>  3 files changed, 470 insertions(+)
>  create mode 100644 drivers/char/tpm/tpm_virtio.c
> 
> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> index 536e55d3919f..8997060e248e 100644
> --- a/drivers/char/tpm/Kconfig
> +++ b/drivers/char/tpm/Kconfig
> @@ -164,6 +164,15 @@ config TCG_VTPM_PROXY
>  	  /dev/vtpmX and a server-side file descriptor on which the vTPM
>  	  can receive commands.
>  
> +config TCG_VIRTIO_VTPM
> +	tristate "Virtio vTPM"
> +	depends on TCG_TPM
> +	help
> +	  This driver provides the guest kernel side of TPM over Virtio. If
> +	  you are building Linux to run inside of a hypervisor that supports
> +	  TPM over Virtio, say Yes and the virtualized TPM will be
> +	  accessible from the guest.
> +
>  
>  source "drivers/char/tpm/st33zp24/Kconfig"
>  endif # TCG_TPM
> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> index a01c4cab902a..4f5d1071257a 100644
> --- a/drivers/char/tpm/Makefile
> +++ b/drivers/char/tpm/Makefile
> @@ -33,3 +33,4 @@ obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
>  obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
>  obj-$(CONFIG_TCG_CRB) += tpm_crb.o
>  obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
> +obj-$(CONFIG_TCG_VIRTIO_VTPM) += tpm_virtio.o
> diff --git a/drivers/char/tpm/tpm_virtio.c b/drivers/char/tpm/tpm_virtio.c
> new file mode 100644
> index 000000000000..f3239d983f18
> --- /dev/null
> +++ b/drivers/char/tpm/tpm_virtio.c
> @@ -0,0 +1,460 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google Inc.
> + *
> + * Author: David Tolnay <dtolnay@gmail.com>
> + *
> + * ---
> + *
> + * Device driver for TPM over virtio.
> + *
> + * This driver employs a single virtio queue to handle both send and recv. TPM
> + * commands are sent over virtio to the hypervisor during a TPM send operation
> + * and responses are received over the same queue during a recv operation.
> + *
> + * The driver contains a single buffer that is the only buffer we ever place on
> + * the virtio queue. Commands are copied from the caller's command buffer into
> + * the driver's buffer before handing off to virtio, and responses are received
> + * into the driver's buffer then copied into the caller's response buffer. This
> + * allows us to be resilient to timeouts. When a send or recv operation times
> + * out, the caller is free to destroy their buffer; we don't want the hypervisor
> + * continuing to perform reads or writes against that destroyed buffer.
> + *
> + * This driver does not support concurrent send and recv operations. Mutual
> + * exclusion is upheld by the tpm_mutex lock held in tpm-interface.c around the
> + * calls to chip->ops->send and chip->ops->recv.
> + *
> + * The intended hypervisor-side implementation is as follows.
> + *
> + *     while true:
> + *         await next virtio buffer.
> + *         expect first descriptor in chain to be guest-to-host.
> + *         read tpm command from that buffer.
> + *         synchronously perform TPM work determined by the command.
> + *         expect second descriptor in chain to be host-to-guest.
> + *         write TPM response into that buffer.
> + *         place buffer on virtio used queue indicating how many bytes written.

That's fine I think, except that generally it should be legal for the
guest to split each buffer into several segments.
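
For example (an illustrative sketch, not code from the patch; buf, len,
vq and ret stand in for the patch's virtqueue_buffer, len, dev->vq and a
local return value), a guest could legally queue the same transfer as
two out segments plus one in segment, so the device must walk the
descriptor chain rather than assume exactly one buffer in each
direction:

	struct scatterlist sg_hdr, sg_body, sg_resp;
	struct scatterlist *sgs[3] = { &sg_hdr, &sg_body, &sg_resp };

	/* Command split at the TPM header boundary, purely as an example. */
	sg_init_one(&sg_hdr, buf, TPM_HEADER_SIZE);
	sg_init_one(&sg_body, buf + TPM_HEADER_SIZE, len - TPM_HEADER_SIZE);
	/* Response area, as in the patch. */
	sg_init_one(&sg_resp, buf, TPM_BUFSIZE);
	ret = virtqueue_add_sgs(vq, sgs, 2, 1, buf, GFP_KERNEL);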



> + */
> +
> +#include <linux/virtio_config.h>
> +
> +#include "tpm.h"
> +
> +/*
> + * Timeout duration when waiting on the hypervisor to complete its end of the
> + * TPM operation. This timeout is relatively high because certain TPM operations
> + * can take dozens of seconds.
> + */
> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)

Should we read this from device? E.g. a busy hypervisor might
take longer to respond ...
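
A minimal sketch of what that could look like, assuming a hypothetical
config layout and feature bit (neither is defined by this patch or by
the virtio spec today), with vdev being the virtio_device:

	struct virtio_tpm_config {
		__le32 timeout_sec;	/* hypothetical field */
	};

	u32 timeout_sec = 120;		/* fall back to the current default */
	unsigned long timeout_jiffies;

	if (virtio_has_feature(vdev, VIRTIO_TPM_F_TIMEOUT))	/* hypothetical */
		virtio_cread(vdev, struct virtio_tpm_config,
			     timeout_sec, &timeout_sec);
	timeout_jiffies = timeout_sec * HZ;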


> +
> +struct vtpm_device {
> +	/*
> +	 * Data structure for integration with the common code of the TPM driver
> +	 * in tpm-chip.c.
> +	 */
> +	struct tpm_chip *chip;
> +
> +	/*
> +	 * Virtio queue for sending TPM commands out of the virtual machine and
> +	 * receiving TPM responses back from the hypervisor.
> +	 */
> +	struct virtqueue *vq;
> +
> +	/*
> +	 * Completion that is notified when a virtio operation has been
> +	 * fulfilled by the hypervisor.
> +	 */
> +	struct completion complete;
> +
> +	/*
> +	 * Whether driver currently holds ownership of the virtqueue buffer.
> +	 * When false, the hypervisor is in the process of reading or writing
> +	 * the buffer and the driver must not touch it.
> +	 */
> +	bool driver_has_buffer;
> +
> +	/*
> +	 * Whether during the most recent TPM operation, a virtqueue_kick failed
> +	 * or a wait timed out.
> +	 *
> +	 * The next send or recv operation will attempt a kick upon seeing this
> +	 * status. That should clear up the queue in the case that the
> +	 * hypervisor became temporarily nonresponsive, such as by resource
> +	 * exhaustion on the host. The extra kick enables recovery from kicks
> +	 * going unnoticed by the hypervisor as well as recovery from virtio
> +	 * callbacks going unnoticed by the guest kernel.

Well not necessarily. E.g. virtqueue_kick does not
kick if hypervisor didn't enable notifications
(e.g. with event idx they get disabled automatically).
So it won't recover if hypervisor can discard
kicks.

I think this comment needs clarification.


> +	 */
> +	bool needs_kick;
> +
> +	/* Number of bytes available to read from the virtqueue buffer. */
> +	unsigned int readable;
> +
> +	/*
> +	 * Buffer in which all virtio transfers take place. Buffer size is the
> +	 * maximum legal TPM command or response message size.
> +	 */
> +	u8 virtqueue_buffer[TPM_BUFSIZE];
> +};
> +
> +/*
> + * Wait for ownership of the virtqueue buffer.
> + *
> + * The why-string should begin with "waiting to..." or "waiting for..." with no
> + * trailing newline. It will appear in log output.
> + *
> + * Returns zero for success, otherwise negative error.
> + */
> +static int vtpm_wait_for_buffer(struct vtpm_device *dev, const char *why)
> +{
> +	int ret;
> +	bool did_kick;
> +	struct tpm_chip *chip = dev->chip;
> +	unsigned long deadline = jiffies + TPM_VIRTIO_TIMEOUT;
> +
> +	/* Kick queue if needed. */
> +	if (dev->needs_kick) {
> +		did_kick = virtqueue_kick(dev->vq);
> +		if (!did_kick) {
> +			dev_notice(&chip->dev, "kick failed; will retry\n");
> +			return -EBUSY;
> +		}
> +		dev->needs_kick = false;
> +	}
> +
> +	while (!dev->driver_has_buffer) {
> +		unsigned long now = jiffies;
> +
> +		/* Check timeout, otherwise `deadline - now` may underflow. */
> +		if (time_after_eq(now, deadline)) {
> +			dev_warn(&chip->dev, "timed out %s\n", why);
> +			dev->needs_kick = true;
> +			return -ETIMEDOUT;
> +		}
> +
> +		/*
> +		 * Wait to be signaled by virtio callback.
> +		 *
> +		 * Positive ret is jiffies remaining until timeout when the
> +		 * completion occurred, which means successful completion. Zero
> +		 * ret is timeout. Negative ret is error.
> +		 */
> +		ret = wait_for_completion_killable_timeout(
> +				&dev->complete, deadline - now);
> +
> +		/* Log if completion did not occur. */
> +		if (ret == -ERESTARTSYS) {
> +			/* Not a warning if it was simply interrupted. */
> +			dev_dbg(&chip->dev, "interrupted %s\n", why);
> +		} else if (ret == 0) {
> +			dev_warn(&chip->dev, "timed out %s\n", why);
> +			ret = -ETIMEDOUT;

Should we check NEEDS_RESET bit and try to reset the device?
Or is that too drastic?
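
Roughly, a minimal sketch of the check (assuming the driver kept a
pointer to the virtio_device, which it currently does not):

	if (vdev->config->get_status(vdev) & VIRTIO_CONFIG_S_NEEDS_RESET) {
		dev_warn(&chip->dev, "virtio device needs reset\n");
		/*
		 * A full recovery path would reset the device and
		 * re-create the virtqueue before retrying.
		 */
	}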

> +		} else if (ret < 0) {
> +			dev_warn(&chip->dev, "failed while %s: error %d\n",
> +					why, -ret);
> +		}
> +
> +		/*
> +		 * Return error if completion did not occur. Schedule kick to be
> +		 * retried at the start of the next send/recv to help unblock
> +		 * the queue.
> +		 */
> +		if (ret < 0) {
> +			dev->needs_kick = true;
> +			return ret;
> +		}
> +
> +		/* Completion occurred. Expect response buffer back. */
> +		if (virtqueue_get_buf(dev->vq, &dev->readable)) {
> +			dev->driver_has_buffer = true;
> +
> +			if (dev->readable > TPM_BUFSIZE) {
> +				dev_crit(&chip->dev,
> +						"hypervisor bug: response exceeds max size, %u > %u\n",
> +						dev->readable,
> +						(unsigned int) TPM_BUFSIZE);
> +				dev->readable = TPM_BUFSIZE;
> +				return -EPROTO;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int vtpm_op_send(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> +{
> +	int ret;
> +	bool did_kick;
> +	struct scatterlist sg_outbuf, sg_inbuf;
> +	struct scatterlist *sgs[2] = { &sg_outbuf, &sg_inbuf };
> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> +
> +	dev_dbg(&chip->dev, "%s %zu bytes\n", __func__, len);
> +
> +	if (len > TPM_BUFSIZE) {
> +		dev_err(&chip->dev,
> +				"command is too long, %zu > %zu\n",
> +				len, (size_t) TPM_BUFSIZE);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * Wait until hypervisor relinquishes ownership of the virtqueue buffer.
> +	 *
> +	 * This may block if the previous recv operation timed out in the guest
> +	 * kernel but is still being processed by the hypervisor. Also may block
> +	 * if send operations are performed back-to-back, such as if something
> +	 * in the caller failed in between a send and recv.
> +	 *
> +	 * During normal operation absent of any errors or timeouts, this does
> +	 * not block.
> +	 */
> +	ret = vtpm_wait_for_buffer(dev, "waiting to begin send");
> +	if (ret)
> +		return ret;
> +
> +	/* Driver owns virtqueue buffer and may now write into it. */
> +	memcpy(virtqueue_buf, caller_buf, len);
> +
> +	/*
> +	 * Enqueue the virtqueue buffer once as outgoing virtio data (written by
> +	 * the virtual machine and read by the hypervisor) and again as incoming
> +	 * data (written by the hypervisor and read by the virtual machine).
> +	 * This step moves ownership of the virtqueue buffer from driver to
> +	 * hypervisor.
> +	 *
> +	 * Note that we don't know here how big of a buffer the caller will use
> +	 * with their later call to recv. We allow the hypervisor to write up to
> +	 * the TPM max message size. If the caller ends up using a smaller
> +	 * buffer with recv that is too small to hold the entire response, the
> +	 * recv will return an error. This conceivably breaks TPM
> +	 * implementations that want to produce a different verbosity of
> +	 * response depending on the receiver's buffer size.
> +	 */
> +	sg_init_one(&sg_outbuf, virtqueue_buf, len);
> +	sg_init_one(&sg_inbuf, virtqueue_buf, TPM_BUFSIZE);
> +	ret = virtqueue_add_sgs(dev->vq, sgs, 1, 1, virtqueue_buf, GFP_KERNEL);
> +	if (ret) {
> +		dev_err(&chip->dev, "failed virtqueue_add_sgs\n");
> +		return ret;
> +	}
> +
> +	/* Kick the other end of the virtqueue after having added a buffer. */
> +	did_kick = virtqueue_kick(dev->vq);
> +	if (!did_kick) {
> +		dev->needs_kick = true;
> +		dev_notice(&chip->dev, "kick failed; will retry\n");
> +
> +		/*
> +		 * We return 0 anyway because what the caller doesn't know can't
> +		 * hurt them. They can call recv and it will retry the kick. If
> +		 * that works, everything is fine.
> +		 *
> +		 * If the retry in recv fails too, they will get -EBUSY from
> +		 * recv.
> +		 */
> +	}
> +
> +	/*
> +	 * Hypervisor is now processing the TPM command asynchronously. It will
> +	 * read the command from the output buffer and write the response into
> +	 * the input buffer (which are the same buffer). When done, it will send
> +	 * back the buffers over virtio and the driver's virtio callback will
> +	 * complete dev->complete so that we know the response is ready to be
> +	 * read.
> +	 *
> +	 * It is important to have copied data out of the caller's buffer into
> +	 * the driver's virtqueue buffer because the caller is free to destroy
> +	 * their buffer when this call returns. We can't avoid copying by
> +	 * waiting here for the hypervisor to finish reading, because if that
> +	 * wait times out, we return and the caller may destroy their buffer
> +	 * while the hypervisor is continuing to read from it.
> +	 */
> +	dev->driver_has_buffer = false;
> +	return 0;
> +}
> +
> +static int vtpm_op_recv(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> +{
> +	int ret;
> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> +
> +	dev_dbg(&chip->dev, "%s\n", __func__);
> +
> +	/*
> +	 * Wait until the virtqueue buffer is owned by the driver.
> +	 *
> +	 * This will usually block while the hypervisor finishes processing the
> +	 * most recent TPM command.
> +	 */
> +	ret = vtpm_wait_for_buffer(dev, "waiting for TPM response");
> +	if (ret)
> +		return ret;
> +
> +	dev_dbg(&chip->dev, "received %u bytes\n", dev->readable);
> +
> +	if (dev->readable > len) {
> +		dev_notice(&chip->dev,
> +				"TPM response is bigger than receiver's buffer: %u > %zu\n",
> +				dev->readable, len);
> +		return -EINVAL;
> +	}
> +
> +	/* Copy response over to the caller. */
> +	memcpy(caller_buf, virtqueue_buf, dev->readable);
> +
> +	return dev->readable;
> +}
> +
> +static void vtpm_op_cancel(struct tpm_chip *chip)
> +{
> +	/*
> +	 * Cancel is not called in this driver's use of tpm-interface.c. It may
> +	 * be triggered through tpm-sysfs but that can be implemented as needed.
> +	 * Be aware that tpm-sysfs performs cancellation without holding the
> +	 * tpm_mutex that protects our send and recv operations, so a future
> +	 * implementation will need to consider thread safety of concurrent
> +	 * send/recv and cancel.
> +	 */
> +	dev_notice(&chip->dev, "cancellation is not implemented\n");
> +}
> +
> +static u8 vtpm_op_status(struct tpm_chip *chip)
> +{
> +	/*
> +	 * Status is for TPM drivers that want tpm-interface.c to poll for
> +	 * completion before calling recv. Usually this is when the hardware
> +	 * needs to be polled i.e. there is no other way for recv to block on
> +	 * the TPM command completion.
> +	 *
> +	 * Polling goes until `(status & complete_mask) == complete_val`. This
> +	 * driver defines both complete_mask and complete_val as 0 and blocks on
> +	 * our own completion object in recv instead.
> +	 */
> +	return 0;
> +}
> +
> +static const struct tpm_class_ops vtpm_ops = {
> +	.flags = TPM_OPS_AUTO_STARTUP,
> +	.send = vtpm_op_send,
> +	.recv = vtpm_op_recv,
> +	.cancel = vtpm_op_cancel,
> +	.status = vtpm_op_status,
> +	.req_complete_mask = 0,
> +	.req_complete_val = 0,
> +};
> +
> +static void vtpm_virtio_complete(struct virtqueue *vq)
> +{
> +	struct virtio_device *vdev = vq->vdev;
> +	struct vtpm_device *dev = vdev->priv;
> +
> +	complete(&dev->complete);
> +}
> +
> +static int vtpm_probe(struct virtio_device *vdev)
> +{
> +	int err;
> +	struct vtpm_device *dev;
> +	struct virtqueue *vq;
> +	struct tpm_chip *chip;
> +
> +	dev_dbg(&vdev->dev, "%s\n", __func__);
> +
> +	dev = kzalloc(sizeof(struct vtpm_device), GFP_KERNEL);
> +	if (!dev) {
> +		err = -ENOMEM;
> +		dev_err(&vdev->dev, "failed kzalloc\n");
> +		goto err_dev_alloc;
> +	}
> +	vdev->priv = dev;
> +
> +	vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
> +	if (IS_ERR(vq)) {
> +		err = PTR_ERR(vq);
> +		dev_err(&vdev->dev, "failed virtio_find_single_vq\n");
> +		goto err_virtio_find;
> +	}
> +	dev->vq = vq;
> +
> +	chip = tpm_chip_alloc(&vdev->dev, &vtpm_ops);
> +	if (IS_ERR(chip)) {
> +		err = PTR_ERR(chip);
> +		dev_err(&vdev->dev, "failed tpm_chip_alloc\n");
> +		goto err_chip_alloc;
> +	}
> +	dev_set_drvdata(&chip->dev, dev);
> +	chip->flags |= TPM_CHIP_FLAG_TPM2;
> +	dev->chip = chip;
> +
> +	init_completion(&dev->complete);
> +	dev->driver_has_buffer = true;
> +	dev->needs_kick = false;
> +	dev->readable = 0;
> +
> +	/*
> +	 * Required in order to enable vq use in probe function for auto
> +	 * startup.
> +	 */
> +	virtio_device_ready(vdev);
> +
> +	err = tpm_chip_register(dev->chip);
> +	if (err) {
> +		dev_err(&vdev->dev, "failed tpm_chip_register\n");
> +		goto err_chip_register;
> +	}
> +
> +	return 0;
> +
> +err_chip_register:
> +	put_device(&dev->chip->dev);
> +err_chip_alloc:
> +	vdev->config->del_vqs(vdev);
> +err_virtio_find:
> +	kfree(dev);
> +err_dev_alloc:
> +	return err;
> +}
> +
> +static void vtpm_remove(struct virtio_device *vdev)
> +{
> +	struct vtpm_device *dev = vdev->priv;
> +
> +	/* Undo tpm_chip_register. */
> +	tpm_chip_unregister(dev->chip);
> +
> +	/* Undo tpm_chip_alloc. */
> +	put_device(&dev->chip->dev);
> +
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	kfree(dev);
> +}
> +
> +#define VIRTIO_ID_TPM 31
> +
> +static struct virtio_device_id id_table[] = {
> +	{
> +		.device = VIRTIO_ID_TPM,
> +		.vendor = VIRTIO_DEV_ANY_ID,
> +	},
> +	{},
> +};

Let's write

static struct virtio_device_id id_table[] = {
        { VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
        { 0 },
};

for consistency with other virtio devices.

> +
> +static struct virtio_driver vtpm_driver = {
> +	.driver.name = KBUILD_MODNAME,
> +	.driver.owner = THIS_MODULE,
> +	.id_table = id_table,
> +	.probe = vtpm_probe,
> +	.remove = vtpm_remove,

Freeze/restore?
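
For reference, a rough sketch of what such handlers usually look like
for a single-vq virtio driver (untested; a real implementation would
also have to quiesce any buffer still owned by the hypervisor; the
vtpm_device fields and vtpm_virtio_complete come from the patch above):

	#ifdef CONFIG_PM_SLEEP
	static int vtpm_freeze(struct virtio_device *vdev)
	{
		vdev->config->reset(vdev);
		vdev->config->del_vqs(vdev);
		return 0;
	}

	static int vtpm_restore(struct virtio_device *vdev)
	{
		struct vtpm_device *dev = vdev->priv;

		dev->vq = virtio_find_single_vq(vdev, vtpm_virtio_complete,
						"vtpm");
		if (IS_ERR(dev->vq))
			return PTR_ERR(dev->vq);

		dev->driver_has_buffer = true;
		virtio_device_ready(vdev);
		return 0;
	}
	#endif

wired up via .freeze = vtpm_freeze and .restore = vtpm_restore in
vtpm_driver.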


> +};
> +
> +module_virtio_driver(vtpm_driver);
> +
> +MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");
> +MODULE_DESCRIPTION("Virtio vTPM Driver");
> +MODULE_VERSION("1.0");
> +MODULE_LICENSE("GPL");
> -- 
> 2.20.1

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  2:14 [PATCH] tpm: Add driver for TPM over virtio David Tolnay
  2019-02-22  5:51   ` Michael S. Tsirkin
@ 2019-02-22 10:26 ` Jarkko Sakkinen
  2019-02-22 15:23     ` Michael S. Tsirkin
  2019-02-22 10:30 ` Jarkko Sakkinen
  2019-02-22 15:30 ` James Bottomley
  3 siblings, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 10:26 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> kernel side of TPM over virtio.
> 
> Use case: TPM support is needed for performing trusted work from within
> a virtual machine launched by Chrome OS.
> 
> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> implementation of the virtio TPM device can be found in these two source
> files:
> 
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs

These files/links do not make sense for kernel testing. Please remove
them from the next version.

> and is currently backed by the libtpm2 TPM simulator:
> 
> - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
> 
> Reviewed-on: https://chromium-review.googlesource.com/1387655

A non-standard flag. Should be removed. Also

> Reviewed-by: Andrey Pronin <apronin@chromium.org>
> Tested-by: David Tolnay <dtolnay@gmail.com>
> Signed-off-by: David Tolnay <dtolnay@gmail.com>

Your SOB should be first, and you cannot peer test your own patches.
Please remove the tested-by.

The whole thing looks like an early draft. Why does the patch not have
an RFC tag? You should use one for early drafts. As posted, it reads as
"please merge this".

I don't have much knowledge of virtio. The commit message should at
least give a rough overview of what is meant by "kernel side" in this
context.

Since one cannot use a standard Linux environment to test this, I'm not
too optimistic about this getting merged any time soon. And since even
the commit message is broken, I don't think it makes sense to review the
code in detail at this point.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  2:14 [PATCH] tpm: Add driver for TPM over virtio David Tolnay
  2019-02-22  5:51   ` Michael S. Tsirkin
  2019-02-22 10:26 ` Jarkko Sakkinen
@ 2019-02-22 10:30 ` Jarkko Sakkinen
  2019-02-22 15:30 ` James Bottomley
  3 siblings, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 10:30 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> +MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");

Off-topic, but this started me thinking: is this field still required
for LKMs? The git log is the only non-flaky way to establish authorship.

Not related to this code review, but it feels like a misleading thing
to have.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 10:26 ` Jarkko Sakkinen
@ 2019-02-22 15:23     ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 15:23 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > kernel side of TPM over virtio.
> > 
> > Use case: TPM support is needed for performing trusted work from within
> > a virtual machine launched by Chrome OS.
> > 
> > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > implementation of the virtio TPM device can be found in these two source
> > files:
> > 
> > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> 
> These files/links do not make sense for kernel testing. Please remove
> them from the next version.

To clarify generally for a virtio device we want
- guest support
- device support
- spec

If the device is implemented in qemu and guest in linux kernel,
then there are lots of people familiar with these
programming environments, so sometimes we merge
guest and host code even if spec isn't written up at all.

If you don't want to do that there's a small number of people who can
properly review code, e.g. I don't think lots of people on this list are
familiar with crosvm.  One way to address this would be to build a QEMU
implementation. Another would be to write up a spec.  You can do both
too :)



> > and is currently backed by the libtpm2 TPM simulator:
> > 
> > - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
> > 
> > Reviewed-on: https://chromium-review.googlesource.com/1387655
> 
> A non-standard flag. Should be removed. Also
> 
> > Reviewed-by: Andrey Pronin <apronin@chromium.org>
> > Tested-by: David Tolnay <dtolnay@gmail.com>
> > Signed-off-by: David Tolnay <dtolnay@gmail.com>
> 
> Your SOB should be first, and you cannot peer test your own patches.
> Please remove the tested-by.
> 
> The whole thing looks like an early draft. Why does the patch not have
> an RFC tag? You should use one for early drafts. As posted, it reads as
> "please merge this".
> 
> I don't have much knowledge of virtio. The commit message should at
> least give a rough overview of what is meant by "kernel side" in this
> context.
> 
> Since one cannot use a standard Linux environment to test this, I'm not
> too optimistic about this getting merged any time soon. And since even
> the commit message is broken, I don't think it makes sense to review the
> code in detail at this point.
> 
> /Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  2:14 [PATCH] tpm: Add driver for TPM over virtio David Tolnay
                   ` (2 preceding siblings ...)
  2019-02-22 10:30 ` Jarkko Sakkinen
@ 2019-02-22 15:30 ` James Bottomley
  2019-02-22 21:16     ` Michael S. Tsirkin
  2019-02-22 22:00   ` David Tolnay
  3 siblings, 2 replies; 57+ messages in thread
From: James Bottomley @ 2019-02-22 15:30 UTC (permalink / raw)
  To: David Tolnay, Peter Huewe, Jarkko Sakkinen
  Cc: Jason Gunthorpe, linux-integrity, Michael S. Tsirkin, Jason Wang,
	virtualization, dgreid, apronin

On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> guest kernel side of TPM over virtio.

What's the use case for using this over the current non-virtio vTPM?
I always thought virtio was about guest to host transport efficiency,
but the physical TPM, being connected over a very slow bus, is about as
inefficient as you can get in that regard, so why do we need to use
virtio to drive the virtual one?

> Use case: TPM support is needed for performing trusted work from
> within a virtual machine launched by Chrome OS.

The current vTPM does this, what's the use case for your special one?

> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> implementation of the virtio TPM device can be found in these two
> source
> files:
> 
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce
> 5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce
> 5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> 
> and is currently backed by the libtpm2 TPM simulator:

So is the reason simply that crosvm did vTPM emulation differently from
qemu and thus needs a different driver?

James


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 15:23     ` Michael S. Tsirkin
  (?)
@ 2019-02-22 19:31     ` Jarkko Sakkinen
  2019-02-22 19:33       ` Jarkko Sakkinen
  2019-02-22 20:55         ` Michael S. Tsirkin
  -1 siblings, 2 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 19:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > > kernel side of TPM over virtio.
> > > 
> > > Use case: TPM support is needed for performing trusted work from within
> > > a virtual machine launched by Chrome OS.
> > > 
> > > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > > implementation of the virtio TPM device can be found in these two source
> > > files:
> > > 
> > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> > 
> > These files/links do not make sense for kernel testing. Please remove
> > them from the next version.
> 
> To clarify generally for a virtio device we want
> - guest support
> - device support
> - spec
> 
> If the device is implemented in qemu and guest in linux kernel,
> then there are lots of people familiar with these
> programming environments, so sometimes we merge
> guest and host code even if spec isn't written up at all.
> 
> If you don't want to do that there's a small number of people who can
> properly review code, e.g. I don't think lots of people on this list are
> familiar with crosvm.  One way to address this would be to build a QEMU
> implementation. Another would be to write up a spec.  You can do both
> too :)

I don't really understand your arguments.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 19:31     ` Jarkko Sakkinen
@ 2019-02-22 19:33       ` Jarkko Sakkinen
  2019-02-22 21:25           ` Michael S. Tsirkin
  2019-02-22 20:55         ` Michael S. Tsirkin
  1 sibling, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 19:33 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
> On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> > On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> > > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > > > kernel side of TPM over virtio.
> > > > 
> > > > Use case: TPM support is needed for performing trusted work from within
> > > > a virtual machine launched by Chrome OS.
> > > > 
> > > > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > > > implementation of the virtio TPM device can be found in these two source
> > > > files:
> > > > 
> > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> > > 
> > > These files/links do not make sense for kernel testing. Please remove
> > > them from the next version.
> > 
> > To clarify generally for a virtio device we want
> > - guest support
> > - device support
> > - spec
> > 
> > If the device is implemented in qemu and guest in linux kernel,
> > then there are lots of people familiar with these
> > programming environments, so sometimes we merge
> > guest and host code even if spec isn't written up at all.
> > 
> > If you don't want to do that there's a small number of people who can
> > properly review code, e.g. I don't think lots of people on this list are
> > familiar with crosvm.  One way to address this would be to build a QEMU
> > implementation. Another would be to write up a spec.  You can do both
> > too :)
> 
> I don't really understand your arguments.

... and I read your response a total of three times and did not find
any causal connection in it.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 19:31     ` Jarkko Sakkinen
@ 2019-02-22 20:55         ` Michael S. Tsirkin
  2019-02-22 20:55         ` Michael S. Tsirkin
  1 sibling, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 20:55 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
> On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> > On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> > > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > > > kernel side of TPM over virtio.
> > > > 
> > > > Use case: TPM support is needed for performing trusted work from within
> > > > a virtual machine launched by Chrome OS.
> > > > 
> > > > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > > > implementation of the virtio TPM device can be found in these two source
> > > > files:
> > > > 
> > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> > > 
> > > These files/links do not make sense for kernel testing. Please remove
> > > them from the next version.
> > 
> > To clarify generally for a virtio device we want
> > - guest support
> > - device support
> > - spec
> > 
> > If the device is implemented in qemu and guest in linux kernel,
> > then there are lots of people familiar with these
> > programming environments, so sometimes we merge
> > guest and host code even if spec isn't written up at all.
> > 
> > If you don't want to do that there's a small number of people who can
> > properly review code, e.g. I don't think lots of people on this list are
> > familiar with crosvm.  One way to address this would be to build a QEMU
> > implementation. Another would be to write up a spec.  You can do both
> > too :)
> 
> I don't really understand your arguments.
> 
> /Jarkko

Jarkko, I am not making any argument at all.

I am trying to suggest ways for this driver to get upstream.

And I am saying that while I agree availability of an implementation in
crosvm isn't sufficient, availability of a software device
implementation in QEMU is not an absolute prerequisite for including a
virtio driver in Linux.  If there's a virtio spec explaining how to
write one, that can be enough.

-- 
MST

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 15:30 ` James Bottomley
@ 2019-02-22 21:16     ` Michael S. Tsirkin
  2019-02-22 22:00   ` David Tolnay
  1 sibling, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 21:16 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 07:30:16AM -0800, James Bottomley wrote:
> On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > guest kernel side of TPM over virtio.
> 
> What's the use case for using this over the current non-virtio vTPM?
> I always thought virtio was about guest to host transport efficiency,
> but the physical TPM, being connected over a very slow bus, is about as
> inefficient as you can get in that regard, so why do we need to use
> virtio to drive the virtual one?

I can't say for sure about TPM.

But generally there are many reasons to do virtio rather than emulating
a hardware device.

Ease of extending the device could be one. E.g. what if you want to make
an extension that hardware does not support?  You are at cross-purposes
with a hardware vendor who can happen to be the driver maintainer as
well.

A decent specification, and a readiness to fix bugs in the right place
(e.g. the driver violates the spec? we'll fix the driver, not ask you to
work around it in the device), is another.

You can also download the spec without clicking "I agree" even once, and
it follows the Non-Assertion IPR Mode to help people avoid getting sued.

Stuff like that is conducive to getting things done.

-- 
MST

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 19:33       ` Jarkko Sakkinen
@ 2019-02-22 21:25           ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 21:25 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 09:33:05PM +0200, Jarkko Sakkinen wrote:
> On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
> > On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> > > > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > > > > kernel side of TPM over virtio.
> > > > > 
> > > > > Use case: TPM support is needed for performing trusted work from within
> > > > > a virtual machine launched by Chrome OS.
> > > > > 
> > > > > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > > > > implementation of the virtio TPM device can be found in these two source
> > > > > files:
> > > > > 
> > > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> > > > 
> > > > These files/links do not make sense for kernel testing. Please remove
> > > > them from the next version.
> > > 
> > > To clarify generally for a virtio device we want
> > > - guest support
> > > - device support
> > > - spec
> > > 
> > > If the device is implemented in qemu and guest in linux kernel,
> > > then there are lots of people familiar with these
> > > programming environments, so sometimes we merge
> > > guest and host code even if spec isn't written up at all.
> > > 
> > > If you don't want to do that there's a small number of people who can
> > > properly review code, e.g. I don't think lots of people on this list are
> > > familiar with crosvm.  One way to address this would be to build a QEMU
> > > implementation. Another would be to write up a spec.  You can do both
> > > too :)
> > 
> > I don't really understand your arguments.
> 
> ... and I read your response a total of three times and did not find
> any causal connection in it.
> 
> /Jarkko

Thanks for spending the time reading my response.  What was included in
it was a general suggestion for a virtio based driver to be acceptable
in upstream Linux.

You pointed out that a pointer to a prototype implementation in Rust
isn't relevant. However, FYI just posting guest code and asking for it
to be merged alone won't work for a virtio driver either. I am merely
trying to speed things up instead of having the contributor repost with
a tweaked commit log just to immediately get another set of nacks.

-- 
MST

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 20:55         ` Michael S. Tsirkin
  (?)
@ 2019-02-22 21:30         ` Jarkko Sakkinen
  -1 siblings, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 21:30 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 03:55:33PM -0500, Michael S. Tsirkin wrote:
> Jarkko, I am not making any argument at all.
> 
> I am trying to suggest ways for this driver to get upstream.
> 
> And I am saying that while I agree availability of an implementation in
> crosvm isn't sufficient, availability of a software device
> implementation in QEMU is not an absolute prerequisite for including a
> virtio driver in Linux.  If there's a virtio spec explaining how to
> write one, that can be enough.

QEMU is definitely not an absolute requirement for anything, but I could
not find crosvm in an easily distributable form for any major
distribution.

The only reason for favoring QEMU is that I can actually install it
without hassle. I have no idea how to get crosvm up and running.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:16     ` Michael S. Tsirkin
  (?)
@ 2019-02-22 21:31     ` Jason Gunthorpe
  2019-02-22 21:59       ` Jarkko Sakkinen
  -1 siblings, 1 reply; 57+ messages in thread
From: Jason Gunthorpe @ 2019-02-22 21:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: James Bottomley, David Tolnay, Peter Huewe, Jarkko Sakkinen,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 04:16:01PM -0500, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 07:30:16AM -0800, James Bottomley wrote:
> > On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > > guest kernel side of TPM over virtio.
> > 
> > What's the use case for using this over the current non-virtio vTPM?
> > I always thought virtio was about guest to host transport efficiency,
> > but the physical TPM, being connected over a very slow bus, is about as
> > inefficient as you can get in that regard, so why do we need to use
> > virtio to drive the virtual one?
> 
> I can't say for sure about TPM.
> 
> But generally there are many reasons to do virtio rather than emulating
> a hardware device.

We already have a xen 'virtioish' TPM driver, so I don't think there
is a good reason to block a virtio driver if someone cares about
it. There are enough good reasons to prefer virtio to other options,
IMHO.

Provided it meets the general requirements for new virtio stuff you
outlined.

Jason

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  5:51   ` Michael S. Tsirkin
  (?)
@ 2019-02-22 21:40   ` David Tolnay
  2019-02-22 22:24       ` Michael S. Tsirkin
  -1 siblings, 1 reply; 57+ messages in thread
From: David Tolnay @ 2019-02-22 21:40 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
> On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
>> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
>> kernel side of TPM over virtio.
>>
>> Use case: TPM support is needed for performing trusted work from within
>> a virtual machine launched by Chrome OS.
>>
>> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
>> implementation of the virtio TPM device can be found in these two source
>> files:
>>
>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
>>
>> and is currently backed by the libtpm2 TPM simulator:
>>
>> - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
>>
>> Reviewed-on: https://chromium-review.googlesource.com/1387655
>> Reviewed-by: Andrey Pronin <apronin@chromium.org>
>> Tested-by: David Tolnay <dtolnay@gmail.com>
>> Signed-off-by: David Tolnay <dtolnay@gmail.com>
>> ---
>> UNRESOLVED:
>> The driver assumes virtio device number VIRTIO_ID_TPM=31. If there is
>> interest in accepting a virtio TPM driver into the kernel, the Chrome OS
>> team will coordinate with the OASIS virtio technical committee to secure
>> an agreed upon device number and include it in a later patchset.
> 
> I am not a tpm expert but I don't see why not.
> 
> 
>>  drivers/char/tpm/Kconfig      |   9 +
>>  drivers/char/tpm/Makefile     |   1 +
>>  drivers/char/tpm/tpm_virtio.c | 460 ++++++++++++++++++++++++++++++++++
> 
> Maintainer entry?

Good call, I will add one.

Could you let me know what workflow would work best for you? I can direct
patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
your way only once they have been reviewed and accepted there. Or I can list one
or both of the linux-integrity@v.k.o and virtualization@l.l-f.o lists along with
a list for Chrome OS reviewers.

Either way, we'll want eyes from people who know virtio and from people who know
TPM on most changes.
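
For concreteness, the MAINTAINERS entry I have in mind is roughly the
following. Treat it as a sketch; the section name is my invention and the
exact lists depend on the workflow question above:

	VIRTIO TPM DRIVER
	M:	David Tolnay <dtolnay@gmail.com>
	L:	linux-integrity@vger.kernel.org
	L:	virtualization@lists.linux-foundation.org
	S:	Maintained
	F:	drivers/char/tpm/tpm_virtio.c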


> 
>>  3 files changed, 470 insertions(+)
>>  create mode 100644 drivers/char/tpm/tpm_virtio.c
>>
>> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
>> index 536e55d3919f..8997060e248e 100644
>> --- a/drivers/char/tpm/Kconfig
>> +++ b/drivers/char/tpm/Kconfig
>> @@ -164,6 +164,15 @@ config TCG_VTPM_PROXY
>>  	  /dev/vtpmX and a server-side file descriptor on which the vTPM
>>  	  can receive commands.
>>  
>> +config TCG_VIRTIO_VTPM
>> +	tristate "Virtio vTPM"
>> +	depends on TCG_TPM
>> +	help
>> +	  This driver provides the guest kernel side of TPM over Virtio. If
>> +	  you are building Linux to run inside of a hypervisor that supports
>> +	  TPM over Virtio, say Yes and the virtualized TPM will be
>> +	  accessible from the guest.
>> +
>>  
>>  source "drivers/char/tpm/st33zp24/Kconfig"
>>  endif # TCG_TPM
>> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
>> index a01c4cab902a..4f5d1071257a 100644
>> --- a/drivers/char/tpm/Makefile
>> +++ b/drivers/char/tpm/Makefile
>> @@ -33,3 +33,4 @@ obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
>>  obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
>>  obj-$(CONFIG_TCG_CRB) += tpm_crb.o
>>  obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
>> +obj-$(CONFIG_TCG_VIRTIO_VTPM) += tpm_virtio.o
>> diff --git a/drivers/char/tpm/tpm_virtio.c b/drivers/char/tpm/tpm_virtio.c
>> new file mode 100644
>> index 000000000000..f3239d983f18
>> --- /dev/null
>> +++ b/drivers/char/tpm/tpm_virtio.c
>> @@ -0,0 +1,460 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright 2019 Google Inc.
>> + *
>> + * Author: David Tolnay <dtolnay@gmail.com>
>> + *
>> + * ---
>> + *
>> + * Device driver for TPM over virtio.
>> + *
>> + * This driver employs a single virtio queue to handle both send and recv. TPM
>> + * commands are sent over virtio to the hypervisor during a TPM send operation
>> + * and responses are received over the same queue during a recv operation.
>> + *
>> + * The driver contains a single buffer that is the only buffer we ever place on
>> + * the virtio queue. Commands are copied from the caller's command buffer into
>> + * the driver's buffer before handing off to virtio, and responses are received
>> + * into the driver's buffer then copied into the caller's response buffer. This
>> + * allows us to be resilient to timeouts. When a send or recv operation times
>> + * out, the caller is free to destroy their buffer; we don't want the hypervisor
>> + * continuing to perform reads or writes against that destroyed buffer.
>> + *
>> + * This driver does not support concurrent send and recv operations. Mutual
>> + * exclusion is upheld by the tpm_mutex lock held in tpm-interface.c around the
>> + * calls to chip->ops->send and chip->ops->recv.
>> + *
>> + * The intended hypervisor-side implementation is as follows.
>> + *
>> + *     while true:
>> + *         await next virtio buffer.
>> + *         expect first descriptor in chain to be guest-to-host.
>> + *         read tpm command from that buffer.
>> + *         synchronously perform TPM work determined by the command.
>> + *         expect second descriptor in chain to be host-to-guest.
>> + *         write TPM response into that buffer.
>> + *         place buffer on virtio used queue indicating how many bytes written.
> 
> That's fine I think except generally it should be legal for guest
> to split each buffer to several segments.

Acknowledged. I will adjust this comment.

To clarify, this means the hypervisor will need to accept a single descriptor
chain consisting of one or more guest-to-host descriptors which together form
the command, followed by one or more host-to-guest descriptors into which the
response may be written. Is that what you had in mind?

TPM commands and responses are both capped at a moderate size (4096 bytes). With
that in mind, is it still necessary to allow split buffers? We won't be dealing
with multi-megabyte transfers.
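
To make sure I understand the device-side requirement, here is an
illustration (not something the current driver does, error handling
elided) of a layout a guest could legally submit and which the device
would therefore need to tolerate, with the command split across two out
segments:

	struct scatterlist hdr, body, resp;
	struct scatterlist *sgs[3] = { &hdr, &body, &resp };

	/* 10-byte TPM command header in one segment, rest in another. */
	sg_init_one(&hdr, virtqueue_buf, 10);
	sg_init_one(&body, virtqueue_buf + 10, len - 10);

	/* Response buffer as a single in segment. */
	sg_init_one(&resp, virtqueue_buf, TPM_BUFSIZE);

	virtqueue_add_sgs(dev->vq, sgs, 2, 1, virtqueue_buf, GFP_KERNEL);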


>> + */
>> +
>> +#include <linux/virtio_config.h>
>> +
>> +#include "tpm.h"
>> +
>> +/*
>> + * Timeout duration when waiting on the hypervisor to complete its end of the
>> + * TPM operation. This timeout is relatively high because certain TPM operations
>> + * can take dozens of seconds.
>> + */
>> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
> 
> Should we read this from device? E.g. a busy hypervisor might
> take longer to respond ...

That's true. Do you have an example of an existing virtio driver that reads
timeout from the hypervisor?

To understand your perspective, do you see one of the following as being the
bigger advantage? -- either that we can use a timeout tighter than 120s for
hypervisors that believe they can respond quickly to any command, or that we can
relax the timeout beyond 120s for an especially overburdened hypervisor. The
first is nice because the caller in the guest will receive their error code
sooner in certain failure modes, but maybe that would be better addressed by
some other way of poking the host to find out whether it is still alive and
working. For the second, 120s is pretty generous and in our use case we would
just want to give up and retry much later at that point; I don't know how much
further it would be reasonable to grow a timeout.
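
If we do end up reading it from the device, a rough sketch of what I
would try is below. The config struct, its field, and the feature bit are
all hypothetical since there is no spec yet:

	struct virtio_tpm_config {
		__u32 request_timeout_sec;	/* hypothetical field */
	};

	/* In vtpm_probe(), falling back to the current 120s default. */
	u32 secs = 120;

	if (virtio_has_feature(vdev, VIRTIO_TPM_F_TIMEOUT))	/* made up */
		virtio_cread(vdev, struct virtio_tpm_config,
				request_timeout_sec, &secs);
	dev->timeout_jiffies = secs * HZ;	/* new field on vtpm_device */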


>> +
>> +struct vtpm_device {
>> +	/*
>> +	 * Data structure for integration with the common code of the TPM driver
>> +	 * in tpm-chip.c.
>> +	 */
>> +	struct tpm_chip *chip;
>> +
>> +	/*
>> +	 * Virtio queue for sending TPM commands out of the virtual machine and
>> +	 * receiving TPM responses back from the hypervisor.
>> +	 */
>> +	struct virtqueue *vq;
>> +
>> +	/*
>> +	 * Completion that is notified when a virtio operation has been
>> +	 * fulfilled by the hypervisor.
>> +	 */
>> +	struct completion complete;
>> +
>> +	/*
>> +	 * Whether driver currently holds ownership of the virtqueue buffer.
>> +	 * When false, the hypervisor is in the process of reading or writing
>> +	 * the buffer and the driver must not touch it.
>> +	 */
>> +	bool driver_has_buffer;
>> +
>> +	/*
>> +	 * Whether during the most recent TPM operation, a virtqueue_kick failed
>> +	 * or a wait timed out.
>> +	 *
>> +	 * The next send or recv operation will attempt a kick upon seeing this
>> +	 * status. That should clear up the queue in the case that the
>> +	 * hypervisor became temporarily nonresponsive, such as by resource
>> +	 * exhaustion on the host. The extra kick enables recovery from kicks
>> +	 * going unnoticed by the hypervisor as well as recovery from virtio
>> +	 * callbacks going unnoticed by the guest kernel.
> 
> Well not necessarily. E.g. virtqueue_kick does not
> kick if hypervisor didn't enable notifications
> (e.g. with event idx they get disabled automatically).
> So it won't recover if hypervisor can discard
> kicks.
> 
> I think this comment needs clarification.

Makes sense. I didn't think to consider "hypervisor discards kicks" as a failure
mode. :) Would virtqueue_kick return false in that case? If so, needs_kick will
stay true and the kick will keep being retried on subsequent send/recv
operations until the hypervisor has enabled notifications again.

I will clarify the comment to touch on that situation.
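
That said, if I'm reading virtqueue_kick() right, it returns true when
the device has suppressed notifications and only returns false when the
notify itself fails, so the return value alone wouldn't distinguish the
two cases. If that distinction matters, something like this sketch (not
in the current patch) would make it explicit:

	if (virtqueue_kick_prepare(dev->vq)) {
		if (!virtqueue_notify(dev->vq)) {
			/* Notify failed: device looks broken; retry later. */
			dev->needs_kick = true;
			return -EBUSY;
		}
	} else {
		/* Device has notifications suppressed; nothing to send. */
	}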


>> +	 */
>> +	bool needs_kick;
>> +
>> +	/* Number of bytes available to read from the virtqueue buffer. */
>> +	unsigned int readable;
>> +
>> +	/*
>> +	 * Buffer in which all virtio transfers take place. Buffer size is the
>> +	 * maximum legal TPM command or response message size.
>> +	 */
>> +	u8 virtqueue_buffer[TPM_BUFSIZE];
>> +};
>> +
>> +/*
>> + * Wait for ownership of the virtqueue buffer.
>> + *
>> + * The why-string should begin with "waiting to..." or "waiting for..." with no
>> + * trailing newline. It will appear in log output.
>> + *
>> + * Returns zero for success, otherwise negative error.
>> + */
>> +static int vtpm_wait_for_buffer(struct vtpm_device *dev, const char *why)
>> +{
>> +	int ret;
>> +	bool did_kick;
>> +	struct tpm_chip *chip = dev->chip;
>> +	unsigned long deadline = jiffies + TPM_VIRTIO_TIMEOUT;
>> +
>> +	/* Kick queue if needed. */
>> +	if (dev->needs_kick) {
>> +		did_kick = virtqueue_kick(dev->vq);
>> +		if (!did_kick) {
>> +			dev_notice(&chip->dev, "kick failed; will retry\n");
>> +			return -EBUSY;
>> +		}
>> +		dev->needs_kick = false;
>> +	}
>> +
>> +	while (!dev->driver_has_buffer) {
>> +		unsigned long now = jiffies;
>> +
>> +		/* Check timeout, otherwise `deadline - now` may underflow. */
>> +		if time_after_eq(now, deadline) {
>> +			dev_warn(&chip->dev, "timed out %s\n", why);
>> +			dev->needs_kick = true;
>> +			return -ETIMEDOUT;
>> +		}
>> +
>> +		/*
>> +		 * Wait to be signaled by virtio callback.
>> +		 *
>> +		 * Positive ret is jiffies remaining until timeout when the
>> +		 * completion occurred, which means successful completion. Zero
>> +		 * ret is timeout. Negative ret is error.
>> +		 */
>> +		ret = wait_for_completion_killable_timeout(
>> +				&dev->complete, deadline - now);
>> +
>> +		/* Log if completion did not occur. */
>> +		if (ret == -ERESTARTSYS) {
>> +			/* Not a warning if it was simply interrupted. */
>> +			dev_dbg(&chip->dev, "interrupted %s\n", why);
>> +		} else if (ret == 0) {
>> +			dev_warn(&chip->dev, "timed out %s\n", why);
>> +			ret = -ETIMEDOUT;
> 
> Should we check NEEDS_RESET bit and try to reset the device?
> Or is that too drastic?

I'll let you make the call on whether we need a reset implemented. This driver
was initially based on the simple virtio-rng driver, which doesn't reset (nor
does it detect timeouts). Let me know and I can give it a shot.
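
If it is worth doing, the timeout path could at least check the status
bit before giving up. A very rough, untested sketch, assuming vtpm_device
gains a pointer back to the virtio_device (it doesn't keep one today):

	/* On timeout, see whether the device has flagged itself as
	 * needing a reset before deciding how loudly to complain.
	 */
	if (dev->vdev->config->get_status(dev->vdev) &
			VIRTIO_CONFIG_S_NEEDS_RESET) {
		dev_warn(&chip->dev, "device needs reset\n");
		/* Tearing down and re-creating the vq, similar to
		 * remove + probe, would be the drastic option here.
		 */
		return -EIO;
	}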


> 
>> +		} else if (ret < 0) {
>> +			dev_warn(&chip->dev, "failed while %s: error %d\n",
>> +					why, -ret);
>> +		}
>> +
>> +		/*
>> +		 * Return error if completion did not occur. Schedule kick to be
>> +		 * retried at the start of the next send/recv to help unblock
>> +		 * the queue.
>> +		 */
>> +		if (ret < 0) {
>> +			dev->needs_kick = true;
>> +			return ret;
>> +		}
>> +
>> +		/* Completion occurred. Expect response buffer back. */
>> +		if (virtqueue_get_buf(dev->vq, &dev->readable)) {
>> +			dev->driver_has_buffer = true;
>> +
>> +			if (dev->readable > TPM_BUFSIZE) {
>> +				dev_crit(&chip->dev,
>> +						"hypervisor bug: response exceeds max size, %u > %u\n",
>> +						dev->readable,
>> +						(unsigned int) TPM_BUFSIZE);
>> +				dev->readable = TPM_BUFSIZE;
>> +				return -EPROTO;
>> +			}
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int vtpm_op_send(struct tpm_chip *chip, u8 *caller_buf, size_t len)
>> +{
>> +	int ret;
>> +	bool did_kick;
>> +	struct scatterlist sg_outbuf, sg_inbuf;
>> +	struct scatterlist *sgs[2] = { &sg_outbuf, &sg_inbuf };
>> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
>> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
>> +
>> +	dev_dbg(&chip->dev, __func__ " %zu bytes\n", len);
>> +
>> +	if (len > TPM_BUFSIZE) {
>> +		dev_err(&chip->dev,
>> +				"command is too long, %zu > %zu\n",
>> +				len, (size_t) TPM_BUFSIZE);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/*
>> +	 * Wait until hypervisor relinquishes ownership of the virtqueue buffer.
>> +	 *
>> +	 * This may block if the previous recv operation timed out in the guest
>> +	 * kernel but is still being processed by the hypervisor. Also may block
>> +	 * if send operations are performed back-to-back, such as if something
>> +	 * in the caller failed in between a send and recv.
>> +	 *
>> +	 * During normal operation absent of any errors or timeouts, this does
>> +	 * not block.
>> +	 */
>> +	ret = vtpm_wait_for_buffer(dev, "waiting to begin send");
>> +	if (ret)
>> +		return ret;
>> +
>> +	/* Driver owns virtqueue buffer and may now write into it. */
>> +	memcpy(virtqueue_buf, caller_buf, len);
>> +
>> +	/*
>> +	 * Enqueue the virtqueue buffer once as outgoing virtio data (written by
>> +	 * the virtual machine and read by the hypervisor) and again as incoming
>> +	 * data (written by the hypervisor and read by the virtual machine).
>> +	 * This step moves ownership of the virtqueue buffer from driver to
>> +	 * hypervisor.
>> +	 *
>> +	 * Note that we don't know here how big of a buffer the caller will use
>> +	 * with their later call to recv. We allow the hypervisor to write up to
>> +	 * the TPM max message size. If the caller ends up using a smaller
>> +	 * buffer with recv that is too small to hold the entire response, the
>> +	 * recv will return an error. This conceivably breaks TPM
>> +	 * implementations that want to produce a different verbosity of
>> +	 * response depending on the receiver's buffer size.
>> +	 */
>> +	sg_init_one(&sg_outbuf, virtqueue_buf, len);
>> +	sg_init_one(&sg_inbuf, virtqueue_buf, TPM_BUFSIZE);
>> +	ret = virtqueue_add_sgs(dev->vq, sgs, 1, 1, virtqueue_buf, GFP_KERNEL);
>> +	if (ret) {
>> +		dev_err(&chip->dev, "failed virtqueue_add_sgs\n");
>> +		return ret;
>> +	}
>> +
>> +	/* Kick the other end of the virtqueue after having added a buffer. */
>> +	did_kick = virtqueue_kick(dev->vq);
>> +	if (!did_kick) {
>> +		dev->needs_kick = true;
>> +		dev_notice(&chip->dev, "kick failed; will retry\n");
>> +
>> +		/*
>> +		 * We return 0 anyway because what the caller doesn't know can't
>> +		 * hurt them. They can call recv and it will retry the kick. If
>> +		 * that works, everything is fine.
>> +		 *
>> +		 * If the retry in recv fails too, they will get -EBUSY from
>> +		 * recv.
>> +		 */
>> +	}
>> +
>> +	/*
>> +	 * Hypervisor is now processing the TPM command asynchronously. It will
>> +	 * read the command from the output buffer and write the response into
>> +	 * the input buffer (which are the same buffer). When done, it will send
>> +	 * back the buffers over virtio and the driver's virtio callback will
>> +	 * complete dev->complete so that we know the response is ready to be
>> +	 * read.
>> +	 *
>> +	 * It is important to have copied data out of the caller's buffer into
>> +	 * the driver's virtqueue buffer because the caller is free to destroy
>> +	 * their buffer when this call returns. We can't avoid copying by
>> +	 * waiting here for the hypervisor to finish reading, because if that
>> +	 * wait times out, we return and the caller may destroy their buffer
>> +	 * while the hypervisor is continuing to read from it.
>> +	 */
>> +	dev->driver_has_buffer = false;
>> +	return 0;
>> +}
>> +
>> +static int vtpm_op_recv(struct tpm_chip *chip, u8 *caller_buf, size_t len)
>> +{
>> +	int ret;
>> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
>> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
>> +
>> +	dev_dbg(&chip->dev, __func__ "\n");
>> +
>> +	/*
>> +	 * Wait until the virtqueue buffer is owned by the driver.
>> +	 *
>> +	 * This will usually block while the hypervisor finishes processing the
>> +	 * most recent TPM command.
>> +	 */
>> +	ret = vtpm_wait_for_buffer(dev, "waiting for TPM response");
>> +	if (ret)
>> +		return ret;
>> +
>> +	dev_dbg(&chip->dev, "received %u bytes\n", dev->readable);
>> +
>> +	if (dev->readable > len) {
>> +		dev_notice(&chip->dev,
>> +				"TPM response is bigger than receiver's buffer: %u > %zu\n",
>> +				dev->readable, len);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Copy response over to the caller. */
>> +	memcpy(caller_buf, virtqueue_buf, dev->readable);
>> +
>> +	return dev->readable;
>> +}
>> +
>> +static void vtpm_op_cancel(struct tpm_chip *chip)
>> +{
>> +	/*
>> +	 * Cancel is not called in this driver's use of tpm-interface.c. It may
>> +	 * be triggered through tpm-sysfs but that can be implemented as needed.
>> +	 * Be aware that tpm-sysfs performs cancellation without holding the
>> +	 * tpm_mutex that protects our send and recv operations, so a future
>> +	 * implementation will need to consider thread safety of concurrent
>> +	 * send/recv and cancel.
>> +	 */
>> +	dev_notice(&chip->dev, "cancellation is not implemented\n");
>> +}
>> +
>> +static u8 vtpm_op_status(struct tpm_chip *chip)
>> +{
>> +	/*
>> +	 * Status is for TPM drivers that want tpm-interface.c to poll for
>> +	 * completion before calling recv. Usually this is when the hardware
>> +	 * needs to be polled i.e. there is no other way for recv to block on
>> +	 * the TPM command completion.
>> +	 *
>> +	 * Polling goes until `(status & complete_mask) == complete_val`. This
>> +	 * driver defines both complete_mask and complete_val as 0 and blocks on
>> +	 * our own completion object in recv instead.
>> +	 */
>> +	return 0;
>> +}
>> +
>> +static const struct tpm_class_ops vtpm_ops = {
>> +	.flags = TPM_OPS_AUTO_STARTUP,
>> +	.send = vtpm_op_send,
>> +	.recv = vtpm_op_recv,
>> +	.cancel = vtpm_op_cancel,
>> +	.status = vtpm_op_status,
>> +	.req_complete_mask = 0,
>> +	.req_complete_val = 0,
>> +};
>> +
>> +static void vtpm_virtio_complete(struct virtqueue *vq)
>> +{
>> +	struct virtio_device *vdev = vq->vdev;
>> +	struct vtpm_device *dev = vdev->priv;
>> +
>> +	complete(&dev->complete);
>> +}
>> +
>> +static int vtpm_probe(struct virtio_device *vdev)
>> +{
>> +	int err;
>> +	struct vtpm_device *dev;
>> +	struct virtqueue *vq;
>> +	struct tpm_chip *chip;
>> +
>> +	dev_dbg(&vdev->dev, __func__ "\n");
>> +
>> +	dev = kzalloc(sizeof(struct vtpm_device), GFP_KERNEL);
>> +	if (!dev) {
>> +		err = -ENOMEM;
>> +		dev_err(&vdev->dev, "failed kzalloc\n");
>> +		goto err_dev_alloc;
>> +	}
>> +	vdev->priv = dev;
>> +
>> +	vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
>> +	if (IS_ERR(vq)) {
>> +		err = PTR_ERR(vq);
>> +		dev_err(&vdev->dev, "failed virtio_find_single_vq\n");
>> +		goto err_virtio_find;
>> +	}
>> +	dev->vq = vq;
>> +
>> +	chip = tpm_chip_alloc(&vdev->dev, &vtpm_ops);
>> +	if (IS_ERR(chip)) {
>> +		err = PTR_ERR(chip);
>> +		dev_err(&vdev->dev, "failed tpm_chip_alloc\n");
>> +		goto err_chip_alloc;
>> +	}
>> +	dev_set_drvdata(&chip->dev, dev);
>> +	chip->flags |= TPM_CHIP_FLAG_TPM2;
>> +	dev->chip = chip;
>> +
>> +	init_completion(&dev->complete);
>> +	dev->driver_has_buffer = true;
>> +	dev->needs_kick = false;
>> +	dev->readable = 0;
>> +
>> +	/*
>> +	 * Required in order to enable vq use in probe function for auto
>> +	 * startup.
>> +	 */
>> +	virtio_device_ready(vdev);
>> +
>> +	err = tpm_chip_register(dev->chip);
>> +	if (err) {
>> +		dev_err(&vdev->dev, "failed tpm_chip_register\n");
>> +		goto err_chip_register;
>> +	}
>> +
>> +	return 0;
>> +
>> +err_chip_register:
>> +	put_device(&dev->chip->dev);
>> +err_chip_alloc:
>> +	vdev->config->del_vqs(vdev);
>> +err_virtio_find:
>> +	kfree(dev);
>> +err_dev_alloc:
>> +	return err;
>> +}
>> +
>> +static void vtpm_remove(struct virtio_device *vdev)
>> +{
>> +	struct vtpm_device *dev = vdev->priv;
>> +
>> +	/* Undo tpm_chip_register. */
>> +	tpm_chip_unregister(dev->chip);
>> +
>> +	/* Undo tpm_chip_alloc. */
>> +	put_device(&dev->chip->dev);
>> +
>> +	vdev->config->reset(vdev);
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	kfree(dev);
>> +}
>> +
>> +#define VIRTIO_ID_TPM 31
>> +
>> +static struct virtio_device_id id_table[] = {
>> +	{
>> +		.device = VIRTIO_ID_TPM,
>> +		.vendor = VIRTIO_DEV_ANY_ID,
>> +	},
>> +	{},
>> +};
> 
> Let's write
> 
> static struct virtio_device_id id_table[] = {
>         { VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
>         { 0 },
> };
> 
> for consistency with other virtio devices.

Acknowledged, will do.


> 
>> +
>> +static struct virtio_driver vtpm_driver = {
>> +	.driver.name = KBUILD_MODNAME,
>> +	.driver.owner = THIS_MODULE,
>> +	.id_table = id_table,
>> +	.probe = vtpm_probe,
>> +	.remove = vtpm_remove,
> 
> Freeze/restore?

Good call. We don't need this for Chrome OS but I agree it should be in the
driver. I will provide a basic freeze/restore in line with what I see in
virtio-rng.
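
Roughly what I have in mind, following the virtio-rng shape (untested
sketch; how to handle an in-flight command across suspend needs more
thought), plus .freeze/.restore assignments in vtpm_driver under
#ifdef CONFIG_PM_SLEEP:

	static int vtpm_freeze(struct virtio_device *vdev)
	{
		struct vtpm_device *dev = vdev->priv;

		/* Quiesce the device and give back the virtqueue. */
		vdev->config->reset(vdev);
		vdev->config->del_vqs(vdev);

		/* Any in-flight command is lost across suspend. */
		dev->driver_has_buffer = true;
		dev->needs_kick = false;
		dev->readable = 0;
		return 0;
	}

	static int vtpm_restore(struct virtio_device *vdev)
	{
		struct vtpm_device *dev = vdev->priv;
		struct virtqueue *vq;

		vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
		if (IS_ERR(vq))
			return PTR_ERR(vq);
		dev->vq = vq;

		reinit_completion(&dev->complete);
		virtio_device_ready(vdev);
		return 0;
	}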

Thanks!


> 
> 
>> +};
>> +
>> +module_virtio_driver(vtpm_driver);
>> +
>> +MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");
>> +MODULE_DESCRIPTION("Virtio vTPM Driver");
>> +MODULE_VERSION("1.0");
>> +MODULE_LICENSE("GPL");
>> -- 
>> 2.20.1
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:25           ` Michael S. Tsirkin
  (?)
@ 2019-02-22 21:50           ` Jarkko Sakkinen
  2019-02-22 22:24             ` David Tolnay
  -1 siblings, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 21:50 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 04:25:08PM -0500, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 09:33:05PM +0200, Jarkko Sakkinen wrote:
> > On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
> > > On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> > > > On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> > > > > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> > > > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> > > > > > kernel side of TPM over virtio.
> > > > > > 
> > > > > > Use case: TPM support is needed for performing trusted work from within
> > > > > > a virtual machine launched by Chrome OS.
> > > > > > 
> > > > > > Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> > > > > > implementation of the virtio TPM device can be found in these two source
> > > > > > files:
> > > > > > 
> > > > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> > > > > > - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> > > > > 
> > > > > These files/links do not make sense for kernel testing. Please remove
> > > > > them from the next version.
> > > > 
> > > > To clarify generally for a virtio device we want
> > > > - guest support
> > > > - device support
> > > > - spec
> > > > 
> > > > If the device is implemented in qemu and guest in linux kernel,
> > > > then there are lots of people familiar with these
> > > > programming environments, so sometimes we merge
> > > > guest and host code even if spec isn't written up at all.
> > > > 
> > > > If you don't want to do that there's a small number of people who can
> > > > properly review code, e.g. I don't think lots of people on this list are
> > > > familiar with crosvm.  One way to address this would be to build a QEMU
> > > > implementation. Another would be to write up a spec.  You can do both
> > > > too :)
> > > 
> > > I don't really understand your arguments.
> > 
> > ... and I read your response a total of three times and did not find any
> > causal reasoning of any sort in it.
> > 
> > /Jarkko
> 
> Thanks for spending the time reading my response.  What was included in
> it was a general suggestion for a virtio based driver to be acceptable
> in upstream Linux.
> 
> You pointed out that a pointer to a prototype implementation in Rust
> isn't relevant. However, FYI just posting guest code and asking for it
> to be merged alone won't work for a virtio driver either. I am merely
> trying to speed things up instead of having the contributor repost with
> a tweaked commit log just to immediately get another set of nacks.

I did not say anything about the relevance of any implementation. I mainly
tried to point out that pointing at random source files, written in a
language that is relatively alien to most reviewers here, does not really
make a case.

Things that would help to move this forward:

- Documentation of the stack: nothing spectacular, just how it works in
  practical terms.
- Sufficiently easy ways to test the code.
- An explanation in the commit message of why we want this.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:31     ` Jason Gunthorpe
@ 2019-02-22 21:59       ` Jarkko Sakkinen
  2019-02-22 22:07           ` Michael S. Tsirkin
  0 siblings, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 21:59 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Michael S. Tsirkin, James Bottomley, David Tolnay, Peter Huewe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 02:31:37PM -0700, Jason Gunthorpe wrote:
> On Fri, Feb 22, 2019 at 04:16:01PM -0500, Michael S. Tsirkin wrote:
> > On Fri, Feb 22, 2019 at 07:30:16AM -0800, James Bottomley wrote:
> > > On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > > > guest kernel side of TPM over virtio.
> > > 
> > > What's the use case for using this over the current non-virtio vTPM?
> > > I always thought virtio was about guest to host transport efficiency,
> > > but the physical TPM, being connected over a very slow bus, is about as
> > > inefficient as you can get in that regard, so why do we need to use
> > > virtio to drive the virtual one?
> > 
> > I can't say for sure about TPM.
> > 
> > But generally there are many reasons to do virtio rather than emulating
> > a hardware device.
> 
> We already have a xen 'virtioish' TPM driver, so I don't think there
> is a good reason to block a virtio driver if someone cares about
> it. There are enough good reasons to prefer virtio to other options,
> IMHO.
> 
> Provided it meets the general requirements for new virtio stuff you
> outlined.

Yeah, absolutely we can consider this.

For me it boils down to the testing and documentation part.

No plans to merge code that I'm unable to run...

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 15:30 ` James Bottomley
  2019-02-22 21:16     ` Michael S. Tsirkin
@ 2019-02-22 22:00   ` David Tolnay
  2019-02-22 22:18     ` James Bottomley
  1 sibling, 1 reply; 57+ messages in thread
From: David Tolnay @ 2019-02-22 22:00 UTC (permalink / raw)
  To: James Bottomley
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On 2/22/19 7:30 AM, James Bottomley wrote:
> On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
>> Add a config TCG_VIRTIO_VTPM which enables a driver providing the
>> guest kernel side of TPM over virtio.
> 
> What's the use case for using this over the current non-virtio vTPM?
> I always thought virtio was about guest to host transport efficiency,
> but the physical TPM, being connected over a very slow bus, is about as
> inefficient as you can get in that regard, so why do we need to use
> virtio to drive the virtual one?
> 
>> Use case: TPM support is needed for performing trusted work from
>> within a virtual machine launched by Chrome OS.
> 
> The current vTPM does this, what's the use case for your special one?

Thanks James, these are important questions and the intention certainly isn't to
have another driver that does the same thing with differences for no reason.

I see three existing vTPM drivers already in drivers/char/tpm.

- tpm_ibmvtpm, which is specific to PowerPC and implemented in terms of PowerPC
  hcalls.

- xen-tpmfront, which is specific to Xen.

- tpm_vtpm_proxy, which as I understand it is intended to enable userspace TPM.
  That is, if we are using this driver in a guest kernel, the TPM implementation
  also needs to reside in the guest kernel rather than in the hypervisor.

For our use case which is not PowerPC and is running in our own hypervisor with
the TPM needing to be provided by the hypervisor, none of the existing vTPM
drivers seemed to fit the bill.

Please let me know if I arrived at the wrong conclusion on this!

Thanks,
David

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:59       ` Jarkko Sakkinen
@ 2019-02-22 22:07           ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 22:07 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: Jason Gunthorpe, James Bottomley, David Tolnay, Peter Huewe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 11:59:23PM +0200, Jarkko Sakkinen wrote:
> On Fri, Feb 22, 2019 at 02:31:37PM -0700, Jason Gunthorpe wrote:
> > On Fri, Feb 22, 2019 at 04:16:01PM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Feb 22, 2019 at 07:30:16AM -0800, James Bottomley wrote:
> > > > On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > > > > guest kernel side of TPM over virtio.
> > > > 
> > > > What's the use case for using this over the current non-virtio vTPM?
> > > > I always thought virtio was about guest to host transport efficiency,
> > > > but the physical TPM, being connected over a very slow bus, is about as
> > > > inefficient as you can get in that regard, so why do we need to use
> > > > virtio to drive the virtual one?
> > > 
> > > I can't say for sure about TPM.
> > > 
> > > But generally there are many reasons to do virtio rather than emulating
> > > a hardware device.
> > 
> > We already have a xen 'virtioish' TPM driver, so I don't think there
> > is a good reason to block a virtio driver if someone cares about
> > it. There are enough good reasons to prefer virtio to other options,
> > IMHO.
> > 
> > Provided it meets the general requirements for new virtio stuff you
> > outlined.
> 
> Yeah, absolutely we can consider this.
> 
> For me it boils down to the testing and documentation part.
> 
> No plans to merge code that I'm unable to run...
> 
> /Jarkko

I do this sometimes. One can't require samples for all supported
hardware. If I can check that code matches spec, I might settle for
that.

-- 
MST

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:07           ` Michael S. Tsirkin
  (?)
@ 2019-02-22 22:14           ` Jarkko Sakkinen
  -1 siblings, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 22:14 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Gunthorpe, James Bottomley, David Tolnay, Peter Huewe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 05:07:19PM -0500, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 11:59:23PM +0200, Jarkko Sakkinen wrote:
> > On Fri, Feb 22, 2019 at 02:31:37PM -0700, Jason Gunthorpe wrote:
> > > On Fri, Feb 22, 2019 at 04:16:01PM -0500, Michael S. Tsirkin wrote:
> > > > On Fri, Feb 22, 2019 at 07:30:16AM -0800, James Bottomley wrote:
> > > > > On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > > > > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > > > > > guest kernel side of TPM over virtio.
> > > > > 
> > > > > What's the use case for using this over the current non-virtio vTPM?
> > > > > I always thought virtio was about guest to host transport efficiency,
> > > > > but the physical TPM, being connected over a very slow bus, is about as
> > > > > inefficient as you can get in that regard, so why do we need to use
> > > > > virtio to drive the virtual one?
> > > > 
> > > > I can't say for sure about TPM.
> > > > 
> > > > But generally there are many reasons to do virtio rather than emulating
> > > > a hardware device.
> > > 
> > > We already have a xen 'virtioish' TPM driver, so I don't think there
> > > is a good reason to block a virtio driver if someone cares about
> > > it. There are enough good reasons to prefer virtio to other options,
> > > IMHO.
> > > 
> > > Provided it meets the general requirements for new virtio stuff you
> > > outlined.
> > 
> > Yeah, absolutely we can consider this.
> > 
> > For me it boils down to the testing and documentation part.
> > 
> > No plans to merge code that I'm unable to run...
> > 
> > /Jarkko
> 
> I do this sometimes. One can't require samples for all supported
> hardware. If I can check that code matches spec, I might settle for
> that.

What kind of specialized hardware does this require?

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:00   ` David Tolnay
@ 2019-02-22 22:18     ` James Bottomley
  2019-02-23  0:45       ` David Tolnay
  0 siblings, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-22 22:18 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Fri, 2019-02-22 at 14:00 -0800, David Tolnay wrote:
> On 2/22/19 7:30 AM, James Bottomley wrote:
> > On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
> > > Add a config TCG_VIRTIO_VTPM which enables a driver providing the
> > > guest kernel side of TPM over virtio.
> > 
> > What's the use case for using this over the current non-virtio
> > vTPM? I always thought virtio was about guest to host transport
> > efficiency, but the physical TPM, being connected over a very slow
> > bus, is about as inefficient as you can get in that regard, so why
> > do we need to use virtio to drive the virtual one?
> > 
> > > Use case: TPM support is needed for performing trusted work from
> > > within a virtual machine launched by Chrome OS.
> > 
> > The current vTPM does this, what's the use case for your special
> > one?
> 
> Thanks James, these are important questions and the intention
> certainly isn't to have another driver that does the same thing with
> differences for no reason.
> 
> I see three existing vTPM drivers already in drivers/char/tpm.
> 
> - tpm_ibmvtpm, which is specific to PowerPC and implemented in terms
> of PowerPC hcalls.
> 
> - xen-tpmfront, which is specific to Xen.
> 
> - tpm_vtpm_proxy, which as I understand it is intended to enable
> userspace TPM.
>   That is, if we are using this driver in a guest kernel, the TPM
> implementation
>   also needs to reside in the guest kernel rather than in the
> hypervisor.
> 
> For our use case which is not PowerPC and is running in our own
> hypervisor with the TPM needing to be provided by the hypervisor,
> none of the existing vTPM drivers seemed to fit the bill.
> 
> Please let me know if I arrived at the wrong conclusion on this!

Actually, yes, your third statement is not wholly correct:  The in-
kernel vTPM proxy can certainly be used to emulate a TPM within a guest
for that guest to use without any support from the hypervisor. 
However, when you have the correct qemu (requires a recent one), the
vTPM emulator can run in the host (or hypervisor) and be passed through
to the guest.  The best description of how to do that seems to be this
blog entry:

https://s3hh.wordpress.com/2018/06/03/tpm-2-0-in-qemu/

So won't this mode of operation exactly work for you (obviously with
necessary modifications to the crosvm hypervisor)?

James


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:40   ` David Tolnay
@ 2019-02-22 22:24       ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 22:24 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 01:40:25PM -0800, David Tolnay wrote:
> On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
> > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> >> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> >> kernel side of TPM over virtio.
> >>
> >> Use case: TPM support is needed for performing trusted work from within
> >> a virtual machine launched by Chrome OS.
> >>
> >> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> >> implementation of the virtio TPM device can be found in these two source
> >> files:
> >>
> >> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> >> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> >>
> >> and is currently backed by the libtpm2 TPM simulator:
> >>
> >> - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
> >>
> >> Reviewed-on: https://chromium-review.googlesource.com/1387655
> >> Reviewed-by: Andrey Pronin <apronin@chromium.org>
> >> Tested-by: David Tolnay <dtolnay@gmail.com>
> >> Signed-off-by: David Tolnay <dtolnay@gmail.com>
> >> ---
> >> UNRESOLVED:
> >> The driver assumes virtio device number VIRTIO_ID_TPM=31. If there is
> >> interest in accepting a virtio TPM driver into the kernel, the Chrome OS
> >> team will coordinate with the OASIS virtio technical committee to secure
> >> an agreed upon device number and include it in a later patchset.
> > 
> > I am not a tpm expert but I don't see why not.
> > 
> > 
> >>  drivers/char/tpm/Kconfig      |   9 +
> >>  drivers/char/tpm/Makefile     |   1 +
> >>  drivers/char/tpm/tpm_virtio.c | 460 ++++++++++++++++++++++++++++++++++
> > 
> > Maintainer entry?
> 
> Good call, I will add one.
> 
> Could you let me know what workflow would work best for you? I can direct
> patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
> your way only once they have been reviewed and accepted there. Or I can list one
> or both of the linux-integrity@v.k.o and virtualization@l.l-f.o lists along with
> a list for Chrome OS reviewers.
> 
> Either way, we'll want eyes from people who know virtio and from people who know
> TPM on most changes.

Well if you are changing a host/guest interface then you also need to copy
virtio-dev. That one is subscriber only so that would
imply sending there after it's reviewed in chrome.
As an extra bonus reviewed code is hopefully higher quality
so less review work for me ;)
so the first option sounds like a better plan.

But I hope accepted above just means it goes on some branch,
such that we won't end up with production code that
is set in stone and needs to be maintained forever?

> 
> > 
> >>  3 files changed, 470 insertions(+)
> >>  create mode 100644 drivers/char/tpm/tpm_virtio.c
> >>
> >> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> >> index 536e55d3919f..8997060e248e 100644
> >> --- a/drivers/char/tpm/Kconfig
> >> +++ b/drivers/char/tpm/Kconfig
> >> @@ -164,6 +164,15 @@ config TCG_VTPM_PROXY
> >>  	  /dev/vtpmX and a server-side file descriptor on which the vTPM
> >>  	  can receive commands.
> >>  
> >> +config TCG_VIRTIO_VTPM
> >> +	tristate "Virtio vTPM"
> >> +	depends on TCG_TPM
> >> +	help
> >> +	  This driver provides the guest kernel side of TPM over Virtio. If
> >> +	  you are building Linux to run inside of a hypervisor that supports
> >> +	  TPM over Virtio, say Yes and the virtualized TPM will be
> >> +	  accessible from the guest.
> >> +
> >>  
> >>  source "drivers/char/tpm/st33zp24/Kconfig"
> >>  endif # TCG_TPM
> >> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> >> index a01c4cab902a..4f5d1071257a 100644
> >> --- a/drivers/char/tpm/Makefile
> >> +++ b/drivers/char/tpm/Makefile
> >> @@ -33,3 +33,4 @@ obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
> >>  obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
> >>  obj-$(CONFIG_TCG_CRB) += tpm_crb.o
> >>  obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
> >> +obj-$(CONFIG_TCG_VIRTIO_VTPM) += tpm_virtio.o
> >> diff --git a/drivers/char/tpm/tpm_virtio.c b/drivers/char/tpm/tpm_virtio.c
> >> new file mode 100644
> >> index 000000000000..f3239d983f18
> >> --- /dev/null
> >> +++ b/drivers/char/tpm/tpm_virtio.c
> >> @@ -0,0 +1,460 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +/*
> >> + * Copyright 2019 Google Inc.
> >> + *
> >> + * Author: David Tolnay <dtolnay@gmail.com>
> >> + *
> >> + * ---
> >> + *
> >> + * Device driver for TPM over virtio.
> >> + *
> >> + * This driver employs a single virtio queue to handle both send and recv. TPM
> >> + * commands are sent over virtio to the hypervisor during a TPM send operation
> >> + * and responses are received over the same queue during a recv operation.
> >> + *
> >> + * The driver contains a single buffer that is the only buffer we ever place on
> >> + * the virtio queue. Commands are copied from the caller's command buffer into
> >> + * the driver's buffer before handing off to virtio, and responses are received
> >> + * into the driver's buffer then copied into the caller's response buffer. This
> >> + * allows us to be resilient to timeouts. When a send or recv operation times
> >> + * out, the caller is free to destroy their buffer; we don't want the hypervisor
> >> + * continuing to perform reads or writes against that destroyed buffer.
> >> + *
> >> + * This driver does not support concurrent send and recv operations. Mutual
> >> + * exclusion is upheld by the tpm_mutex lock held in tpm-interface.c around the
> >> + * calls to chip->ops->send and chip->ops->recv.
> >> + *
> >> + * The intended hypervisor-side implementation is as follows.
> >> + *
> >> + *     while true:
> >> + *         await next virtio buffer.
> >> + *         expect first descriptor in chain to be guest-to-host.
> >> + *         read tpm command from that buffer.
> >> + *         synchronously perform TPM work determined by the command.
> >> + *         expect second descriptor in chain to be host-to-guest.
> >> + *         write TPM response into that buffer.
> >> + *         place buffer on virtio used queue indicating how many bytes written.
> > 
> > That's fine I think except generally it should be legal for guest
> > to split each buffer to several segments.
> 
> Acknowledged. I will adjust this comment.
> 
> To clarify, this means the hypervisor will need to accept a single descriptor
> chain consisting of one or more guest-to-host descriptors which together form
> the command, followed by one or more host-to-guest descriptors into which the
> response may be written. Is that what you had in mind?

Exactly.

> TPM commands and responses are both capped at a moderate size (4096 bytes). With
> that in mind, is it still necessary to allow split buffers? We won't be dealing
> with multi-megabyte transfers.

This has been a general virtio rule, yes. See for example "2.6.4 Message
Framing" in v1.1 draft.

It's generally not hard to implement and generally easier than arguing
about whether it's necessary. If you think it's a big problem, it's a good
idea to take it up with the virtio tc. Address this to virtio-comment
list.
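
Just to illustrate what the device has to be prepared for: nothing stops a
guest from handing over the same command as two out segments plus one in
segment. A minimal variation of your send path (arbitrary 10 byte split,
assuming len > 10):

        struct scatterlist sg_out[2], sg_in;
        struct scatterlist *sgs[3] = { &sg_out[0], &sg_out[1], &sg_in };

        sg_init_one(&sg_out[0], virtqueue_buf, 10);
        sg_init_one(&sg_out[1], virtqueue_buf + 10, len - 10);
        sg_init_one(&sg_in, virtqueue_buf, TPM_BUFSIZE);

        ret = virtqueue_add_sgs(dev->vq, sgs, 2, 1, virtqueue_buf, GFP_KERNEL);

The driver itself does not need to do this; the point is only that the
framing of the command is independent of how it is split across
descriptors, so the device side must not assume one segment per direction.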

> 
> >> + */
> >> +
> >> +#include <linux/virtio_config.h>
> >> +
> >> +#include "tpm.h"
> >> +
> >> +/*
> >> + * Timeout duration when waiting on the hypervisor to complete its end of the
> >> + * TPM operation. This timeout is relatively high because certain TPM operations
> >> + * can take dozens of seconds.
> >> + */
> >> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
> > 
> > Should we read this from device? E.g. a busy hypervisor might
> > take longer to respond ...
> 
> That's true. Do you have an example of an existing virtio driver that reads
> timeout from the hypervisor?
> 
> To understand your perspective, do you see one of the following as being the
> bigger advantage? -- either that we can use a timeout tighter than 120s for
> hypervisors that believe they can respond quickly to any command, or that we can
> relax the timeout beyond 120s for an especially overburdened hypervisor. The
> first is nice because the caller in the guest will receive their error code
> sooner in certain failure modes, but maybe that would be better addressed by
> some other way of poking the host to find out whether it is still alive and
> working. For the second, 120s is pretty generous and in our use case we would
> just want to give up and retry much later at that point; I don't know how much
> further it would be reasonable to grow a timeout.

I think the whole idea around timeout handling needs a bit more
thought. What kind of reasons for the timeout do you envision
that require the extra kicks?
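
If you do decide to read the timeout from the device, the plumbing is
small. Something along these lines, where the feature bit and config
layout are invented for illustration and would have to go through the
virtio TC first:

/* Hypothetical, not in any spec: */
#define VIRTIO_TPM_F_TIMEOUT 0

struct virtio_tpm_config {
        __u32 timeout_sec;
};

static unsigned long vtpm_timeout_jiffies(struct virtio_device *vdev)
{
        u32 timeout_sec = 120; /* today's hard-coded default */

        if (virtio_has_feature(vdev, VIRTIO_TPM_F_TIMEOUT))
                virtio_cread(vdev, struct virtio_tpm_config,
                             timeout_sec, &timeout_sec);

        return timeout_sec * HZ;
}

(plus the usual entry in the driver's feature table so the bit can be
negotiated).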


> 
> >> +
> >> +struct vtpm_device {
> >> +	/*
> >> +	 * Data structure for integration with the common code of the TPM driver
> >> +	 * in tpm-chip.c.
> >> +	 */
> >> +	struct tpm_chip *chip;
> >> +
> >> +	/*
> >> +	 * Virtio queue for sending TPM commands out of the virtual machine and
> >> +	 * receiving TPM responses back from the hypervisor.
> >> +	 */
> >> +	struct virtqueue *vq;
> >> +
> >> +	/*
> >> +	 * Completion that is notified when a virtio operation has been
> >> +	 * fulfilled by the hypervisor.
> >> +	 */
> >> +	struct completion complete;
> >> +
> >> +	/*
> >> +	 * Whether driver currently holds ownership of the virtqueue buffer.
> >> +	 * When false, the hypervisor is in the process of reading or writing
> >> +	 * the buffer and the driver must not touch it.
> >> +	 */
> >> +	bool driver_has_buffer;
> >> +
> >> +	/*
> >> +	 * Whether during the most recent TPM operation, a virtqueue_kick failed
> >> +	 * or a wait timed out.
> >> +	 *
> >> +	 * The next send or recv operation will attempt a kick upon seeing this
> >> +	 * status. That should clear up the queue in the case that the
> >> +	 * hypervisor became temporarily nonresponsive, such as by resource
> >> +	 * exhaustion on the host. The extra kick enables recovery from kicks
> >> +	 * going unnoticed by the hypervisor as well as recovery from virtio
> >> +	 * callbacks going unnoticed by the guest kernel.
> > 
> > Well not necessarily. E.g. virtqueue_kick does not
> > kick if hypervisor didn't enable notifications
> > (e.g. with event idx they get disabled automatically).
> > So it won't recover if hypervisor can discard
> > kicks.
> > 
> > I think this comment needs clarification.
> 
> Makes sense. I didn't think to consider "hypervisor discards kicks" as a failure
> mode. :) Would virtqueue_kick return false in that case? If so, needs_kick will
> stay true and the kick will keep being retried on subsequent send/recv
> operations until the hypervisor has enabled notifications again.
> 
> I will clarify the comment to touch on that situation.

IIRC virtqueue_kick returns false when device is broken.

If you want to handle that you generally need to reset
the device. Kicking it won't fix it :)
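
At a minimum the driver could notice a broken device and stop scheduling
kick retries. A short sketch for the top of vtpm_wait_for_buffer():

        if (virtqueue_is_broken(dev->vq)) {
                dev_err(&chip->dev, "virtqueue is broken\n");
                return -EIO; /* retrying the kick will never help */
        }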

> 
> >> +	 */
> >> +	bool needs_kick;
> >> +
> >> +	/* Number of bytes available to read from the virtqueue buffer. */
> >> +	unsigned int readable;
> >> +
> >> +	/*
> >> +	 * Buffer in which all virtio transfers take place. Buffer size is the
> >> +	 * maximum legal TPM command or response message size.
> >> +	 */
> >> +	u8 virtqueue_buffer[TPM_BUFSIZE];
> >> +};
> >> +
> >> +/*
> >> + * Wait for ownership of the virtqueue buffer.
> >> + *
> >> + * The why-string should begin with "waiting to..." or "waiting for..." with no
> >> + * trailing newline. It will appear in log output.
> >> + *
> >> + * Returns zero for success, otherwise negative error.
> >> + */
> >> +static int vtpm_wait_for_buffer(struct vtpm_device *dev, const char *why)
> >> +{
> >> +	int ret;
> >> +	bool did_kick;
> >> +	struct tpm_chip *chip = dev->chip;
> >> +	unsigned long deadline = jiffies + TPM_VIRTIO_TIMEOUT;
> >> +
> >> +	/* Kick queue if needed. */
> >> +	if (dev->needs_kick) {
> >> +		did_kick = virtqueue_kick(dev->vq);
> >> +		if (!did_kick) {
> >> +			dev_notice(&chip->dev, "kick failed; will retry\n");
> >> +			return -EBUSY;
> >> +		}
> >> +		dev->needs_kick = false;
> >> +	}
> >> +
> >> +	while (!dev->driver_has_buffer) {
> >> +		unsigned long now = jiffies;
> >> +
> >> +		/* Check timeout, otherwise `deadline - now` may underflow. */
> >> +		if time_after_eq(now, deadline) {
> >> +			dev_warn(&chip->dev, "timed out %s\n", why);
> >> +			dev->needs_kick = true;
> >> +			return -ETIMEDOUT;
> >> +		}
> >> +
> >> +		/*
> >> +		 * Wait to be signaled by virtio callback.
> >> +		 *
> >> +		 * Positive ret is jiffies remaining until timeout when the
> >> +		 * completion occurred, which means successful completion. Zero
> >> +		 * ret is timeout. Negative ret is error.
> >> +		 */
> >> +		ret = wait_for_completion_killable_timeout(
> >> +				&dev->complete, deadline - now);
> >> +
> >> +		/* Log if completion did not occur. */
> >> +		if (ret == -ERESTARTSYS) {
> >> +			/* Not a warning if it was simply interrupted. */
> >> +			dev_dbg(&chip->dev, "interrupted %s\n", why);
> >> +		} else if (ret == 0) {
> >> +			dev_warn(&chip->dev, "timed out %s\n", why);
> >> +			ret = -ETIMEDOUT;
> > 
> > Should we check NEEDS_RESET bit and try to reset the device?
> > Or is that too drastic?
> 
> I'll let you make the call on whether we need a reset implemented. This driver
> was initially based on the simple virtio-rng driver which doesn't reset (but nor
> does it detect timeout). Let me know and I can give it a shot.

I think yes - generally NEEDS_RESET is the only mechanism we have for recovering from
device errors. And yes no one implemented it yet :)
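
Detection is the easy half; recovery is the hard part since it amounts to
re-running most of probe. A rough sketch for the timeout path, assuming
struct vtpm_device grows a back-pointer to the virtio_device (it does not
have one today):

        struct virtio_device *vdev = dev->vdev; /* assumed new field */

        if (vdev->config->get_status(vdev) & VIRTIO_CONFIG_S_NEEDS_RESET) {
                dev_err(&chip->dev, "device needs reset\n");
                /*
                 * Real recovery would reset the device, re-create the vq
                 * and requeue the buffer: essentially vtpm_probe() minus
                 * the chip registration.
                 */
                return -EIO;
        }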



> 
> > 
> >> +		} else if (ret < 0) {
> >> +			dev_warn(&chip->dev, "failed while %s: error %d\n",
> >> +					why, -ret);
> >> +		}
> >> +
> >> +		/*
> >> +		 * Return error if completion did not occur. Schedule kick to be
> >> +		 * retried at the start of the next send/recv to help unblock
> >> +		 * the queue.
> >> +		 */
> >> +		if (ret < 0) {
> >> +			dev->needs_kick = true;
> >> +			return ret;
> >> +		}
> >> +
> >> +		/* Completion occurred. Expect response buffer back. */
> >> +		if (virtqueue_get_buf(dev->vq, &dev->readable)) {
> >> +			dev->driver_has_buffer = true;
> >> +
> >> +			if (dev->readable > TPM_BUFSIZE) {
> >> +				dev_crit(&chip->dev,
> >> +						"hypervisor bug: response exceeds max size, %u > %u\n",
> >> +						dev->readable,
> >> +						(unsigned int) TPM_BUFSIZE);
> >> +				dev->readable = TPM_BUFSIZE;
> >> +				return -EPROTO;
> >> +			}
> >> +		}
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int vtpm_op_send(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> >> +{
> >> +	int ret;
> >> +	bool did_kick;
> >> +	struct scatterlist sg_outbuf, sg_inbuf;
> >> +	struct scatterlist *sgs[2] = { &sg_outbuf, &sg_inbuf };
> >> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> >> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> >> +
> >> +	dev_dbg(&chip->dev, __func__ " %zu bytes\n", len);
> >> +
> >> +	if (len > TPM_BUFSIZE) {
> >> +		dev_err(&chip->dev,
> >> +				"command is too long, %zu > %zu\n",
> >> +				len, (size_t) TPM_BUFSIZE);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Wait until hypervisor relinquishes ownership of the virtqueue buffer.
> >> +	 *
> >> +	 * This may block if the previous recv operation timed out in the guest
> >> +	 * kernel but is still being processed by the hypervisor. Also may block
> >> +	 * if send operations are performed back-to-back, such as if something
> >> +	 * in the caller failed in between a send and recv.
> >> +	 *
> >> +	 * During normal operation absent of any errors or timeouts, this does
> >> +	 * not block.
> >> +	 */
> >> +	ret = vtpm_wait_for_buffer(dev, "waiting to begin send");
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	/* Driver owns virtqueue buffer and may now write into it. */
> >> +	memcpy(virtqueue_buf, caller_buf, len);
> >> +
> >> +	/*
> >> +	 * Enqueue the virtqueue buffer once as outgoing virtio data (written by
> >> +	 * the virtual machine and read by the hypervisor) and again as incoming
> >> +	 * data (written by the hypervisor and read by the virtual machine).
> >> +	 * This step moves ownership of the virtqueue buffer from driver to
> >> +	 * hypervisor.
> >> +	 *
> >> +	 * Note that we don't know here how big of a buffer the caller will use
> >> +	 * with their later call to recv. We allow the hypervisor to write up to
> >> +	 * the TPM max message size. If the caller ends up using a smaller
> >> +	 * buffer with recv that is too small to hold the entire response, the
> >> +	 * recv will return an error. This conceivably breaks TPM
> >> +	 * implementations that want to produce a different verbosity of
> >> +	 * response depending on the receiver's buffer size.
> >> +	 */
> >> +	sg_init_one(&sg_outbuf, virtqueue_buf, len);
> >> +	sg_init_one(&sg_inbuf, virtqueue_buf, TPM_BUFSIZE);
> >> +	ret = virtqueue_add_sgs(dev->vq, sgs, 1, 1, virtqueue_buf, GFP_KERNEL);
> >> +	if (ret) {
> >> +		dev_err(&chip->dev, "failed virtqueue_add_sgs\n");
> >> +		return ret;
> >> +	}
> >> +
> >> +	/* Kick the other end of the virtqueue after having added a buffer. */
> >> +	did_kick = virtqueue_kick(dev->vq);
> >> +	if (!did_kick) {
> >> +		dev->needs_kick = true;
> >> +		dev_notice(&chip->dev, "kick failed; will retry\n");
> >> +
> >> +		/*
> >> +		 * We return 0 anyway because what the caller doesn't know can't
> >> +		 * hurt them. They can call recv and it will retry the kick. If
> >> +		 * that works, everything is fine.
> >> +		 *
> >> +		 * If the retry in recv fails too, they will get -EBUSY from
> >> +		 * recv.
> >> +		 */
> >> +	}
> >> +
> >> +	/*
> >> +	 * Hypervisor is now processing the TPM command asynchronously. It will
> >> +	 * read the command from the output buffer and write the response into
> >> +	 * the input buffer (which are the same buffer). When done, it will send
> >> +	 * back the buffers over virtio and the driver's virtio callback will
> >> +	 * complete dev->complete so that we know the response is ready to be
> >> +	 * read.
> >> +	 *
> >> +	 * It is important to have copied data out of the caller's buffer into
> >> +	 * the driver's virtqueue buffer because the caller is free to destroy
> >> +	 * their buffer when this call returns. We can't avoid copying by
> >> +	 * waiting here for the hypervisor to finish reading, because if that
> >> +	 * wait times out, we return and the caller may destroy their buffer
> >> +	 * while the hypervisor is continuing to read from it.
> >> +	 */
> >> +	dev->driver_has_buffer = false;
> >> +	return 0;
> >> +}
> >> +
> >> +static int vtpm_op_recv(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> >> +{
> >> +	int ret;
> >> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> >> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> >> +
> >> +	dev_dbg(&chip->dev, __func__ "\n");
> >> +
> >> +	/*
> >> +	 * Wait until the virtqueue buffer is owned by the driver.
> >> +	 *
> >> +	 * This will usually block while the hypervisor finishes processing the
> >> +	 * most recent TPM command.
> >> +	 */
> >> +	ret = vtpm_wait_for_buffer(dev, "waiting for TPM response");
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	dev_dbg(&chip->dev, "received %u bytes\n", dev->readable);
> >> +
> >> +	if (dev->readable > len) {
> >> +		dev_notice(&chip->dev,
> >> +				"TPM response is bigger than receiver's buffer: %u > %zu\n",
> >> +				dev->readable, len);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/* Copy response over to the caller. */
> >> +	memcpy(caller_buf, virtqueue_buf, dev->readable);
> >> +
> >> +	return dev->readable;
> >> +}
> >> +
> >> +static void vtpm_op_cancel(struct tpm_chip *chip)
> >> +{
> >> +	/*
> >> +	 * Cancel is not called in this driver's use of tpm-interface.c. It may
> >> +	 * be triggered through tpm-sysfs but that can be implemented as needed.
> >> +	 * Be aware that tpm-sysfs performs cancellation without holding the
> >> +	 * tpm_mutex that protects our send and recv operations, so a future
> >> +	 * implementation will need to consider thread safety of concurrent
> >> +	 * send/recv and cancel.
> >> +	 */
> >> +	dev_notice(&chip->dev, "cancellation is not implemented\n");
> >> +}
> >> +
> >> +static u8 vtpm_op_status(struct tpm_chip *chip)
> >> +{
> >> +	/*
> >> +	 * Status is for TPM drivers that want tpm-interface.c to poll for
> >> +	 * completion before calling recv. Usually this is when the hardware
> >> +	 * needs to be polled i.e. there is no other way for recv to block on
> >> +	 * the TPM command completion.
> >> +	 *
> >> +	 * Polling goes until `(status & complete_mask) == complete_val`. This
> >> +	 * driver defines both complete_mask and complete_val as 0 and blocks on
> >> +	 * our own completion object in recv instead.
> >> +	 */
> >> +	return 0;
> >> +}
> >> +
> >> +static const struct tpm_class_ops vtpm_ops = {
> >> +	.flags = TPM_OPS_AUTO_STARTUP,
> >> +	.send = vtpm_op_send,
> >> +	.recv = vtpm_op_recv,
> >> +	.cancel = vtpm_op_cancel,
> >> +	.status = vtpm_op_status,
> >> +	.req_complete_mask = 0,
> >> +	.req_complete_val = 0,
> >> +};
> >> +
> >> +static void vtpm_virtio_complete(struct virtqueue *vq)
> >> +{
> >> +	struct virtio_device *vdev = vq->vdev;
> >> +	struct vtpm_device *dev = vdev->priv;
> >> +
> >> +	complete(&dev->complete);
> >> +}
> >> +
> >> +static int vtpm_probe(struct virtio_device *vdev)
> >> +{
> >> +	int err;
> >> +	struct vtpm_device *dev;
> >> +	struct virtqueue *vq;
> >> +	struct tpm_chip *chip;
> >> +
> >> +	dev_dbg(&vdev->dev, __func__ "\n");
> >> +
> >> +	dev = kzalloc(sizeof(struct vtpm_device), GFP_KERNEL);
> >> +	if (!dev) {
> >> +		err = -ENOMEM;
> >> +		dev_err(&vdev->dev, "failed kzalloc\n");
> >> +		goto err_dev_alloc;
> >> +	}
> >> +	vdev->priv = dev;
> >> +
> >> +	vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
> >> +	if (IS_ERR(vq)) {
> >> +		err = PTR_ERR(vq);
> >> +		dev_err(&vdev->dev, "failed virtio_find_single_vq\n");
> >> +		goto err_virtio_find;
> >> +	}
> >> +	dev->vq = vq;
> >> +
> >> +	chip = tpm_chip_alloc(&vdev->dev, &vtpm_ops);
> >> +	if (IS_ERR(chip)) {
> >> +		err = PTR_ERR(chip);
> >> +		dev_err(&vdev->dev, "failed tpm_chip_alloc\n");
> >> +		goto err_chip_alloc;
> >> +	}
> >> +	dev_set_drvdata(&chip->dev, dev);
> >> +	chip->flags |= TPM_CHIP_FLAG_TPM2;
> >> +	dev->chip = chip;
> >> +
> >> +	init_completion(&dev->complete);
> >> +	dev->driver_has_buffer = true;
> >> +	dev->needs_kick = false;
> >> +	dev->readable = 0;
> >> +
> >> +	/*
> >> +	 * Required in order to enable vq use in probe function for auto
> >> +	 * startup.
> >> +	 */
> >> +	virtio_device_ready(vdev);
> >> +
> >> +	err = tpm_chip_register(dev->chip);
> >> +	if (err) {
> >> +		dev_err(&vdev->dev, "failed tpm_chip_register\n");
> >> +		goto err_chip_register;
> >> +	}
> >> +
> >> +	return 0;
> >> +
> >> +err_chip_register:
> >> +	put_device(&dev->chip->dev);
> >> +err_chip_alloc:
> >> +	vdev->config->del_vqs(vdev);
> >> +err_virtio_find:
> >> +	kfree(dev);
> >> +err_dev_alloc:
> >> +	return err;
> >> +}
> >> +
> >> +static void vtpm_remove(struct virtio_device *vdev)
> >> +{
> >> +	struct vtpm_device *dev = vdev->priv;
> >> +
> >> +	/* Undo tpm_chip_register. */
> >> +	tpm_chip_unregister(dev->chip);
> >> +
> >> +	/* Undo tpm_chip_alloc. */
> >> +	put_device(&dev->chip->dev);
> >> +
> >> +	vdev->config->reset(vdev);
> >> +	vdev->config->del_vqs(vdev);
> >> +
> >> +	kfree(dev);
> >> +}
> >> +
> >> +#define VIRTIO_ID_TPM 31
> >> +
> >> +static struct virtio_device_id id_table[] = {
> >> +	{
> >> +		.device = VIRTIO_ID_TPM,
> >> +		.vendor = VIRTIO_DEV_ANY_ID,
> >> +	},
> >> +	{},
> >> +};
> > 
> > Let's write
> > 
> > static struct virtio_device_id id_table[] = {
> >         { VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
> >         { 0 },
> > };
> > 
> > for consistency with other virtio devices.
> 
> Acknowledged, will do.
> 
> 
> > 
> >> +
> >> +static struct virtio_driver vtpm_driver = {
> >> +	.driver.name = KBUILD_MODNAME,
> >> +	.driver.owner = THIS_MODULE,
> >> +	.id_table = id_table,
> >> +	.probe = vtpm_probe,
> >> +	.remove = vtpm_remove,
> > 
> > Freeze/restore?
> 
> Good call. We don't need this for Chrome OS but I agree it should be in the
> driver. I will provide a basic freeze/restore in line with what I see in
> virtio-rng.
> 
> Thanks!
> 
> 
> > 
> > 
> >> +};
> >> +
> >> +module_virtio_driver(vtpm_driver);
> >> +
> >> +MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");
> >> +MODULE_DESCRIPTION("Virtio vTPM Driver");
> >> +MODULE_VERSION("1.0");
> >> +MODULE_LICENSE("GPL");
> >> -- 
> >> 2.20.1
> > 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
@ 2019-02-22 22:24       ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 22:24 UTC (permalink / raw)
  To: David Tolnay
  Cc: dgreid, Jarkko Sakkinen, virtualization, Jason Gunthorpe,
	Peter Huewe, apronin, linux-integrity

On Fri, Feb 22, 2019 at 01:40:25PM -0800, David Tolnay wrote:
> On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
> > On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> >> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> >> kernel side of TPM over virtio.
> >>
> >> Use case: TPM support is needed for performing trusted work from within
> >> a virtual machine launched by Chrome OS.
> >>
> >> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> >> implementation of the virtio TPM device can be found in these two source
> >> files:
> >>
> >> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> >> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> >>
> >> and is currently backed by the libtpm2 TPM simulator:
> >>
> >> - https://chromium.googlesource.com/chromiumos/third_party/tpm2/
> >>
> >> Reviewed-on: https://chromium-review.googlesource.com/1387655
> >> Reviewed-by: Andrey Pronin <apronin@chromium.org>
> >> Tested-by: David Tolnay <dtolnay@gmail.com>
> >> Signed-off-by: David Tolnay <dtolnay@gmail.com>
> >> ---
> >> UNRESOLVED:
> >> The driver assumes virtio device number VIRTIO_ID_TPM=31. If there is
> >> interest in accepting a virtio TPM driver into the kernel, the Chrome OS
> >> team will coordinate with the OASIS virtio technical committee to secure
> >> an agreed upon device number and include it in a later patchset.
> > 
> > I am not a tpm expert but I don't see why not.
> > 
> > 
> >>  drivers/char/tpm/Kconfig      |   9 +
> >>  drivers/char/tpm/Makefile     |   1 +
> >>  drivers/char/tpm/tpm_virtio.c | 460 ++++++++++++++++++++++++++++++++++
> > 
> > Maintainer entry?
> 
> Good call, I will add one.
> 
> Could you let me know what workflow would work best for you? I can direct
> patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
> your way only once they have been reviewed and accepted there. Or I can list one
> or both of the linux-integrity@v.k.o and virtualization@l.l-f.o lists along with
> a list for Chrome OS reviewers.
> 
> Either way, we'll want eyes from people who know virtio and from people who know
> TPM on most changes.

Well if you are changing a host/guest interface then you also need to copy
virtio-dev. That one is subscriber only so that would
imply sending there after it's reviewed in chrome.
As an extra bonus reviewed code is hopefully higher quality
so less review work for me ;)
so the first option sounds like a better plan.

But I hope accepted above just means it goes on some branch,
such that we won't end up with production code that
is set in stone and needs to be maintained forever?

> 
> > 
> >>  3 files changed, 470 insertions(+)
> >>  create mode 100644 drivers/char/tpm/tpm_virtio.c
> >>
> >> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> >> index 536e55d3919f..8997060e248e 100644
> >> --- a/drivers/char/tpm/Kconfig
> >> +++ b/drivers/char/tpm/Kconfig
> >> @@ -164,6 +164,15 @@ config TCG_VTPM_PROXY
> >>  	  /dev/vtpmX and a server-side file descriptor on which the vTPM
> >>  	  can receive commands.
> >>  
> >> +config TCG_VIRTIO_VTPM
> >> +	tristate "Virtio vTPM"
> >> +	depends on TCG_TPM
> >> +	help
> >> +	  This driver provides the guest kernel side of TPM over Virtio. If
> >> +	  you are building Linux to run inside of a hypervisor that supports
> >> +	  TPM over Virtio, say Yes and the virtualized TPM will be
> >> +	  accessible from the guest.
> >> +
> >>  
> >>  source "drivers/char/tpm/st33zp24/Kconfig"
> >>  endif # TCG_TPM
> >> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> >> index a01c4cab902a..4f5d1071257a 100644
> >> --- a/drivers/char/tpm/Makefile
> >> +++ b/drivers/char/tpm/Makefile
> >> @@ -33,3 +33,4 @@ obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
> >>  obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
> >>  obj-$(CONFIG_TCG_CRB) += tpm_crb.o
> >>  obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
> >> +obj-$(CONFIG_TCG_VIRTIO_VTPM) += tpm_virtio.o
> >> diff --git a/drivers/char/tpm/tpm_virtio.c b/drivers/char/tpm/tpm_virtio.c
> >> new file mode 100644
> >> index 000000000000..f3239d983f18
> >> --- /dev/null
> >> +++ b/drivers/char/tpm/tpm_virtio.c
> >> @@ -0,0 +1,460 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +/*
> >> + * Copyright 2019 Google Inc.
> >> + *
> >> + * Author: David Tolnay <dtolnay@gmail.com>
> >> + *
> >> + * ---
> >> + *
> >> + * Device driver for TPM over virtio.
> >> + *
> >> + * This driver employs a single virtio queue to handle both send and recv. TPM
> >> + * commands are sent over virtio to the hypervisor during a TPM send operation
> >> + * and responses are received over the same queue during a recv operation.
> >> + *
> >> + * The driver contains a single buffer that is the only buffer we ever place on
> >> + * the virtio queue. Commands are copied from the caller's command buffer into
> >> + * the driver's buffer before handing off to virtio, and responses are received
> >> + * into the driver's buffer then copied into the caller's response buffer. This
> >> + * allows us to be resilient to timeouts. When a send or recv operation times
> >> + * out, the caller is free to destroy their buffer; we don't want the hypervisor
> >> + * continuing to perform reads or writes against that destroyed buffer.
> >> + *
> >> + * This driver does not support concurrent send and recv operations. Mutual
> >> + * exclusion is upheld by the tpm_mutex lock held in tpm-interface.c around the
> >> + * calls to chip->ops->send and chip->ops->recv.
> >> + *
> >> + * The intended hypervisor-side implementation is as follows.
> >> + *
> >> + *     while true:
> >> + *         await next virtio buffer.
> >> + *         expect first descriptor in chain to be guest-to-host.
> >> + *         read tpm command from that buffer.
> >> + *         synchronously perform TPM work determined by the command.
> >> + *         expect second descriptor in chain to be host-to-guest.
> >> + *         write TPM response into that buffer.
> >> + *         place buffer on virtio used queue indicating how many bytes written.
> > 
> > That's fine I think except generally it should be legal for guest
> > to split each buffer to several segments.
> 
> Acknowledged. I will adjust this comment.
> 
> To clarify, this means the hypervisor will need to accept a single descriptor
> chain consisting of one or more guest-to-host descriptors which together form
> the command, followed by one or more host-to-guest descriptors into which the
> response may be written. Is that what you had in mind?

Exactly.

> TPM commands and responses are both capped at a moderate size (4096 bytes). With
> that in mind, is it still necessary to allow split buffers? We won't be dealing
> with multi-megabyte transfers.

This has been a general virtio rule, yes. See for example "2.6.4 Message
Framing" in v1.1 draft.

It's generally not hard to implement and generally easier than arguing
about whether it's necessary. If you think it's a big problem, it's a good
idea to take it up with the virtio tc. Address this to virtio-comment
list.

> 
> >> + */
> >> +
> >> +#include <linux/virtio_config.h>
> >> +
> >> +#include "tpm.h"
> >> +
> >> +/*
> >> + * Timeout duration when waiting on the hypervisor to complete its end of the
> >> + * TPM operation. This timeout is relatively high because certain TPM operations
> >> + * can take dozens of seconds.
> >> + */
> >> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
> > 
> > Should we read this from device? E.g. a busy hypervisor might
> > take longer to respond ...
> 
> That's true. Do you have an example of an existing virtio driver that reads
> timeout from the hypervisor?
> 
> To understand your perspective, do you see one of the following as being the
> bigger advantage? -- either that we can use a timeout tighter than 120s for
> hypervisors that believe they can respond quickly to any command, or that we can
> relax the timeout beyond 120s for an especially overburdened hypervisor. The
> first is nice because the caller in the guest will receive their error code
> sooner in certain failure modes, but maybe that would be better addressed by
> some other way of poking the host to find out whether it is still alive and
> working. For the second, 120s is pretty generous and in our use case we would
> just want to give up and retry much later at that point; I don't know how much
> further it would be reasonable to grow a timeout.

I think the whole idea around timeout handling needs a bit more
thought. What kind of reasons for the timeout do you envision
that require the extra kicks?


> 
> >> +
> >> +struct vtpm_device {
> >> +	/*
> >> +	 * Data structure for integration with the common code of the TPM driver
> >> +	 * in tpm-chip.c.
> >> +	 */
> >> +	struct tpm_chip *chip;
> >> +
> >> +	/*
> >> +	 * Virtio queue for sending TPM commands out of the virtual machine and
> >> +	 * receiving TPM responses back from the hypervisor.
> >> +	 */
> >> +	struct virtqueue *vq;
> >> +
> >> +	/*
> >> +	 * Completion that is notified when a virtio operation has been
> >> +	 * fulfilled by the hypervisor.
> >> +	 */
> >> +	struct completion complete;
> >> +
> >> +	/*
> >> +	 * Whether driver currently holds ownership of the virtqueue buffer.
> >> +	 * When false, the hypervisor is in the process of reading or writing
> >> +	 * the buffer and the driver must not touch it.
> >> +	 */
> >> +	bool driver_has_buffer;
> >> +
> >> +	/*
> >> +	 * Whether during the most recent TPM operation, a virtqueue_kick failed
> >> +	 * or a wait timed out.
> >> +	 *
> >> +	 * The next send or recv operation will attempt a kick upon seeing this
> >> +	 * status. That should clear up the queue in the case that the
> >> +	 * hypervisor became temporarily nonresponsive, such as by resource
> >> +	 * exhaustion on the host. The extra kick enables recovery from kicks
> >> +	 * going unnoticed by the hypervisor as well as recovery from virtio
> >> +	 * callbacks going unnoticed by the guest kernel.
> > 
> > Well not necessarily. E.g. virtqueue_kick does not
> > kick if hypervisor didn't enable notifications
> > (e.g. with event idx they get disabled automatically).
> > So it won't recover if hypervisor can discard
> > kicks.
> > 
> > I think this comment needs clarification.
> 
> Makes sense. I didn't think to consider "hypervisor discards kicks" as a failure
> mode. :) Would virtqueue_kick return false in that case? If so, needs_kick will
> stay true and the kick will keep being retried on subsequent send/recv
> operations until the hypervisor has enabled notifications again.
> 
> I will clarify the comment to touch on that situation.

IIRC virtqueue_kick returns false when device is broken.

If you want to handle that you generally need to reset
the device. Kicking it won't fix it :)

> 
> >> +	 */
> >> +	bool needs_kick;
> >> +
> >> +	/* Number of bytes available to read from the virtqueue buffer. */
> >> +	unsigned int readable;
> >> +
> >> +	/*
> >> +	 * Buffer in which all virtio transfers take place. Buffer size is the
> >> +	 * maximum legal TPM command or response message size.
> >> +	 */
> >> +	u8 virtqueue_buffer[TPM_BUFSIZE];
> >> +};
> >> +
> >> +/*
> >> + * Wait for ownership of the virtqueue buffer.
> >> + *
> >> + * The why-string should begin with "waiting to..." or "waiting for..." with no
> >> + * trailing newline. It will appear in log output.
> >> + *
> >> + * Returns zero for success, otherwise negative error.
> >> + */
> >> +static int vtpm_wait_for_buffer(struct vtpm_device *dev, const char *why)
> >> +{
> >> +	int ret;
> >> +	bool did_kick;
> >> +	struct tpm_chip *chip = dev->chip;
> >> +	unsigned long deadline = jiffies + TPM_VIRTIO_TIMEOUT;
> >> +
> >> +	/* Kick queue if needed. */
> >> +	if (dev->needs_kick) {
> >> +		did_kick = virtqueue_kick(dev->vq);
> >> +		if (!did_kick) {
> >> +			dev_notice(&chip->dev, "kick failed; will retry\n");
> >> +			return -EBUSY;
> >> +		}
> >> +		dev->needs_kick = false;
> >> +	}
> >> +
> >> +	while (!dev->driver_has_buffer) {
> >> +		unsigned long now = jiffies;
> >> +
> >> +		/* Check timeout, otherwise `deadline - now` may underflow. */
> >> +		if time_after_eq(now, deadline) {
> >> +			dev_warn(&chip->dev, "timed out %s\n", why);
> >> +			dev->needs_kick = true;
> >> +			return -ETIMEDOUT;
> >> +		}
> >> +
> >> +		/*
> >> +		 * Wait to be signaled by virtio callback.
> >> +		 *
> >> +		 * Positive ret is jiffies remaining until timeout when the
> >> +		 * completion occurred, which means successful completion. Zero
> >> +		 * ret is timeout. Negative ret is error.
> >> +		 */
> >> +		ret = wait_for_completion_killable_timeout(
> >> +				&dev->complete, deadline - now);
> >> +
> >> +		/* Log if completion did not occur. */
> >> +		if (ret == -ERESTARTSYS) {
> >> +			/* Not a warning if it was simply interrupted. */
> >> +			dev_dbg(&chip->dev, "interrupted %s\n", why);
> >> +		} else if (ret == 0) {
> >> +			dev_warn(&chip->dev, "timed out %s\n", why);
> >> +			ret = -ETIMEDOUT;
> > 
> > Should we check NEEDS_RESET bit and try to reset the device?
> > Or is that too drastic?
> 
> I'll let you make the call on whether we need a reset implemented. This driver
> was initially based on the simple virtio-rng driver which doesn't reset (but nor
> does it detect timeout). Let me know and I can give it a shot.

I think yes - generally NEEDS_RESET is the only mechanism we have for recovering from
device errors. And yes no one implemented it yet :)



> 
> > 
> >> +		} else if (ret < 0) {
> >> +			dev_warn(&chip->dev, "failed while %s: error %d\n",
> >> +					why, -ret);
> >> +		}
> >> +
> >> +		/*
> >> +		 * Return error if completion did not occur. Schedule kick to be
> >> +		 * retried at the start of the next send/recv to help unblock
> >> +		 * the queue.
> >> +		 */
> >> +		if (ret < 0) {
> >> +			dev->needs_kick = true;
> >> +			return ret;
> >> +		}
> >> +
> >> +		/* Completion occurred. Expect response buffer back. */
> >> +		if (virtqueue_get_buf(dev->vq, &dev->readable)) {
> >> +			dev->driver_has_buffer = true;
> >> +
> >> +			if (dev->readable > TPM_BUFSIZE) {
> >> +				dev_crit(&chip->dev,
> >> +						"hypervisor bug: response exceeds max size, %u > %u\n",
> >> +						dev->readable,
> >> +						(unsigned int) TPM_BUFSIZE);
> >> +				dev->readable = TPM_BUFSIZE;
> >> +				return -EPROTO;
> >> +			}
> >> +		}
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int vtpm_op_send(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> >> +{
> >> +	int ret;
> >> +	bool did_kick;
> >> +	struct scatterlist sg_outbuf, sg_inbuf;
> >> +	struct scatterlist *sgs[2] = { &sg_outbuf, &sg_inbuf };
> >> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> >> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> >> +
> >> +	dev_dbg(&chip->dev, __func__ " %zu bytes\n", len);
> >> +
> >> +	if (len > TPM_BUFSIZE) {
> >> +		dev_err(&chip->dev,
> >> +				"command is too long, %zu > %zu\n",
> >> +				len, (size_t) TPM_BUFSIZE);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Wait until hypervisor relinquishes ownership of the virtqueue buffer.
> >> +	 *
> >> +	 * This may block if the previous recv operation timed out in the guest
> >> +	 * kernel but is still being processed by the hypervisor. Also may block
> >> +	 * if send operations are performed back-to-back, such as if something
> >> +	 * in the caller failed in between a send and recv.
> >> +	 *
> >> +	 * During normal operation absent of any errors or timeouts, this does
> >> +	 * not block.
> >> +	 */
> >> +	ret = vtpm_wait_for_buffer(dev, "waiting to begin send");
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	/* Driver owns virtqueue buffer and may now write into it. */
> >> +	memcpy(virtqueue_buf, caller_buf, len);
> >> +
> >> +	/*
> >> +	 * Enqueue the virtqueue buffer once as outgoing virtio data (written by
> >> +	 * the virtual machine and read by the hypervisor) and again as incoming
> >> +	 * data (written by the hypervisor and read by the virtual machine).
> >> +	 * This step moves ownership of the virtqueue buffer from driver to
> >> +	 * hypervisor.
> >> +	 *
> >> +	 * Note that we don't know here how big of a buffer the caller will use
> >> +	 * with their later call to recv. We allow the hypervisor to write up to
> >> +	 * the TPM max message size. If the caller ends up using a smaller
> >> +	 * buffer with recv that is too small to hold the entire response, the
> >> +	 * recv will return an error. This conceivably breaks TPM
> >> +	 * implementations that want to produce a different verbosity of
> >> +	 * response depending on the receiver's buffer size.
> >> +	 */
> >> +	sg_init_one(&sg_outbuf, virtqueue_buf, len);
> >> +	sg_init_one(&sg_inbuf, virtqueue_buf, TPM_BUFSIZE);
> >> +	ret = virtqueue_add_sgs(dev->vq, sgs, 1, 1, virtqueue_buf, GFP_KERNEL);
> >> +	if (ret) {
> >> +		dev_err(&chip->dev, "failed virtqueue_add_sgs\n");
> >> +		return ret;
> >> +	}
> >> +
> >> +	/* Kick the other end of the virtqueue after having added a buffer. */
> >> +	did_kick = virtqueue_kick(dev->vq);
> >> +	if (!did_kick) {
> >> +		dev->needs_kick = true;
> >> +		dev_notice(&chip->dev, "kick failed; will retry\n");
> >> +
> >> +		/*
> >> +		 * We return 0 anyway because what the caller doesn't know can't
> >> +		 * hurt them. They can call recv and it will retry the kick. If
> >> +		 * that works, everything is fine.
> >> +		 *
> >> +		 * If the retry in recv fails too, they will get -EBUSY from
> >> +		 * recv.
> >> +		 */
> >> +	}
> >> +
> >> +	/*
> >> +	 * Hypervisor is now processing the TPM command asynchronously. It will
> >> +	 * read the command from the output buffer and write the response into
> >> +	 * the input buffer (which are the same buffer). When done, it will send
> >> +	 * back the buffers over virtio and the driver's virtio callback will
> >> +	 * complete dev->complete so that we know the response is ready to be
> >> +	 * read.
> >> +	 *
> >> +	 * It is important to have copied data out of the caller's buffer into
> >> +	 * the driver's virtqueue buffer because the caller is free to destroy
> >> +	 * their buffer when this call returns. We can't avoid copying by
> >> +	 * waiting here for the hypervisor to finish reading, because if that
> >> +	 * wait times out, we return and the caller may destroy their buffer
> >> +	 * while the hypervisor is continuing to read from it.
> >> +	 */
> >> +	dev->driver_has_buffer = false;
> >> +	return 0;
> >> +}
> >> +
> >> +static int vtpm_op_recv(struct tpm_chip *chip, u8 *caller_buf, size_t len)
> >> +{
> >> +	int ret;
> >> +	struct vtpm_device *dev = dev_get_drvdata(&chip->dev);
> >> +	u8 *virtqueue_buf = dev->virtqueue_buffer;
> >> +
> >> +	dev_dbg(&chip->dev, "%s\n", __func__);
> >> +
> >> +	/*
> >> +	 * Wait until the virtqueue buffer is owned by the driver.
> >> +	 *
> >> +	 * This will usually block while the hypervisor finishes processing the
> >> +	 * most recent TPM command.
> >> +	 */
> >> +	ret = vtpm_wait_for_buffer(dev, "waiting for TPM response");
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	dev_dbg(&chip->dev, "received %u bytes\n", dev->readable);
> >> +
> >> +	if (dev->readable > len) {
> >> +		dev_notice(&chip->dev,
> >> +				"TPM response is bigger than receiver's buffer: %u > %zu\n",
> >> +				dev->readable, len);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/* Copy response over to the caller. */
> >> +	memcpy(caller_buf, virtqueue_buf, dev->readable);
> >> +
> >> +	return dev->readable;
> >> +}
> >> +
> >> +static void vtpm_op_cancel(struct tpm_chip *chip)
> >> +{
> >> +	/*
> >> +	 * Cancel is not called in this driver's use of tpm-interface.c. It may
> >> +	 * be triggered through tpm-sysfs but that can be implemented as needed.
> >> +	 * Be aware that tpm-sysfs performs cancellation without holding the
> >> +	 * tpm_mutex that protects our send and recv operations, so a future
> >> +	 * implementation will need to consider thread safety of concurrent
> >> +	 * send/recv and cancel.
> >> +	 */
> >> +	dev_notice(&chip->dev, "cancellation is not implemented\n");
> >> +}
> >> +
> >> +static u8 vtpm_op_status(struct tpm_chip *chip)
> >> +{
> >> +	/*
> >> +	 * Status is for TPM drivers that want tpm-interface.c to poll for
> >> +	 * completion before calling recv. Usually this is when the hardware
> >> +	 * needs to be polled i.e. there is no other way for recv to block on
> >> +	 * the TPM command completion.
> >> +	 *
> >> +	 * Polling goes until `(status & complete_mask) == complete_val`. This
> >> +	 * driver defines both complete_mask and complete_val as 0 and blocks on
> >> +	 * our own completion object in recv instead.
> >> +	 */
> >> +	return 0;
> >> +}
> >> +
> >> +static const struct tpm_class_ops vtpm_ops = {
> >> +	.flags = TPM_OPS_AUTO_STARTUP,
> >> +	.send = vtpm_op_send,
> >> +	.recv = vtpm_op_recv,
> >> +	.cancel = vtpm_op_cancel,
> >> +	.status = vtpm_op_status,
> >> +	.req_complete_mask = 0,
> >> +	.req_complete_val = 0,
> >> +};
> >> +
> >> +static void vtpm_virtio_complete(struct virtqueue *vq)
> >> +{
> >> +	struct virtio_device *vdev = vq->vdev;
> >> +	struct vtpm_device *dev = vdev->priv;
> >> +
> >> +	complete(&dev->complete);
> >> +}
> >> +
> >> +static int vtpm_probe(struct virtio_device *vdev)
> >> +{
> >> +	int err;
> >> +	struct vtpm_device *dev;
> >> +	struct virtqueue *vq;
> >> +	struct tpm_chip *chip;
> >> +
> >> +	dev_dbg(&vdev->dev, "%s\n", __func__);
> >> +
> >> +	dev = kzalloc(sizeof(struct vtpm_device), GFP_KERNEL);
> >> +	if (!dev) {
> >> +		err = -ENOMEM;
> >> +		dev_err(&vdev->dev, "failed kzalloc\n");
> >> +		goto err_dev_alloc;
> >> +	}
> >> +	vdev->priv = dev;
> >> +
> >> +	vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
> >> +	if (IS_ERR(vq)) {
> >> +		err = PTR_ERR(vq);
> >> +		dev_err(&vdev->dev, "failed virtio_find_single_vq\n");
> >> +		goto err_virtio_find;
> >> +	}
> >> +	dev->vq = vq;
> >> +
> >> +	chip = tpm_chip_alloc(&vdev->dev, &vtpm_ops);
> >> +	if (IS_ERR(chip)) {
> >> +		err = PTR_ERR(chip);
> >> +		dev_err(&vdev->dev, "failed tpm_chip_alloc\n");
> >> +		goto err_chip_alloc;
> >> +	}
> >> +	dev_set_drvdata(&chip->dev, dev);
> >> +	chip->flags |= TPM_CHIP_FLAG_TPM2;
> >> +	dev->chip = chip;
> >> +
> >> +	init_completion(&dev->complete);
> >> +	dev->driver_has_buffer = true;
> >> +	dev->needs_kick = false;
> >> +	dev->readable = 0;
> >> +
> >> +	/*
> >> +	 * Required in order to enable vq use in probe function for auto
> >> +	 * startup.
> >> +	 */
> >> +	virtio_device_ready(vdev);
> >> +
> >> +	err = tpm_chip_register(dev->chip);
> >> +	if (err) {
> >> +		dev_err(&vdev->dev, "failed tpm_chip_register\n");
> >> +		goto err_chip_register;
> >> +	}
> >> +
> >> +	return 0;
> >> +
> >> +err_chip_register:
> >> +	put_device(&dev->chip->dev);
> >> +err_chip_alloc:
> >> +	vdev->config->del_vqs(vdev);
> >> +err_virtio_find:
> >> +	kfree(dev);
> >> +err_dev_alloc:
> >> +	return err;
> >> +}
> >> +
> >> +static void vtpm_remove(struct virtio_device *vdev)
> >> +{
> >> +	struct vtpm_device *dev = vdev->priv;
> >> +
> >> +	/* Undo tpm_chip_register. */
> >> +	tpm_chip_unregister(dev->chip);
> >> +
> >> +	/* Undo tpm_chip_alloc. */
> >> +	put_device(&dev->chip->dev);
> >> +
> >> +	vdev->config->reset(vdev);
> >> +	vdev->config->del_vqs(vdev);
> >> +
> >> +	kfree(dev);
> >> +}
> >> +
> >> +#define VIRTIO_ID_TPM 31
> >> +
> >> +static struct virtio_device_id id_table[] = {
> >> +	{
> >> +		.device = VIRTIO_ID_TPM,
> >> +		.vendor = VIRTIO_DEV_ANY_ID,
> >> +	},
> >> +	{},
> >> +};
> > 
> > Let's write
> > 
> > static struct virtio_device_id id_table[] = {
> >         { VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
> >         { 0 },
> > };
> > 
> > for consistency with other virtio devices.
> 
> Acknowledged, will do.
> 
> 
> > 
> >> +
> >> +static struct virtio_driver vtpm_driver = {
> >> +	.driver.name = KBUILD_MODNAME,
> >> +	.driver.owner = THIS_MODULE,
> >> +	.id_table = id_table,
> >> +	.probe = vtpm_probe,
> >> +	.remove = vtpm_remove,
> > 
> > Freeze/restore?
> 
> Good call. We don't need this for Chrome OS but I agree it should be in the
> driver. I will provide a basic freeze/restore in line with what I see in
> virtio-rng.
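> Roughly what I have in mind, mirroring virtio-rng (untested sketch; the
> vtpm_freeze/vtpm_restore names and the exact teardown/re-init split are
> mine, not final):
> 
> 	static int vtpm_freeze(struct virtio_device *vdev)
> 	{
> 		struct vtpm_device *dev = vdev->priv;
> 
> 		/* Quiesce the device and drop the vq across suspend. */
> 		vdev->config->reset(vdev);
> 		vdev->config->del_vqs(vdev);
> 		/* After reset the device no longer owns the buffer. */
> 		dev->driver_has_buffer = true;
> 		dev->needs_kick = false;
> 		dev->readable = 0;
> 		return 0;
> 	}
> 
> 	static int vtpm_restore(struct virtio_device *vdev)
> 	{
> 		struct vtpm_device *dev = vdev->priv;
> 		struct virtqueue *vq;
> 
> 		vq = virtio_find_single_vq(vdev, vtpm_virtio_complete, "vtpm");
> 		if (IS_ERR(vq))
> 			return PTR_ERR(vq);
> 		dev->vq = vq;
> 		virtio_device_ready(vdev);
> 		return 0;
> 	}
> 
> plus the corresponding .freeze/.restore hooks (under CONFIG_PM_SLEEP) in
> vtpm_driver.
> 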
> 
> Thanks!
> 
> 
> > 
> > 
> >> +};
> >> +
> >> +module_virtio_driver(vtpm_driver);
> >> +
> >> +MODULE_AUTHOR("David Tolnay (dtolnay@gmail.com)");
> >> +MODULE_DESCRIPTION("Virtio vTPM Driver");
> >> +MODULE_VERSION("1.0");
> >> +MODULE_LICENSE("GPL");
> >> -- 
> >> 2.20.1
> > 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 21:50           ` Jarkko Sakkinen
@ 2019-02-22 22:24             ` David Tolnay
  2019-02-22 22:36               ` Jarkko Sakkinen
  0 siblings, 1 reply; 57+ messages in thread
From: David Tolnay @ 2019-02-22 22:24 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: Michael S. Tsirkin, Peter Huewe, Jason Gunthorpe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On 2/22/19 1:50 PM, Jarkko Sakkinen wrote:
> On Fri, Feb 22, 2019 at 04:25:08PM -0500, Michael S. Tsirkin wrote:
>> On Fri, Feb 22, 2019 at 09:33:05PM +0200, Jarkko Sakkinen wrote:
>>> On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
>>>> On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
>>>>> On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
>>>>>> On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
>>>>>>> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
>>>>>>> kernel side of TPM over virtio.
>>>>>>>
>>>>>>> Use case: TPM support is needed for performing trusted work from within
>>>>>>> a virtual machine launched by Chrome OS.
>>>>>>>
>>>>>>> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
>>>>>>> implementation of the virtio TPM device can be found in these two source
>>>>>>> files:
>>>>>>>
>>>>>>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
>>>>>>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
>>>>>>
>>>>>> These files/links do not make sense for kernel testing. Please remove
>>>>>> them from the next version.
>>>>>
>>>>> To clarify generally for a virtio device we want
>>>>> - guest support
>>>>> - device support
>>>>> - spec
>>>>>
>>>>> If the device is implemented in qemu and guest in linux kernel,
>>>>> then there are lots of people familiar with these
>>>>> programming environments, so sometimes we merge
>>>>> guest and host code even if spec isn't written up at all.
>>>>>
>>>>> If you don't want to do that there's a small number of people who can
>>>>> properly review code, e.g. I don't think lots of people on this list are
>>>>> familiar with crosvm.  One way to address this would be to build a QEMU
>>>>> implementation. Another would be to write up a spec.  You can do both
>>>>> too :)
>>>>
>>>> I don't really understand your arguments.
>>>
>>> ... and I read your response a total of three times and did not find any
>>> causality of any sort in it.
>>>
>>> /Jarkko
>>
>> Thanks for spending the time reading my response.  What was included in
>> it was a general suggestion for a virtio based driver to be acceptable
>> in upstream Linux.
>>
>> You pointed out that a pointer to a prototype implementation in Rust
>> isn't relevant. However, FYI just posting guest code and asking for it
>> to be merged alone won't work for a virtio driver either. I am merely
>> trying to speed things up instead of having the contributor repost with
>> a tweaked commit log just to immediately get another set of nacks.
> 
> I did not say anything about the relevance of any implementation. I mainly
> tried to point out that looking at random source files implemented in a
> language relatively alien to most does not really make a case.
> 
> Things that would help to move this forward:
> 
> - Documentation of the stack, nothing spectacular but more like how it
>   works in practical terms.
> - Sufficiently easy ways to test the code.
> - Explain in the commit message why we want this.
> 
> /Jarkko


Thanks Jarkko, I respect all of the points you've raised and you are absolutely
correct to ask for a spec and/or spec-quality documentation, detailed testing
steps for exercising this driver in qemu or crosvm, and strong justification for
the new driver.

- Regarding spec, at this point we'll follow up with OASIS to claim a device
  number and pin down the protocol in sufficient detail to make it clear when an
  implementation is conformant. As I noted in the patch message, at the very
  least the virtio device number will need to be resolved before this driver
  could be accepted.

  On the Chrome OS side we're comfortable pursuing review of the spec and review
  of the driver in parallel. At what point in the spec process the driver can be
  accepted is up to you folks.

- I will make sure to include detailed testing steps with the next patch.

- I'm glad we are hashing out why this driver is necessary (or why it is not
  necessary, if it turns out that way) in this discussion. You are right that it
  may have been better originally sent as an RFC. Once I understand there to be
  some agreement, I will write that up for the next patch.

Thanks,
David

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:24             ` David Tolnay
@ 2019-02-22 22:36               ` Jarkko Sakkinen
  2019-02-22 23:05                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-22 22:36 UTC (permalink / raw)
  To: David Tolnay
  Cc: Michael S. Tsirkin, Peter Huewe, Jason Gunthorpe,
	linux-integrity, Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 02:24:55PM -0800, David Tolnay wrote:
> On 2/22/19 1:50 PM, Jarkko Sakkinen wrote:
> > On Fri, Feb 22, 2019 at 04:25:08PM -0500, Michael S. Tsirkin wrote:
> >> On Fri, Feb 22, 2019 at 09:33:05PM +0200, Jarkko Sakkinen wrote:
> >>> On Fri, Feb 22, 2019 at 09:31:56PM +0200, Jarkko Sakkinen wrote:
> >>>> On Fri, Feb 22, 2019 at 10:23:02AM -0500, Michael S. Tsirkin wrote:
> >>>>> On Fri, Feb 22, 2019 at 12:26:10PM +0200, Jarkko Sakkinen wrote:
> >>>>>> On Thu, Feb 21, 2019 at 06:14:02PM -0800, David Tolnay wrote:
> >>>>>>> Add a config TCG_VIRTIO_VTPM which enables a driver providing the guest
> >>>>>>> kernel side of TPM over virtio.
> >>>>>>>
> >>>>>>> Use case: TPM support is needed for performing trusted work from within
> >>>>>>> a virtual machine launched by Chrome OS.
> >>>>>>>
> >>>>>>> Tested inside crosvm, the Chrome OS virtual machine monitor. Crosvm's
> >>>>>>> implementation of the virtio TPM device can be found in these two source
> >>>>>>> files:
> >>>>>>>
> >>>>>>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/devices/src/virtio/tpm.rs
> >>>>>>> - https://chromium.googlesource.com/chromiumos/platform/crosvm/+/18ce5713e6cb99c40aafec52b67c28ba12a44f31/tpm2/src/lib.rs
> >>>>>>
> >>>>>> These files/links do not make sense for kernel testing. Please remove
> >>>>>> them from the next version.
> >>>>>
> >>>>> To clarify generally for a virtio device we want
> >>>>> - guest support
> >>>>> - device support
> >>>>> - spec
> >>>>>
> >>>>> If the device is implemented in qemu and guest in linux kernel,
> >>>>> then there are lots of people familiar with these
> >>>>> programming environments, so sometimes we merge
> >>>>> guest and host code even if spec isn't written up at all.
> >>>>>
> >>>>> If you don't want to do that there's a small number of people who can
> >>>>> properly review code, e.g. I don't think lots of people on this list are
> >>>>> familiar with crosvm.  One way to address this would be to build a QEMU
> >>>>> implementation. Another would be to write up a spec.  You can do both
> >>>>> too :)
> >>>>
> >>>> I don't really understand your arguments.
> >>>
> >>> ... and I read your response a total of three times and did not find any
> >>> causality of any sort in it.
> >>>
> >>> /Jarkko
> >>
> >> Thanks for spending the time reading my response.  What was included in
> >> it was a general suggestion for a virtio based driver to be acceptable
> >> in upstream Linux.
> >>
> >> You pointed out that a pointer to a prototype implementation in Rust
> >> isn't relevant. However, FYI just posting guest code and asking for it
> >> to be merged alone won't work for a virtio driver either. I am merely
> >> trying to speed things up instead of having the contributor repost with
> >> a tweaked commit log just to immediately get another set of nacks.
> > 
> > I did not say anything about the relevance of any implementation. I mainly
> > tried to point out that looking at random source files implemented in a
> > language relatively alien to most does not really make a case.
> > 
> > Things that would help to move this forward:
> > 
> > - Documentation of the stack, nothing spectacular but more like how it
> >   works in practical terms.
> > - Sufficiently easy ways to test the code.
> > - Explain in the commit message why we want this.
> > 
> > /Jarkko
> 
> 
> Thanks Jarkko, I respect all of the points you've raised and you are absolutely
> correct to ask for a spec and/or spec-quality documentation, detailed testing
> steps for exercising this driver in qemu or crosvm, and strong justification for
> the new driver.

I do not require spec quality documentation. Just a few paragraphs of
what is in crosvm, kernel etc. and something about inner workings to
get a rough idea. No need for TCG level spec for this :-) Something
in the spirit of what you could bundle under Documentation/ in the
kernel tree. Maybe you could consider putting something there and
making this a patch set?

To get started with testing this properly, is there documentation
anywhere on "here's how you compile and run crosvm"? If I can
run it myself, it will be easier for me to understand what happens even
without a spec.

Clone the repository and cargo build?

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:36               ` Jarkko Sakkinen
@ 2019-02-22 23:05                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 57+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22 23:05 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Sat, Feb 23, 2019 at 12:36:34AM +0200, Jarkko Sakkinen wrote:
> I do not require spec quality documentation. Just a few paragraphs of
> what is in crosvm, kernel etc. and something about inner workings to
> get a rough idea. No need for TCG level spec for this :-)

OTOH for virtio if there's no implementation in a popular
hypervisor/language, we do ask for a somewhat detailed
spec.

-- 
MST

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:18     ` James Bottomley
@ 2019-02-23  0:45       ` David Tolnay
  2019-02-23  1:34         ` James Bottomley
  0 siblings, 1 reply; 57+ messages in thread
From: David Tolnay @ 2019-02-23  0:45 UTC (permalink / raw)
  To: James Bottomley
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On 2/22/19 2:18 PM, James Bottomley wrote:
> On Fri, 2019-02-22 at 14:00 -0800, David Tolnay wrote:
>> On 2/22/19 7:30 AM, James Bottomley wrote:
>>> On Thu, 2019-02-21 at 18:14 -0800, David Tolnay wrote:
>>>> Add a config TCG_VIRTIO_VTPM which enables a driver providing the
>>>> guest kernel side of TPM over virtio.
>>>
>>> What's the use case for using this over the current non-virtio
>>> vTPM?. I always thought virtio was about guest to host transport
>>> efficiency, but the physical TPM, being connected over a very slow
>>> bus, is about as inefficient as you can get in that regard, so why
>>> do we need to use virtio to drive the virtual one?
>>>
>>>> Use case: TPM support is needed for performing trusted work from
>>>> within a virtual machine launched by Chrome OS.
>>>
>>> The current vTPM does this, what's the use case for your special
>>> one?
>>
>> Thanks James, these are important questions and the intention
>> certainly isn't to have another driver that does the same thing with
>> differences for no reason.
>>
>> I see three existing vTPM drivers already in drivers/char/tpm.
>>
>> - tpm_ibmvtpm, which is specific to PowerPC and implemented in terms
>> of PowerPC hcalls.
>>
>> - xen-tpmfront, which is specific to Xen.
>>
>> - tpm_vtpm_proxy, which as I understand it is intended to enable
>>   userspace TPM. That is, if we are using this driver in a guest
>>   kernel, the TPM implementation also needs to reside in the guest
>>   kernel rather than in the hypervisor.
>>
>> For our use case which is not PowerPC and is running in our own
>> hypervisor with the TPM needing to be provided by the hypervisor,
>> none of the existing vTPM drivers seemed to fit the bill.
>>
>> Please let me know if I arrived at the wrong conclusion on this!
> 
> Actually, yes, your third statement is not wholly correct:  The in-
> kernel vTPM proxy can certainly be used to emulate a TPM within a guest
> for that guest to use without any support from the hypervisor. 
> However, when you have the correct qemu (requires a recent one), the
> vTPM emulator can run in the host (or hypervisor) and be passed through
> to the guest.  The best description of how to do that seems to be this
> blog entry:
> 
> https://s3hh.wordpress.com/2018/06/03/tpm-2-0-in-qemu/
> 
> So won't this mode of operation exactly work for you (obviously with
> necessary modifications to the crosvm hypervisor)?

I appreciate the explanation and link, James!

I had briefly investigated the existing support in QEMU before
pursuing a virtio based driver. At the time, I determined that QEMU
implements a register level emulation of a TPM rather than what our
team would consider a minimum viable vTPM. It implements the
TPM-specific TIS interface (QEMU's tpm_tis.c) as well as CRB
interface (QEMU's tpm_crb.c) which require Linux's TIS driver
(Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c) respectively.
Both of those are based on ACPI.

As far as I can tell, QEMU does not provide a mode in which the
tpm_vtpm_proxy driver would be involved *in the guest*. Certainly
you could use a vtpm proxy driver *on the host* but would still need
some other TPM driver running in the guest for communication with
the host, possibly virtio. If this second approach is what you have
in mind, let me know but I don't think it is applicable to the
Chrome OS use case.

Clearly it's possible for us to go the QEMU route and implement ACPI
(which crosvm does not otherwise need) plus one or both of TIS and
CRB in crosvm, but since all we need is for TPM command buffers to
leave the VM and TPM response buffers to enter the VM, all of that
seems unnecessarily complicated. A virtio driver substantially
lowers the barrier to implementing a hypervisor vTPM.

Separately, I'd be curious whether you share Jason Gunthorpe's
opinion stated elsewhere in the thread, or whether you would
encourage the virtio TPM driver to be kept private if feasible
alternative drivers already exist. Jason's comment:

> We already have a xen 'virtioish' TPM driver, so I don't think there
> is a good reason to block a virtio driver if someone cares about
> it. There are enough good reasons to prefer virtio to other options,
> IMHO.

Best,
David

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 22:24       ` Michael S. Tsirkin
  (?)
@ 2019-02-23  1:23       ` David Tolnay
  -1 siblings, 0 replies; 57+ messages in thread
From: David Tolnay @ 2019-02-23  1:23 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On 2/22/19 2:24 PM, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 01:40:25PM -0800, David Tolnay wrote:
>> On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
>>>
>>> Maintainer entry?
>>
>> Good call, I will add one.
>>
>> Could you let me know what workflow would work best for you? I can direct
>> patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
>> your way only once they have been reviewed and accepted there. Or I can list one
>> or both of the linux-integrity@v.k.o and virtualization@l.l-f.o lists along with
>> a list for Chrome OS reviewers.
>>
>> Either way, we'll want eyes from people who know virtio and from people who know
>> TPM on most changes.
> 
> Well if you are changing a host/guest interface then you also need to copy
> virtio-dev. That one is subscriber only so that would
> imply sending there after it's reviewed in chrome.
> As an extra bonus reviewed code is hopefully higher quality
> so less review work for me ;)
> so the first option sounds like a better plan.
> 
> But I hope accepted above just means it goes on some branch,
> such that we won't end up with production code that
> is set in stone and needs to be maintained forever?

Understood. We will avoid considering anything canonical before
upstream has a chance to weigh in. Thanks for the caution.


>>>> + *
>>>> + * The intended hypervisor-side implementation is as follows.
>>>> + *
>>>> + *     while true:
>>>> + *         await next virtio buffer.
>>>> + *         expect first descriptor in chain to be guest-to-host.
>>>> + *         read tpm command from that buffer.
>>>> + *         synchronously perform TPM work determined by the command.
>>>> + *         expect second descriptor in chain to be host-to-guest.
>>>> + *         write TPM response into that buffer.
>>>> + *         place buffer on virtio used queue indicating how many bytes written.
>>>
>>> That's fine I think except generally it should be legal for guest
>>> to split each buffer to several segments.
>>
>> Acknowledged. I will adjust this comment.
>>
>> To clarify, this means the hypervisor will need to accept a single descriptor
>> chain consisting of one or more guest-to-host descriptors which together form
>> the command, followed by one or more host-to-guest descriptors into which the
>> response may be written. Is that what you had in mind?
> 
> Exactly.
> 
>> TPM commands and responses are both capped at a moderate size (4096 bytes). With
>> that in mind, is it still necessary to allow split buffers? We won't be dealing
>> with multi-megabyte transfers.
> 
> This has been a general virtio rule, yes. See for example "2.6.4 Message
> Framing" in v1.1 draft.
> 
> It's generally not hard to implement and generally easier than arguing about
> whether it's necessary. If you think it's a big problem, it's a good
> idea to take it up with the virtio tc. Address this to virtio-comment
> list.

I appreciate the explanation. As you said, it is easy to implement
so I'll fix this comment to allow for buffers split into segments. I
didn't know about this general virtio rule but it makes sense to me!
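
To make sure I have the right picture: a conforming driver could just as
well hand over, say, the command header and body as two separate
guest-to-host segments, so the device side has to cope with something
like the following (hypothetical layout purely for illustration; the
driver in this patch keeps using a single segment each way):

	struct scatterlist sg_hdr, sg_body, sg_resp;
	struct scatterlist *sgs[3] = { &sg_hdr, &sg_body, &sg_resp };

	/* Command split across two guest-to-host segments... */
	sg_init_one(&sg_hdr, virtqueue_buf, 10);
	sg_init_one(&sg_body, virtqueue_buf + 10, len - 10);
	/* ...and a single host-to-guest segment for the response. */
	sg_init_one(&sg_resp, virtqueue_buf, TPM_BUFSIZE);

	ret = virtqueue_add_sgs(dev->vq, sgs, 2, 1, virtqueue_buf, GFP_KERNEL);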


>>>> +/*
>>>> + * Timeout duration when waiting on the hypervisor to complete its end of the
>>>> + * TPM operation. This timeout is relatively high because certain TPM operations
>>>> + * can take dozens of seconds.
>>>> + */
>>>> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
>>>
>>> Should we read this from device? E.g. a busy hypervisor might
>>> take longer to respond ...
>>
>> That's true. Do you have an example of an existing virtio driver that reads
>> timeout from the hypervisor?
>>
>> To understand your perspective, do you see one of the following as being the
>> bigger advantage? -- either that we can use a timeout tighter than 120s for
>> hypervisors that believe they can respond quickly to any command, or that we can
>> relax the timeout beyond 120s for an especially overburdened hypervisor. The
>> first is nice because the caller in the guest will receive their error code
>> sooner in certain failure modes, but maybe that would be better addressed by
>> some other way of poking the host to find out whether it is still alive and
>> working. For the second, 120s is pretty generous and in our use case we would
>> just want to give up and retry much later at that point; I don't know how much
>> further it would be reasonable to grow a timeout.
> 
> I think the whole idea around timeout handling needs a bit more
> thought. What kind of reasons for the timeout do you envision
> that require the extra kicks?

The hesitation I had with responding to timeouts more aggressively
than just kicking again was that we don't want to end up in a
situation where the guest driver performs some kind of reset while
the hypervisor is still reading or writing one of the driver's
buffers. If timeout triggers a full remove and probe, including
kfree on the vtpm_device containing the buffer, that seems likely to
lead to memory corruption in the guest.

As you commented elsewhere, it sounds like NEEDS_RESET is the way to
go. If the guest driver times out, and observes that NEEDS_RESET bit
is set, then we can assume the hypervisor is not continuing to write
memory so a reset is safe.

Do you have guidance for how to safely handle a timeout in which
NEEDS_RESET has not been set?
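
Coming back to the earlier question about reading the timeout from the
device: if we go that way, I picture a config space field along these
lines. Purely a sketch; the struct, field, and feature bit names below
are made up and not part of any spec.

	struct virtio_tpm_config {
		__u32 timeout_ms;	/* hypothetical field */
	};

	/* In vtpm_probe(), falling back to the current 120s default: */
	u32 timeout_ms = 120 * MSEC_PER_SEC;

	if (virtio_has_feature(vdev, VIRTIO_TPM_F_TIMEOUT))	/* made-up bit */
		virtio_cread(vdev, struct virtio_tpm_config, timeout_ms,
			     &timeout_ms);
	dev->timeout_jiffies = msecs_to_jiffies(timeout_ms);	/* new field */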

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-23  0:45       ` David Tolnay
@ 2019-02-23  1:34         ` James Bottomley
  2019-02-23  2:41           ` David Tolnay
  0 siblings, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-23  1:34 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
[...]
> I appreciate the explanation and link, James!
> 
> I had briefly investigated the existing support in QEMU before
> pursuing a virtio based driver. At the time, I determined that QEMU
> implements a register level emulation of a TPM rather than what our
> team would consider a minimum viable vTPM.

Actually, no, it doesn't at all.  QEMU implements nothing about a TPM. 
You have to set up a software TPM outside of qemu which talks over a
socket and then use the vTPM socket to pass that TPM through to qemu. 
Effectively QEMU is TPM implementation blind (which is why it can do
both 1.2 and 2.0); all it provides is discovery of the virtual hardware.

>  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c) as
> well as CRB interface (QEMU's tpm_crb.c) which require Linux's TIS
> driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
> respectively. Both of those are based on ACPI.

That's right, QEMU implements the device interface emulation, but it
passes the actual TPM communication packets to the vTPM outside QEMU.

> As far as I can tell, QEMU does not provide a mode in which the
> tpm_vtpm_proxy driver would be involved *in the guest*.

It doesn't need to.  The vTPM proxy can itself do all of that using the
guest Linux kernel.  There's no hypervisor or host involvement.  This
is analogous to the vTPM for container use case, except that to get
both running in a guest you'd use no containment, so the vtpm client
and server run in the guest together:

https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.html
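
For reference, the guest-side setup is small; roughly this (userspace
sketch against the vtpm_proxy uapi, error handling trimmed):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vtpm_proxy.h>

	int main(void)
	{
		struct vtpm_proxy_new_dev new_dev = {
			.flags = VTPM_PROXY_FLAG_TPM2,
		};
		int ctrl = open("/dev/vtpmx", O_RDWR);

		if (ctrl < 0 || ioctl(ctrl, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0)
			return 1;

		/*
		 * /dev/tpm<tpm_num> now exists for clients inside the guest;
		 * new_dev.fd is the server side on which the emulator reads
		 * commands and writes responses.
		 */
		printf("tpm%u, server fd %u\n", new_dev.tpm_num, new_dev.fd);
		return 0;
	}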

>  Certainly you could use a vtpm proxy driver *on the host* but would
> still need some other TPM driver running in the guest for
> communication with the host, possibly virtio. If this second approach
> is what you have in mind, let me know but I don't think it is
> applicable to the Chrome OS use case.

Actually, the vTPM on-host use case doesn't use the in-kernel vtpm
proxy driver; it uses a plain unix socket.  That's what the original
website tried to explain: you set up swtpm in socket mode, you point
the qemu tpm emulation at the socket and you boot up your guest.

> Clearly it's possible for us to go the QEMU route and implement ACPI
> (which crosvm does not otherwise need) plus one or both of TIS and
> CRB in crosvm, but since all we need is for TPM command buffers to
> leave the VM and TPM response buffers to enter the VM, all of that
> seems unnecessarily complicated. A virtio driver substantially
> lowers the barrier to implementing a hypervisor vTPM.

I don't believe it requires ACPI; that's just one common way of
enumerating TPMs and it's how the guest finds it.  If you implemented
the QEMU passthrough in crosvm, you could use whatever mechanism that's
convenient to you and would cause a TPM driver to bind.  It's the QEMU
layer that provides the virtual hardware emulation for the device and
the external vTPM that provides the TPM implementation.  The two are
completely decoupled.

Are you saying crosvm has no ability at all to emulate the discovery
that we use in the kernel to find TPMs?  Is it some type of
firecracker-like thing that only supports fully emulated devices?

> Separately, I'd be curious whether you share Jason Gunthorpe's
> opinion stated elsewhere in the thread, or whether you would
> encourage the virtio TPM driver to be kept private if feasible
> alternative drivers already exist. Jason's comment:
> 
> > We already have a xen 'virtioish' TPM driver, so I don't think
> > there is a good reason to block a virtio driver if someone cares
> > about it. There are enough good reasons to prefer virtio to other
> > options, IMHO.

I've no real opinion on that one until I understand why you went down
this path instead of using existing implementations.  Right at the
moment I do get the impression it's because you didn't know how the
existing implementations worked.

James


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-23  1:34         ` James Bottomley
@ 2019-02-23  2:41           ` David Tolnay
  2019-02-24 16:30             ` James Bottomley
  0 siblings, 1 reply; 57+ messages in thread
From: David Tolnay @ 2019-02-23  2:41 UTC (permalink / raw)
  To: James Bottomley
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On 2/22/19 5:34 PM, James Bottomley wrote:
> On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
> [...]
>> I appreciate the explanation and link, James!
>>
>> I had briefly investigated the existing support in QEMU before
>> pursuing a virtio based driver. At the time, I determined that QEMU
>> implements a register level emulation of a TPM rather than what our
>> team would consider a minimum viable vTPM.
> 
> Actually, no, it doesn't at all.  QEMU implements nothing about a TPM. 
> You have to set up a software TPM outside of qemu which talks over a
> socket and then use the vTPM socket to pass that TPM through to qemu. 
> Effectively QEMU is TPM implementation blind (which is why it can do
> both 1.2 and 2.0) all it provides is discovery of the virtual hardware.

Thanks, this sounds very similar to our use case. We'd like for crosvm
to be TPM implementation blind as well, with the TPM implementation
running in the host and attached via socket or D-Bus to the hypervisor.
The TPM implementation may be purely software or may be a daemon backed
by hardware TPM.

Sounds like there is a lot of overlap.


>>  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c) as
>> well as CRB interface (QEMU's tpm_crb.c) which require Linux's TIS
>> driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
>> respectively. Both of those are based on ACPI.
> 
> That's right, QEMU implements the device interface emulation, but it
> passes the actual TPM communication packets to the vTPM outside QEMU.

Could you clarify what you mean by a TPM communication packet since I am
less familiar with TPM and QEMU? I don't see "packet" terminology being
used in drivers/char/tpm. Is a packet equivalent to a fully formed TPM
command / response or is it a lower level aspect of the device interface
than that?

More concretely, would you say that a hypervisor necessarily needs to
implement TPM device interface emulation (TIS and/or CRB) in order to
expose a TPM running on the host to its guest OS? I can see QEMU has
those things.


>> As far as I can tell, QEMU does not provide a mode in which the
>> tpm_vtpm_proxy driver would be involved *in the guest*.
> 
> It doesn't need to.  The vTPM proxy can itself do all of that using
> the guest Linux kernel.  There's no hypervisor or host
> involvement.  This is analogous to the vTPM for container use case,
> both running in a guest you'd use no containment, so the vtpm client
> and server run in the guest together:
> 
> https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.html

I apologize for still not grasping how this would apply. You bring up a
vtpm proxy that runs in the guest Linux kernel with no hypervisor or
host involvement, with the vtpm client and server running in the guest
together. But host involvement is specifically what we want since only
the host is trusted to run the software TPM implementation or interact
with a hardware TPM. I am missing a link in the chain:

- guest userspace makes TPM call (through tpm2-tss or however else);
- guest kernel receives the call in tpm-dev-common / tpm-interface;
- tpm-interface delegates to a tpm-chip implementation (which one?
  vtpm_proxy_tpm_ops?);
- ???
- a host daemon triages and eventually performs the TPM operation.


>> Certainly you could use a vtpm proxy driver *on the host* but would
>> still need some other TPM driver running in the guest for
>> communication with the host, possibly virtio. If this second approach
>> is what you have in mind, let me know but I don't think it is
>> applicable to the Chrome OS use case.
> 
> Actually, the vTPM on-host use case doesn't use the in kernel vtpm
> proxy driver, it uses a plain unix socket.  That's what the original
> website tried to explain: you set up swtpm in socket mode, you point
> the qemu tpm emulation at the socket and you boot up your guest.

Okay. If I understand correctly, the vTPM on-host use case operates
through TIS and/or CRB implemented in QEMU and the tpm_tis / tpm_crb
driver in the guest. Do I have it right?

All of this is what I would like to avoid by using a virtio driver.


>> Clearly it's possible for us to go the QEMU route and implement ACPI
>> (which crosvm does not otherwise need) plus one or both of TIS and
>> CRB in crosvm, but since all we need is for TPM command buffers to
>> leave the VM and TPM response buffers to enter the VM, all of that
>> seems unnecessarily complicated. A virtio driver substantially
>> lowers the barrier to implementing a hypervisor vTPM.
> 
> I don't believe it requires ACPI, that's just one common way of
> enumerating TPMs and it's how the guest finds it.  If you implemented
> the QEMU passthrough in crosvm, you could use whatever mechanism that's
> convenient to you and would cause a TPM driver to bind.  It's the QEMU
> layer that provides the virtual hardware emulation for the device and
> the external vTPM that provides the TPM implementation.  The two are
> completely decoupled.
> 
> Are you saying crosvm has no ability at all to emulate the discovery
> that we use in the kernel to find TPMs?  Is it some type of
> firecracker-like thing that only supports fully emulated devices?

I am still digesting the rest of your comment, but yes, Firecracker is a
fork of crosvm so they are similar in this regard.

Thanks for your guidance and patience!


>> Separately, I'd be curious whether you share Jason Gunthorpe's
>> opinion stated elsewhere in the thread, or whether you would
>> encourage the virtio TPM driver to be kept private if feasible
>> alternative drivers already exist. Jason's comment:
>>
>>> We already have a xen 'virtioish' TPM driver, so I don't think
>>> there is a good reason to block a virtio driver if someone cares
>>> about it. There are enough good reasons to prefer virtio to other
>>> options, IMHO.
> 
> I've no real opinion on that one until I understand why you went down
> this path instead of using existing implementations.  Right at the
> moment I do get the impression its because you didn't know how the
> existing implementations worked.
> 
> James
> 
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22 23:05                   ` Michael S. Tsirkin
  (?)
@ 2019-02-24  9:33                   ` Jarkko Sakkinen
  -1 siblings, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-24  9:33 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 06:05:01PM -0500, Michael S. Tsirkin wrote:
> On Sat, Feb 23, 2019 at 12:36:34AM +0200, Jarkko Sakkinen wrote:
> > I do not require spec quality documentation. Just a few paragraphs of
> > what is in crosvm, kernel etc. and something about inner workings to
> > get a rough idea. No need for TCG level spec for this :-)
> 
> OTOH for virtio if there's no implementation in a popular
> hypervisor/language, we do ask for a somewhat detailed
> spec.

I'm not really sure what you mean by that at all, but for me it would
be sufficient to have something that describes the virtio-TPM stack in
a few paragraphs.

Could potentially be a patch for Documentation/tpm/.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-23  2:41           ` David Tolnay
@ 2019-02-24 16:30             ` James Bottomley
  2019-02-24 17:51               ` Jarkko Sakkinen
  2019-02-24 22:12               ` David Tolnay
  0 siblings, 2 replies; 57+ messages in thread
From: James Bottomley @ 2019-02-24 16:30 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Fri, 2019-02-22 at 18:41 -0800, David Tolnay wrote:
> On 2/22/19 5:34 PM, James Bottomley wrote:
> > On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
[...]
> > >  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c)
> > > as well as CRB interface (QEMU's tpm_crb.c) which require Linux's
> > > TIS driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
> > > respectively. Both of those are based on ACPI.
> > 
> > That's right, QEMU implements the device interface emulation, but
> > it passes the actual TPM communication packets to the vTPM outside
> > QEMU.
> 
> Could you clarify what you mean by a TPM communication packet since I
> am less familiar with TPM and QEMU?

Like most standards defined devices, TPMs have a defined protocol, in
this case defined by the trusted computing group.  It's a
request/response model.  The job of the kernel is to expose this
request response packet interface.  The device manufacturers don't get
any flexibility, so their devices have to implement it and the only
freedom they get is how the device is attached to the hardware.

>  I don't see "packet" terminology being used in drivers/char/tpm. Is
> a packet equivalent to a fully formed TPM command / response or is it
> a lower level aspect of the device interface than that?

It's a request/response corresponding to a command and its completion
or error.
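
Concretely, every request and response begins with the same ten-byte
header, and the rest of the buffer is command-specific parameters
(sketch for the TPM 2.0 case; the struct name here is mine, the kernel
keeps its own equivalent under drivers/char/tpm):

	/* All fields are big-endian on the wire. */
	struct tpm_cmd_hdr {
		__be16 tag;	/* e.g. TPM2_ST_NO_SESSIONS or TPM2_ST_SESSIONS */
		__be32 length;	/* total size of the buffer, header included */
		__be32 code;	/* command ordinal, or response code on the way back */
	} __packed;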

> More concretely, would you say that a hypervisor necessarily needs to
> implement TPM device interface emulation (TIS and/or CRB) in order to
> expose a TPM running on the host to its guest OS? I can see QEMU has
> those things.

A hypervisor is needed to implement discovery, and whether its
discovery over a virtual or physical bus, that part is required.

> > > As far as I can tell, QEMU does not provide a mode in which the
> > > tpm_vtpm_proxy driver would be involved *in the guest*.
> > 
> > It doesn't need to.  the vTPM proxy can itself do all of that using
> > the guest Linux kernel.  There's no hypervisor or host
> > involvement.  This is analogous to the vTPM for container use case,
> > except that to get both running in a guest you'd use no
> > containment, so the vtpm client and server run in the guest
> > together:
> > 
> > https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.h
> > tml
> 
> I apologize for still not grasping how this would apply. You bring up
> a vtpm proxy that runs in the guest Linux kernel with no hypervisor
> or host involvement, with the vtpm client and server running in the
> guest together. But host involvement is specifically what we want
> since only the host is trusted to run the software TPM implementation
> or interact with a hardware TPM. I am missing a link in the chain:

Well, in your previous email you asked how you would run the emulator
in the guest.  This is how.  If you're actually not interested in that
use case we don't need to discuss it further.

> - guest userspace makes TPM call (through tpm2-tss or however else);
> - guest kernel receives the call in tpm-dev-common / tpm-interface;
> - tpm-interface delegates to a tpm-chip implementation (which one?
>   vtpm_proxy_tpm_ops?);
> - ???
> - a host daemon triages and eventually performs the TPM operation.
> 
> 
> > > Certainly you could use a vtpm proxy driver *on the host* but
> > > would still need some other TPM driver running in the guest for
> > > communication with the host, possibly virtio. If this second
> > > approach is what you have in mind, let me know but I don't think
> > > it is applicable to the Chrome OS use case.
> > 
> > Actually, the vTPM on-host use case doesn't use the in kernel vtpm
> > proxy driver, it uses a plain unix socket.  That's what the
> > original website tried to explain: you set up swtpm in socket mode,
> > you point the qemu tpm emulation at the socket and you boot up your
> > guest.
> 
> Okay. If I understand correctly, the vTPM on-host use case operates
> through TIS and/or CRB implemented in QEMU and the tpm_tis / tpm_crb
> driver in the guest. Do I have it right?

No, vTPM operates purely at the packet level over various interfaces. 
Microsoft defines an actual network packet interface called socsim, but
this can also run over unix sockets, which is what the current QEMU
uses.

QEMU implements a virtual hardware emulation for discovery, but once
discovered all the packet communication is handed off to the vTPM
socket.

The virtual hardware emulation can be anything we have a driver for. 
TIS is the simplest, which is why I think they used it.  TIS is
actually a simple interface specification, it supports discovery over
anything, but the discovery implemented in standard guest drivers is
over ACPI, OF and PNP.  If you want more esoteric discovery methods, we
also support i2c.  However, the latter is really only for embedded.  I
think QEMU chose TIS because it works seamlessly on both Linux and
Windows guests.


> All of this is what I would like to avoid by using a virtio driver.

How? Discovery is the part that you have to do, whether it's using
emulated physical mechanisms or virtual bus discovery.

If you want to make this more concrete: I once wrote a pure socsim
packet TPM driver:

https://patchwork.ozlabs.org/patch/712465/

Since you just point it at the network socket, it does no discovery at
all and works in any Linux environment that has net.  I actually still
use it because a socsim TPM is easier to debug from the outside. 
However, it was 230 lines.  Your driver is 460, so that means about half
of it is actually about discovery.

The only reasons I can see to use a virtual bus are either because it's
way more efficient (the storage/network use case) or because you've
stripped down the hypervisor so far that it's incapable of emulating
any physical device (the firecracker use case).

James


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-24 16:30             ` James Bottomley
@ 2019-02-24 17:51               ` Jarkko Sakkinen
  2019-02-24 22:12               ` David Tolnay
  1 sibling, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-24 17:51 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Sun, Feb 24, 2019 at 08:30:19AM -0800, James Bottomley wrote:
> On Fri, 2019-02-22 at 18:41 -0800, David Tolnay wrote:
> > On 2/22/19 5:34 PM, James Bottomley wrote:
> > > On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
> [...]
> > > >  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c)
> > > > as well as CRB interface (QEMU's tpm_crb.c) which require Linux's
> > > > TIS driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
> > > > respectively. Both of those are based on ACPI.
> > > 
> > > That's right, QEMU implements the device interface emulation, but
> > > it passes the actual TPM communication packets to the vTPM outside
> > > QEMU.
> > 
> > Could you clarify what you mean by a TPM communication packet since I
> > am less familiar with TPM and QEMU?
> 
> Like most standards defined devices, TPMs have a defined protocol, in
> this case defined by the trusted computing group.  It's a
> request/response model.  The job of the kernel is to expose this
> request response packet interface.  The device manufacturers don't get
> any flexibility, so their devices have to implement it and the only
> freedom they get is how the device is attached to the hardware.
> 
> >  I don't see "packet" terminology being used in drivers/char/tpm. Is
> > a packet equivalent to a fully formed TPM command / response or is it
> > a lower level aspect of the device interface than that?
> 
> It's a request/response corresponding to a command and its completion
> or error.
> 
> > More concretely, would you say that a hypervisor necessarily needs to
> > implement TPM device interface emulation (TIS and/or CRB) in order to
> > expose a TPM running on the host to its guest OS? I can see QEMU has
> > those things.
> 
> A hypervisor is needed to implement discovery, and whether its
> discovery over a virtual or physical bus, that part is required.
> 
> > > > As far as I can tell, QEMU does not provide a mode in which the
> > > > tpm_vtpm_proxy driver would be involved *in the guest*.
> > > 
> > > It doesn't need to.  the vTPM proxy can itself do all of that using
> > > the guest Linux kernel.  There's no hypervisor or host
> > > involvement.  This is analogous to the vTPM for container use case,
> > > except that to get both running in a guest you'd use no
> > > containment, so the vtpm client and server run in the guest
> > > together:
> > > 
> > > https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.h
> > > tml
> > 
> > I apologize for still not grasping how this would apply. You bring up
> > a vtpm proxy that runs in the guest Linux kernel with no hypervisor
> > or host involvement, with the vtpm client and server running in the
> > guest together. But host involvement is specifically what we want
> > since only the host is trusted to run the software TPM implementation
> > or interact with a hardware TPM. I am missing a link in the chain:
> 
> Well, in your previous email you asked how you would run the emulator
> in the guest.  This is how.  If you're actually not interested in that
> use case we don't need to discuss it further.
> 
> > - guest userspace makes TPM call (through tpm2-tss or however else);
> > - guest kernel receives the call in tpm-dev-common / tpm-interface;
> > - tpm-interface delegates to a tpm-chip implementation (which one?
> >   vtpm_proxy_tpm_ops?);
> > - ???
> > - a host daemon triages and eventually performs the TPM operation.
> > 
> > 
> > > > Certainly you could use a vtpm proxy driver *on the host* but
> > > > would still need some other TPM driver running in the guest for
> > > > communication with the host, possibly virtio. If this second
> > > > approach is what you have in mind, let me know but I don't think
> > > > it is applicable to the Chrome OS use case.
> > > 
> > > Actually, the vTPM on-host use case doesn't use the in kernel vtpm
> > > proxy driver, it uses a plain unix socket.  That's what the
> > > original website tried to explain: you set up swtpm in socket mode,
> > > you point the qemu tpm emulation at the socket and you boot up your
> > > guest.
> > 
> > Okay. If I understand correctly, the vTPM on-host use case operates
> > through TIS and/or CRB implemented in QEMU and the tpm_tis / tpm_crb
> > driver in the guest. Do I have it right?
> 
> No, vTPM operates purely at the packet level over various interfaces. 
> Microsoft defines an actual network packet interface called socsim, but
> this can also run over unix sockets, which is what the current QEMU
> uses..
> 
> QEMU implements a virtual hardware emulation for discovery, but once
> discovered all the packet communication is handed off to the vTPM
> socket.
> 
> The virtual hardware emulation can be anything we have a driver for. 
> TIS is the simplest, which is why I think they used it.  TIS is
> actually a simple interface specification, it supports discovery over
> anything, but the discovery implemented in standard guest drivers is
> over ACPI, OF and PNP.  If you want more esoteric discovery methods, we
> also support i2c.  However, that latter is really only for embedded.  I
> think QEMU chose TIS because it works seamlessly on both Linux and
> Windows guests.
> 
> 
> > All of this is what I would like to avoid by using a virtio driver.
> 
> How? Discovery is the part that you have to do, whether it's using
> emulated physical mechanisms or virtual bus discovery.
> 
> If you want to make this more concrete: I once wrote a pure socsim
> packet TPM driver:
> 
> https://patchwork.ozlabs.org/patch/712465/
> 
> Since you just point it at the network socket, it does no discovery at
> all and works in any Linux environment that has net.  I actually still
> use it because a socsim TPM is easier to debug from the outside. 
> However it was 230 lines.  Your device is 460 so that means about half
> your driver is actually about discovery.
> 
> The only reasons I can see to use a virtual bus are either because it's
> way more efficient (the storage/network use case) or because you've
> stripped down the hypervisor so far that it's incapable of emulating
> any physical device (the firecracker use case).

Thanks for the feedback James. It has been really useful and in-depth.

The yes/no decision boils down to whether there is any hard reason
that the virtio driver is absolutely required rather than crosvm
implementing the same emulation model as QEMU does.

/Jarkko

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-24 16:30             ` James Bottomley
  2019-02-24 17:51               ` Jarkko Sakkinen
@ 2019-02-24 22:12               ` David Tolnay
  2019-02-25  9:55                 ` Jarkko Sakkinen
  2019-02-25 15:36                 ` James Bottomley
  1 sibling, 2 replies; 57+ messages in thread
From: David Tolnay @ 2019-02-24 22:12 UTC (permalink / raw)
  To: James Bottomley
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On 2/24/19 8:30 AM, James Bottomley wrote:
> QEMU implements a virtual hardware emulation for discovery, but once
> discovered all the packet communication is handed off to the vTPM
> socket.
> 
> The virtual hardware emulation can be anything we have a driver for. 
> TIS is the simplest, which is why I think they used it.  TIS is
> actually a simple interface specification, it supports discovery over
> anything, but the discovery implemented in standard guest drivers is
> over ACPI, OF and PNP.  If you want more esoteric discovery methods, we
> also support i2c.  However, that latter is really only for embedded.  I
> think QEMU chose TIS because it works seamlessly on both Linux and
> Windows guests.
> 
> 
>> All of this is what I would like to avoid by using a virtio driver.
> 
> How? Discovery is the part that you have to do, whether it's using
> emulated physical mechanisms or virtual bus discovery.

What I mean is that we avoid the need for *TPM-specific virtual hardware
emulation* for discovery by using a virtio driver.

We avoid implementing TIS or any other TPM-specific interface mechanism,
and we avoid implementing ACPI or OF or PNP or I2C or any other
additional bus necessitated by the existing set of Linux TPM drivers
which we wouldn't otherwise need.

The virtio driver performs discovery via virtio, which crosvm implements
already for all of its supported devices. This substantially reduces the
amount of TPM-specific code compared to your suggestions, and lowers the
barrier to entry for implementing TPM support in other hypervisors which
I hope we agree is beneficial.
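
(To make the discovery point concrete: on the guest side, virtio discovery
reduces to a virtio_driver registration along these lines. This is a minimal
sketch, not the actual patch; the device ID is the provisional value this
patch assumes, which is not yet reserved, and the probe/remove bodies are
placeholders.)

#include <linux/module.h>
#include <linux/virtio.h>

#define VIRTIO_ID_TPM 31	/* provisional ID assumed by this patch */

static const struct virtio_device_id id_table[] = {
	{ VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
	{ 0 },
};
MODULE_DEVICE_TABLE(virtio, id_table);

static int tpm_virtio_probe(struct virtio_device *vdev)
{
	/* Set up the single virtqueue and register the TPM chip here. */
	return 0;
}

static void tpm_virtio_remove(struct virtio_device *vdev)
{
	/* Tear down the virtqueue and unregister the chip. */
}

static struct virtio_driver tpm_virtio_driver = {
	.driver.name	= KBUILD_MODNAME,
	.driver.owner	= THIS_MODULE,
	.id_table	= id_table,
	.probe		= tpm_virtio_probe,
	.remove		= tpm_virtio_remove,
};
module_virtio_driver(tpm_virtio_driver);
MODULE_LICENSE("GPL");

The virtio core matches the device ID and calls probe(), so the driver
carries no ACPI, OF, PNP or I2C discovery code of its own.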

Turning this around a different way, suppose that there already was a
virtio TPM driver in the kernel. For me as a hypervisor implementer,
what advantages do you see that would make me decide to implement
TPM-specific virtual hardware emulation in the form of TIS rather than
simply leveraging a virtio driver like for other virtual devices?


> If you want to make this more concrete: I once wrote a pure socsim
> packet TPM driver:
> 
> https://patchwork.ozlabs.org/patch/712465/
> 
> Since you just point it at the network socket, it does no discovery at
> all and works in any Linux environment that has net.  I actually still
> use it because a socsim TPM is easier to debug from the outside. 
> However it was 230 lines.  Your device is 460 so that means about half
> your driver is actually about discovery.

It looks like all the comments in the virtio driver account for the
difference, not necessarily discovery.

But to put this in perspective, what we save is the 1000+ lines I see in
QEMU dedicated to TIS. Without a virtio driver (or socsim, or something
else that avoids TPM-specific hardware emulation for device discovery),
QEMU and crosvm and other hypervisors need to reproduce a TIS
implementation. Conceptually this is simple, but it is still quite a
substantial barrier compared to not needing any TPM-specific discovery.


> The only reasons I can see to use a virtual bus is either because its
> way more efficient (the storage/network use case) or because you've
> stripped down the hypervisor so far that it's incapable of emulating
> any physical device (the firecracker use case).

Crosvm does fall under the Firecracker use case, I believe.


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-24 22:12               ` David Tolnay
@ 2019-02-25  9:55                 ` Jarkko Sakkinen
  2019-02-25 15:36                 ` James Bottomley
  1 sibling, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-25  9:55 UTC (permalink / raw)
  To: David Tolnay
  Cc: James Bottomley, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Sun, Feb 24, 2019 at 02:12:03PM -0800, David Tolnay wrote:
> We avoid implementing TIS or any other TPM-specific interface mechanism,
> and we avoid implementing ACPI or OF or PNP or I2C or any other
> additional bus necessitated by the existing set of Linux TPM drivers
> which we wouldn't otherwise need.
> 
> The virtio driver performs discovery via virtio, which crosvm implements
> already for all of its supported devices. This substantially reduces the
> amount of TPM-specific code compared to your suggestions, and lowers the
> barrier to entry for implementing TPM support in other hypervisors which
> I hope we agree is beneficial.

I don't necessarily agree, because it adds to our codebase.

> Turning this around a different way, suppose that there already was a
> virtio TPM driver in the kernel. For me as a hypervisor implementer,
> what advantages do you see that would make me decide to implement
> TPM-specific virtual hardware emulation in the form of TIS rather than
> simply leveraging a virtio driver like for other virtual devices?

Well, the way QEMU does it is already a proven model. We don't need
anything new in the kernel. You can take ideas from there on how to
implement the user space.

> QEMU dedicated to TIS. Without a virtio driver (or socsim, or something
> else that avoids TPM-specific hardware emulation for device discovery),
> QEMU and crosvm and other hypervisors need to reproduce a TIS
> implementation. Conceptually this is simple but it is still a quite
> substantial barrier compared to not needing any TPM-specific discovery.

If we are speaking about user space code, 1000+ lines is nothing. Adding
more complexity to the kernel is much more expensive.

/Jarkko


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-22  5:51   ` Michael S. Tsirkin
  (?)
  (?)
@ 2019-02-25  9:58   ` Jarkko Sakkinen
  -1 siblings, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-25  9:58 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: David Tolnay, Peter Huewe, Jason Gunthorpe, linux-integrity,
	Jason Wang, virtualization, dgreid, apronin

On Fri, Feb 22, 2019 at 12:51:18AM -0500, Michael S. Tsirkin wrote:
> Maintainer entry?

Even the tags are completely wrong (e.g. a Tested-by from the author),
and you are suggesting a maintainer entry for this?

Everything has been done in the wrong order, e.g. there was no RFC tag
even though this is a fairly intrusive change.

To me it looks like we are not even near landing anything upstream, so
let's not bring up this kind of proposal at this point.

/Jarkko


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-24 22:12               ` David Tolnay
  2019-02-25  9:55                 ` Jarkko Sakkinen
@ 2019-02-25 15:36                 ` James Bottomley
  2019-02-25 19:17                   ` Matthew Garrett
  1 sibling, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-25 15:36 UTC (permalink / raw)
  To: David Tolnay
  Cc: Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe, linux-integrity,
	Michael S. Tsirkin, Jason Wang, virtualization, dgreid, apronin

On Sun, 2019-02-24 at 14:12 -0800, David Tolnay wrote:
> On 2/24/19 8:30 AM, James Bottomley wrote:
> > QEMU implements a virtual hardware emulation for discovery, but
> > once discovered all the packet communication is handed off to the
> > vTPM socket.
> > 
> > The virtual hardware emulation can be anything we have a driver
> > for. TIS is the simplest, which is why I think they used it.  TIS
> > is actually a simple interface specification, it supports discovery
> > over anything, but the discovery implemented in standard guest
> > drivers is over ACPI, OF and PNP.  If you want more esoteric
> > discovery methods, we also support i2c.  However, that latter is
> > really only for embedded.  I think QEMU chose TIS because it works
> > seamlessly on both Linux and Windows guests.
> > 
> > 
> > > All of this is what I would like to avoid by using a virtio
> > > driver.
> > 
> > How? Discovery is the part that you have to do, whether it's using
> > emulated physical mechanisms or virtual bus discovery.
> 
> What I mean is that we avoid the need for *TPM-specific virtual
> hardware emulation* for discovery by using a virtio driver.
> 
> We avoid implementing TIS or any other TPM-specific interface
> mechanism, and we avoid implementing ACPI or OF or PNP or I2C or any
> other additional bus necessitated by the existing set of Linux TPM
> drivers which we wouldn't otherwise need.
> 
> The virtio driver performs discovery via virtio, which crosvm
> implements already for all of its supported devices. This
> substantially reduces the amount of TPM-specific code compared to
> your suggestions, and lowers the barrier to entry for implementing
> TPM support in other hypervisors which I hope we agree is beneficial.

Well, that's somewhat misleading: the reason we already have two
hypervisor-specific drivers is that every hypervisor has a different
virtual discovery mechanism. You didn't find the other two hypervisor
drivers remotely useful, so why would another hypervisor find yours
useful?

> Turning this around a different way, suppose that there already was a
> virtio TPM driver in the kernel.

There already are two paravirt TPM drivers: xen-tpmfront and
tpm_ibmvtpm.

The reason we have so many is that every hypervisor implements a
different virtual bus mechanism.  So if we add this for you, all we need
is an ESX driver to have the full complement, or at least for the four
main hypervisors; there are probably a huge number of minor ones, like
the Parallels hypervisor, VirtualBox, etc. ... by the time we're done
we'll have ten or so paravirt TPM drivers.

>  For me as a hypervisor implementer, what advantages do you see that
> would make me decide to implement TPM-specific virtual hardware
> emulation in the form of TIS rather than simply leveraging a virtio
> driver like for other virtual devices?

So your argument is that for every device we have in the Linux kernel,
we should have the N hypervisor paravirt variants for the same thing? 
I assure you that's not going to fly because paravirt drivers would
then outnumber real drivers by an order of magnitude.

> > If you want to make this more concrete: I once wrote a pure socsim
> > packet TPM driver:
> > 
> > https://patchwork.ozlabs.org/patch/712465/
> > 
> > Since you just point it at the network socket, it does no discovery
> > at all and works in any Linux environment that has net.  I actually
> > still use it because a socsim TPM is easier to debug from the
> > outside. However it was 230 lines.  Your device is 460 so that
> > means about half your driver is actually about discovery.
> 
> It looks like all the comments in the virtio driver account for the
> difference, not necessarily discovery.
> 
> But to put this in perspective, what we save is the 1000+ lines I see
> in QEMU dedicated to TIS. Without a virtio driver (or socsim, or
> something else that avoids TPM-specific hardware emulation for device
> discovery), QEMU and crosvm and other hypervisors need to reproduce a
> TIS implementation. Conceptually this is simple but it is still a
> quite substantial barrier compared to not needing any TPM-specific
> discovery.

Paravirt drivers are something we add when there's a pragmatic use
case.  Paravirt is not a panacea because it costs us in terms of
additional maintenance burden.  You also still need a receiver in the
hypervisor even for a paravirt driver.  We can argue about the amount
of code you need for the receiver, but without adding some code another
hypervisor can't make use of your paravirt driver.  And, of course, if
they use a different virtual bus implementation, as every hypervisor
seems to, it's quite an enormous amount of code to emulate your bus
implementation.

> > The only reasons I can see to use a virtual bus is either because
> > its way more efficient (the storage/network use case) or because
> > you've stripped down the hypervisor so far that it's incapable of
> > emulating any physical device (the firecracker use case).
> 
> Crosvm does fall under the Firecracker use case, I believe.

Well you just added USB emulation:

https://www.aboutchromebooks.com/news/project-crostini-usb-support-linux-chrome-os/

You didn't tell the kernel USB subsystem to add virtio USB drivers ...

What I've been fishing for over several emails is the pragmatic use case
... do you have one?

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 15:36                 ` James Bottomley
@ 2019-02-25 19:17                   ` Matthew Garrett
  2019-02-25 19:54                     ` Jarkko Sakkinen
  2019-02-25 20:20                     ` James Bottomley
  0 siblings, 2 replies; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 19:17 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 7:36 AM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> > The virtio driver performs discovery via virtio, which crosvm
> > implements already for all of its supported devices. This
> > substantially reduces the amount of TPM-specific code compared to
> > your suggestions, and lowers the barrier to entry for implementing
> > TPM support in other hypervisors which I hope we agree is beneficial.
>
> Well, that's somewhat misleading:  The reason we already have two
> hypervisor specific drivers already is because every hypervisor has a
> different  virtual discovery mechanism. You didn't find the other two
> hypervisor drivers remotely useful, so why would another hypervisor
> find yours useful?

The existing hypervisor drivers expose hypervisor-specific details.
This proposed driver provides an abstract interface that is usable by
other hypervisors. It allows building a VM that exposes TPM
functionality without requiring additional hardware emulation,
reducing the hypervisor attack surface.

> >  For me as a hypervisor implementer, what advantages do you see that
> > would make me decide to implement TPM-specific virtual hardware
> > emulation in the form of TIS rather than simply leveraging a virtio
> > driver like for other virtual devices?
>
> So your argument is that for every device we have in the Linux kernel,
> we should have the N hypervisor paravirt variants for the same thing?
> I assure you that's not going to fly because paravirt drivers would
> then outnumber real drivers by an order of magnitude.

Well, no - in general there's no need to have more than one virtio
driver for any /class/ of hardware. For various unfortunate accidents
of history we've ended up with multiple cases where we have
hypervisor-specific drivers. Using the more generic virtio
infrastructure reduces the need for that, since any hypervisor should
be able to implement the backend (eg, in this case it'd be very easy
to add support for this driver to qemu, which would allow the use of
TPMs without needing to enable a whole bunch of additional qemu
features). This isn't a discussion we'd be having if we'd pushed back
more strongly against hypervisor-specific solutions in the past.

(While I work for Google, I'm not involved in crosvm development and
this shouldn't be interpreted as the position of anyone on that team)


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 19:17                   ` Matthew Garrett
@ 2019-02-25 19:54                     ` Jarkko Sakkinen
  2019-02-25 20:20                     ` James Bottomley
  1 sibling, 0 replies; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-25 19:54 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: James Bottomley, David Tolnay, Peter Huewe, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 11:17:20AM -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 7:36 AM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > > The virtio driver performs discovery via virtio, which crosvm
> > > implements already for all of its supported devices. This
> > > substantially reduces the amount of TPM-specific code compared to
> > > your suggestions, and lowers the barrier to entry for implementing
> > > TPM support in other hypervisors which I hope we agree is beneficial.
> >
> > Well, that's somewhat misleading:  The reason we already have two
> > hypervisor specific drivers already is because every hypervisor has a
> > different  virtual discovery mechanism. You didn't find the other two
> > hypervisor drivers remotely useful, so why would another hypervisor
> > find yours useful?
> 
> The existing hypervisor drivers expose hypervisor-specific details.
> This proposed driver provides an abstract interface that is usable by
> other hypervisors. It allows building a VM that exposes TPM
> functionality without requiring additional hardware emulation,
> reducing the hypervisor attack surface.
> 
> > >  For me as a hypervisor implementer, what advantages do you see that
> > > would make me decide to implement TPM-specific virtual hardware
> > > emulation in the form of TIS rather than simply leveraging a virtio
> > > driver like for other virtual devices?
> >
> > So your argument is that for every device we have in the Linux kernel,
> > we should have the N hypervisor paravirt variants for the same thing?
> > I assure you that's not going to fly because paravirt drivers would
> > then outnumber real drivers by an order of magnitude.
> 
> Well, no - in general there's no need to have more than one virtio
> driver for any /class/ of hardware. For various unfortunate accidents
> of history we've ended up with multiple cases where we have
> hypervisor-specific drivers. Using the more generic virtio
> infrastructure reduces the need for that, since any hypervisor should
> be able to implement the backend (eg, in this case it'd be very easy
> to add support for this driver to qemu, which would allow the use of
> TPMs without needing to enable a whole bunch of additional qemu
> features). This isn't a discussion we'd be having if we'd pushed back
> more strongly against hypervisor-specific solutions in the past.
> 
> (While I work for Google, I'm not involved in crosvm development and
> this shouldn't be interpreted as the position of anyone on that team)

OK, this clears up a lot of misconceptions, thank you. The commit message
itself fails to explain almost all of the details that you just wrote.

The commit is also pretty badly formed, so I grew a lot of initial
mistrust, which I think is perfectly understandable.  Putting links to
random source files in a commit message is a sign of unwanted rush (i.e.
time has not been taken to write down in plain English what has been
done).

Once the commit is in better condition and there is some documentation on
how the stack is implemented from end to end, we can better evaluate this.

Some instructions on how to compile and run crosvm would also be nice.

/Jarkko


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 19:17                   ` Matthew Garrett
  2019-02-25 19:54                     ` Jarkko Sakkinen
@ 2019-02-25 20:20                     ` James Bottomley
  2019-02-25 21:00                       ` Matthew Garrett
  2019-02-25 21:05                       ` Jarkko Sakkinen
  1 sibling, 2 replies; 57+ messages in thread
From: James Bottomley @ 2019-02-25 20:20 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 11:17 -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 7:36 AM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > > The virtio driver performs discovery via virtio, which crosvm
> > > implements already for all of its supported devices. This
> > > substantially reduces the amount of TPM-specific code compared to
> > > your suggestions, and lowers the barrier to entry for
> > > implementing TPM support in other hypervisors which I hope we
> > > agree is beneficial.
> > 
> > Well, that's somewhat misleading:  The reason we already have two
> > hypervisor specific drivers already is because every hypervisor has
> > a different  virtual discovery mechanism. You didn't find the other
> > two hypervisor drivers remotely useful, so why would another
> > hypervisor find yours useful?
>  
> The existing hypervisor drivers expose hypervisor-specific details.
> This proposed driver provides an abstract interface that is usable by
> other hypervisors. It allows building a VM that exposes TPM
> functionality without requiring additional hardware emulation,
> reducing the hypervisor attack surface.

Well, that depends on whether you think a virtio bus is an abstract
concept or a hypervisor-specific detail.  There are currently four
major hypervisors: xen, kvm, hyper-v and ESX.  Of those, only one
implements virtio: kvm.  I agree virtio is a standard, and certainly a
slew of minor hypervisors implement it because they need paravirt
support on Linux so they piggyback off kvm, but I don't see any of the
other major hypervisors jumping on the bandwagon.

I certainly agree our lives would be easier if all the major hypervisor
vendors would just agree on a single paravirt driver standard.

> > >  For me as a hypervisor implementer, what advantages do you see
> > > that would make me decide to implement TPM-specific virtual
> > > hardware emulation in the form of TIS rather than simply
> > > leveraging a virtio driver like for other virtual devices?
> > 
> > So your argument is that for every device we have in the Linux
> > kernel, we should have the N hypervisor paravirt variants for the
> > same thing? I assure you that's not going to fly because paravirt
> > drivers would then outnumber real drivers by an order of magnitude.
> 
> Well, no - in general there's no need to have more than one virtio
> driver for any /class/ of hardware. For various unfortunate accidents
> of history we've ended up with multiple cases where we have
> hypervisor-specific drivers.

Fully agree, that's why I'm doing so now.

>  Using the more generic virtio
> infrastructure reduces the need for that, since any hypervisor should
> be able to implement the backend (eg, in this case it'd be very easy
> to add support for this driver to qemu,

I certainly agree there ... is there a plan for this?

>  which would allow the use of TPMs without needing to enable a whole
> bunch of additional qemu features). This isn't a discussion we'd be
> having if we'd pushed back more strongly against hypervisor-specific
> solutions in the past.

I'm still looking for the pragmatic use case.  I think yours is attack
surface reduction, because the virtio discovery and operation is less
code and therefore more secure than physical hardware discovery and
operation?  I'm not entirely sure I buy that because the TPM
communication interface is pretty simple and it's fairly deep down in
the kernel internal stack making it difficult to exploit.

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 20:20                     ` James Bottomley
@ 2019-02-25 21:00                       ` Matthew Garrett
  2019-02-25 21:02                         ` Matthew Garrett
  2019-02-25 22:14                         ` James Bottomley
  2019-02-25 21:05                       ` Jarkko Sakkinen
  1 sibling, 2 replies; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 21:00 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 12:20 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Mon, 2019-02-25 at 11:17 -0800, Matthew Garrett wrote:
> > The existing hypervisor drivers expose hypervisor-specific details.
> > This proposed driver provides an abstract interface that is usable by
> > other hypervisors. It allows building a VM that exposes TPM
> > functionality without requiring additional hardware emulation,
> > reducing the hypervisor attack surface.
>
> Well, that depends whether you think a virtio bus is an abstract
> concept or a hypervisor specific detail.  There are currently four
> major hypervisors: xen, kvm, hyper-v and ESX.  Of those, only one
> implements virtio: kvm.  I agree virtio is a standard and certainly a
> slew of minor hypervisors implement it because they need paravirt
> support on Linux so they piggyback off kvm, but I don't see any of the
> other major hypervisors jumping on the bandwagon.

Is there any technical issue preventing virtio working with Xen?
Running HVM guests under qemu ought to allow virtio to work.

> > Well, no - in general there's no need to have more than one virtio
> > driver for any /class/ of hardware. For various unfortunate accidents
> > of history we've ended up with multiple cases where we have
> > hypervisor-specific drivers.
>
> Fully agree, that's why I'm doing so now.
>
> >  Using the more generic virtio
> > infrastructure reduces the need for that, since any hypervisor should
> > be able to implement the backend (eg, in this case it'd be very easy
> > to add support for this driver to qemu,
>
> I certainly agree there ... is there a plan for this?

I don't know, but I can see the value in making testing easier.

> >  which would allow the use of TPMs without needing to enable a whole
> > bunch of additional qemu features). This isn't a discussion we'd be
> > having if we'd pushed back more strongly against hypervisor-specific
> > solutions in the past.
>
> I'm still looking for the pragmatic use case.  I think yours is attack
> surface reduction, because the virtio discovery and operation is less
> code and therefore more secure than physical hardware discovery and
> operation?  I'm not entirely sure I buy that because the TPM
> communication interface is pretty simple and it's fairly deep down in
> the kernel internal stack making it difficult to exploit.

Being able to get away without any LPC support code at all seems like
a win, as does not having any ACPI or DeviceTree parsing code.
Injecting the hardware information via the kernel command line isn't
impossible, but it's not an attractive solution.


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 21:00                       ` Matthew Garrett
@ 2019-02-25 21:02                         ` Matthew Garrett
  2019-02-25 22:14                         ` James Bottomley
  1 sibling, 0 replies; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 21:02 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 1:00 PM Matthew Garrett <mjg59@google.com> wrote:
> > I'm still looking for the pragmatic use case.  I think yours is attack
> > surface reduction, because the virtio discovery and operation is less
> > code and therefore more secure than physical hardware discovery and
> > operation?  I'm not entirely sure I buy that because the TPM
> > communication interface is pretty simple and it's fairly deep down in
> > the kernel internal stack making it difficult to exploit.
>
> Being able to get away without any LPC support code at all seems like
> a win, as does not having any ACPI or DeviceTree parsing code.
> Injecting the hardware information via the kernel command line isn't
> impossible, but it's not an attractive solution.

Oh, but to be clear - I think the bigger win is having a reduced attack
surface in the *hypervisor*, not the kernel. Just having enough
support to pass commands through to the vTPM is a lot easier than
emulating the whole hardware interface to the TPM (and see the number
of bugs in things like qemu's floppy drive emulation as an example of
how hard this can be to get right).
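
(As a sketch of what "just enough support" means on the hypervisor side:
conceptually, the device model reduces to a single forward-and-reply step,
along the lines below. vtpm_execute() is a hypothetical stand-in for
whatever software TPM the VMM links against, not a real API, and the stub
body just fabricates a success header.)

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical software-TPM entry point; a real backend would run the
 * command.  The stub returns a 10-byte TPM2 success header
 * (tag 0x8001, responseSize 10, responseCode 0). */
static size_t vtpm_execute(const uint8_t *cmd, size_t cmd_len,
			   uint8_t *rsp, size_t rsp_max)
{
	static const uint8_t ok[10] = { 0x80, 0x01, 0, 0, 0, 0x0a, 0, 0, 0, 0 };

	(void)cmd; (void)cmd_len;
	if (rsp_max < sizeof(ok))
		return 0;
	memcpy(rsp, ok, sizeof(ok));
	return sizeof(ok);
}

/* Called when the guest places a command buffer on the virtqueue: the
 * whole "device model" is forwarding bytes and returning the reply, with
 * no localities, status registers or interrupts to emulate. */
static size_t handle_tpm_request(const uint8_t *cmd, size_t cmd_len,
				 uint8_t *rsp, size_t rsp_max)
{
	return vtpm_execute(cmd, cmd_len, rsp, rsp_max);
}

int main(void)
{
	/* TPM2_Startup(CLEAR): tag 0x8001, size 12, cc 0x144, su 0x0000. */
	const uint8_t startup[] = { 0x80, 0x01, 0x00, 0x00, 0x00, 0x0c,
				    0x00, 0x00, 0x01, 0x44, 0x00, 0x00 };
	uint8_t rsp[4096];
	size_t n = handle_tpm_request(startup, sizeof(startup), rsp, sizeof(rsp));

	printf("%zu response bytes\n", n);
	return 0;
}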


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 20:20                     ` James Bottomley
  2019-02-25 21:00                       ` Matthew Garrett
@ 2019-02-25 21:05                       ` Jarkko Sakkinen
  2019-02-25 22:24                         ` James Bottomley
  1 sibling, 1 reply; 57+ messages in thread
From: Jarkko Sakkinen @ 2019-02-25 21:05 UTC (permalink / raw)
  To: James Bottomley
  Cc: Matthew Garrett, David Tolnay, Peter Huewe, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 12:20:43PM -0800, James Bottomley wrote:
> On Mon, 2019-02-25 at 11:17 -0800, Matthew Garrett wrote:
> > On Mon, Feb 25, 2019 at 7:36 AM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > > > The virtio driver performs discovery via virtio, which crosvm
> > > > implements already for all of its supported devices. This
> > > > substantially reduces the amount of TPM-specific code compared to
> > > > your suggestions, and lowers the barrier to entry for
> > > > implementing TPM support in other hypervisors which I hope we
> > > > agree is beneficial.
> > > 
> > > Well, that's somewhat misleading:  The reason we already have two
> > > hypervisor specific drivers already is because every hypervisor has
> > > a different  virtual discovery mechanism. You didn't find the other
> > > two hypervisor drivers remotely useful, so why would another
> > > hypervisor find yours useful?
> >  
> > The existing hypervisor drivers expose hypervisor-specific details.
> > This proposed driver provides an abstract interface that is usable by
> > other hypervisors. It allows building a VM that exposes TPM
> > functionality without requiring additional hardware emulation,
> > reducing the hypervisor attack surface.
> 
> Well, that depends whether you think a virtio bus is an abstract
> concept or a hypervisor specific detail.  There are currently four
> major hypervisors: xen, kvm, hyper-v and ESX.  Of those, only one
> implements virtio: kvm.  I agree virtio is a standard and certainly a
> slew of minor hypervisors implement it because they need paravirt
> support on Linux so they piggyback off kvm, but I don't see any of the
> other major hypervisors jumping on the bandwagon.
> 
> I certainly agree our lives would be easier if all the major hypervisor
> vendors would just agree a single paravirt driver standard.

I think that a Windows hypervisor (Hyper-V) and a closed hypervisor
(VMWare) are out of scope for this discussion. I think it is a good
thing that there exists a fully open alternative to closed solutions such
as VMBus. It is not only good for Linux but also for other open source
operating systems (*BSD, Fuchsia, etc.). I won't disregard virtio-TPM
based on that.

The main interest lies in these:

- QEMU
- KVM
- Xen

> >  Using the more generic virtio
> > infrastructure reduces the need for that, since any hypervisor should
> > be able to implement the backend (eg, in this case it'd be very easy
> > to add support for this driver to qemu,
> 
> I certainly agree there ... is there a plan for this?

I don't *necessarily* require QEMU to support this at the implementation
level in order to accept the change. What I do require is buy-in from
the QEMU and Xen communities that this is the right path.

/Jarkko


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 21:00                       ` Matthew Garrett
  2019-02-25 21:02                         ` Matthew Garrett
@ 2019-02-25 22:14                         ` James Bottomley
  2019-02-25 22:24                           ` Matthew Garrett
  1 sibling, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-25 22:14 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 13:00 -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 12:20 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > On Mon, 2019-02-25 at 11:17 -0800, Matthew Garrett wrote:
> > > The existing hypervisor drivers expose hypervisor-specific
> > > details.  This proposed driver provides an abstract interface
> > > that is usable by other hypervisors. It allows building a VM that
> > > exposes TPM functionality without requiring additional hardware
> > > emulation, reducing the hypervisor attack surface.
> > 
> > Well, that depends whether you think a virtio bus is an abstract
> > concept or a hypervisor specific detail.  There are currently four
> > major hypervisors: xen, kvm, hyper-v and ESX.  Of those, only one
> > implements virtio: kvm.  I agree virtio is a standard and certainly
> > a slew of minor hypervisors implement it because they need paravirt
> > support on Linux so they piggyback off kvm, but I don't see any of
> > the other major hypervisors jumping on the bandwagon.
> 
> Is there any technical issue preventing virtio working with Xen?
> Running HVM guests under qemu ought to allow virtio to work.

Other than developer resistance because xenbus was "first", not really,
I think.  There was even a GSOC student who tried it:

https://wiki.xenproject.org/wiki/Virtio_On_Xen

> > > Well, no - in general there's no need to have more than one
> > > virtio driver for any /class/ of hardware. For various
> > > unfortunate accidents of history we've ended up with multiple
> > > cases where we have hypervisor-specific drivers.
> > 
> > Fully agree, that's why I'm doing so now.
> > 
> > >  Using the more generic virtio infrastructure reduces the need
> > > for that, since any hypervisor should be able to implement the
> > > backend (eg, in this case it'd be very easy to add support for
> > > this driver to qemu,
> > 
> > I certainly agree there ... is there a plan for this?
> 
> I don't know, but I can see the value in making testing easier.

Having at least a commitment to qemu support does make the case
stronger.

> > >  which would allow the use of TPMs without needing to enable a
> > > whole bunch of additional qemu features). This isn't a discussion
> > > we'd be having if we'd pushed back more strongly against
> > > hypervisor-specific solutions in the past.
> > 
> > I'm still looking for the pragmatic use case.  I think yours is
> > attack surface reduction, because the virtio discovery and
> > operation is less code and therefore more secure than physical
> > hardware discovery and operation?  I'm not entirely sure I buy that
> > because the TPM communication interface is pretty simple and it's
> > fairly deep down in the kernel internal stack making it difficult
> > to exploit.
> 
> Being able to get away without any LPC support code at all seems like
> a win, as does not having any ACPI or DeviceTree parsing code.
> Injecting the hardware information via the kernel command line isn't
> impossible, but it's not an attractive solution.

Heh, but isn't that exactly what crosvm did for usb:

https://chromium.googlesource.com/chromiumos/overlays/chromiumos-overlay/+/add5a4c3751778e5380f00b2ee6cebcb6bda48fc

Effectively it bypasses the hypervisor altogether and simply makes a
direct connection to the host devices.  The TPM could actually work in
exactly the same way, except you'd have to use the socsim IP connection
(which all TSSs support) rather than a file descriptor.

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 22:14                         ` James Bottomley
@ 2019-02-25 22:24                           ` Matthew Garrett
  2019-02-25 22:32                             ` James Bottomley
  0 siblings, 1 reply; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 22:24 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 2:14 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> > Being able to get away without any LPC support code at all seems like
> > a win, as does not having any ACPI or DeviceTree parsing code.
> > Injecting the hardware information via the kernel command line isn't
> > impossible, but it's not an attractive solution.
>
> Heh, but isn't that exactly what crosvm did for usb:
>
> https://chromium.googlesource.com/chromiumos/overlays/chromiumos-overlay/+/add5a4c3751778e5380f00b2ee6cebcb6bda48fc

My understanding is that the crosvm USB code is intended to allow
arbitrary USB hardware to be passed through to the guest - doing this
via virtio sounds complicated (you'd need a virtio driver that covered
every USB class, and how would you manage that for devices that are
handled in userland at the moment), whereas the virtio TPM support is
intended to pass through a software TPM rather than grant access to
the host TPM.

> Effectively it bypasses the hypervisor altogether and simply makes a
> direct connection to the host devices.  The TPM could actually work in
> exactly the same way, except you'd have to use the socsim IP connection
> (which all TSSs support) rather than a file descriptor.

I don't really follow - how would in-kernel TPM features work then?


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 21:05                       ` Jarkko Sakkinen
@ 2019-02-25 22:24                         ` James Bottomley
  0 siblings, 0 replies; 57+ messages in thread
From: James Bottomley @ 2019-02-25 22:24 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: Matthew Garrett, David Tolnay, Peter Huewe, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 23:05 +0200, Jarkko Sakkinen wrote:
> On Mon, Feb 25, 2019 at 12:20:43PM -0800, James Bottomley wrote:
> > On Mon, 2019-02-25 at 11:17 -0800, Matthew Garrett wrote:
> > > On Mon, Feb 25, 2019 at 7:36 AM James Bottomley
> > > <James.Bottomley@hansenpartnership.com> wrote:
> > > > > The virtio driver performs discovery via virtio, which crosvm
> > > > > implements already for all of its supported devices. This
> > > > > substantially reduces the amount of TPM-specific code
> > > > > compared to your suggestions, and lowers the barrier to entry
> > > > > for implementing TPM support in other hypervisors which I
> > > > > hope we agree is beneficial.
> > > > 
> > > > Well, that's somewhat misleading:  The reason we already have
> > > > two hypervisor specific drivers already is because every
> > > > hypervisor has a different  virtual discovery mechanism. You
> > > > didn't find the other two hypervisor drivers remotely useful,
> > > > so why would another hypervisor find yours useful?
> > > 
> > >  
> > > The existing hypervisor drivers expose hypervisor-specific
> > > details. This proposed driver provides an abstract interface that
> > > is usable by other hypervisors. It allows building a VM that
> > > exposes TPM functionality without requiring additional hardware
> > > emulation, reducing the hypervisor attack surface.
> > 
> > Well, that depends whether you think a virtio bus is an abstract
> > concept or a hypervisor specific detail.  There are currently four
> > major hypervisors: xen, kvm, hyper-v and ESX.  Of those, only one
> > implements virtio: kvm.  I agree virtio is a standard and certainly
> > a slew of minor hypervisors implement it because they need paravirt
> > support on Linux so they piggyback off kvm, but I don't see any of
> > the other major hypervisors jumping on the bandwagon.
> > 
> > I certainly agree our lives would be easier if all the major
> > hypervisor vendors would just agree a single paravirt driver
> > standard.
> 
> I think that a Windows hypervisor (Hyper-V) and a closed hypervisor
> (VMWare) are out of context for this discussion.

But why?  We already have both in various Linux subsystems; for
instance SCSI has storvsc (hyper-v paravirt storage driver) and
vmw_pvscsi (VMWare paravirt storage driver).  The only real requirement
is a willingness to open source the driver and publish the
communication spec.  If another paravirt TPM driver is a good idea, why
wouldn't we allow these guys to play in our sandpit too (under the
right open source conditions, of course)?

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 22:24                           ` Matthew Garrett
@ 2019-02-25 22:32                             ` James Bottomley
  2019-02-25 22:43                               ` Matthew Garrett
  0 siblings, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-25 22:32 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 14:24 -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 2:14 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > > Being able to get away without any LPC support code at all seems
> > > like
> > > a win, as does not having any ACPI or DeviceTree parsing code.
> > > Injecting the hardware information via the kernel command line
> > > isn't
> > > impossible, but it's not an attractive solution.
> > 
> > Heh, but isn't that exactly what crosvm did for usb:
> > 
> > https://chromium.googlesource.com/chromiumos/overlays/chromiumos-ov
> > erlay/+/add5a4c3751778e5380f00b2ee6cebcb6bda48fc
> 
> My understanding is that the crosvm USB code is intended to allow
> arbitrary USB hardware to be passed through to the guest - doing this
> via virtio sounds complicated (you'd need a virtio driver that
> covered every USB class, and how would you manage that for devices
> that are handled in userland at the moment),

I think you'd need a virtio equivalent of the host driver, say
xhci_virtio ... you could still use the in-kernel USB class drivers.

>  whereas the virtio TPM support is intended to pass through a
> software TPM rather than grant access to the host TPM.
> 
> > Effectively it bypasses the hypervisor altogether and simply makes
> > a direct connection to the host devices.  The TPM could actually
> > work in exactly the same way, except you'd have to use the socsim
> > IP connection (which all TSSs support) rather than a file
> > descriptor.
> 
> I don't really follow - how would in-kernel TPM features work then?

If you do it at the TSS layer, then, of course, the kernel wouldn't
participate.  If you used the proposed in-kernel socsim driver, I
suppose it could ... not that I'm advocating this, I'm saying if you
want to minimise hypervisor code for attack surface reduction, this
would be the way to do it because this solution requires no in-
hypervisor code at all.

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 22:32                             ` James Bottomley
@ 2019-02-25 22:43                               ` Matthew Garrett
  2019-02-25 22:51                                 ` James Bottomley
  0 siblings, 1 reply; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 22:43 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 2:32 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Mon, 2019-02-25 at 14:24 -0800, Matthew Garrett wrote:
> > My understanding is that the crosvm USB code is intended to allow
> > arbitrary USB hardware to be passed through to the guest - doing this
> > via virtio sounds complicated (you'd need a virtio driver that
> > covered every USB class, and how would you manage that for devices
> > that are handled in userland at the moment),
>
> I think you'd need a virtio equivalent of the host driver, say
> xhci_virtio ... you could still use the in-kernel USB class drivers

Mm. I honestly don't know enough about the desired use case for USB to
be able to provide meaningful input here.

> > > Effectively it bypasses the hypervisor altogether and simply makes
> > > a direct connection to the host devices.  The TPM could actually
> > > work in exactly the same way, except you'd have to use the socsim
> > > IP connection (which all TSSs support) rather than a file
> > > descriptor.
> >
> > I don't really follow - how would in-kernel TPM features work then?
>
> If you do it at the TSS layer, then, of course, the kernel wouldn't
> participate.  If you used the proposed in-kernel socsim driver, I
> suppose it could ... not that I'm advocating this, I'm saying if you
> want to minimise hypervisor code for attack surface reduction, this
> would be the way to do it because this solution requires no in-
> hypervisor code at all.

You still need a transport mechanism through the hypervisor to
communicate with the host - what would you be using in that case
instead of virtio?


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 22:43                               ` Matthew Garrett
@ 2019-02-25 22:51                                 ` James Bottomley
  2019-02-25 23:02                                   ` Matthew Garrett
  0 siblings, 1 reply; 57+ messages in thread
From: James Bottomley @ 2019-02-25 22:51 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 14:43 -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 2:32 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > On Mon, 2019-02-25 at 14:24 -0800, Matthew Garrett wrote:
> > > My understanding is that the crosvm USB code is intended to allow
> > > arbitrary USB hardware to be passed through to the guest - doing
> > > this
> > > via virtio sounds complicated (you'd need a virtio driver that
> > > covered every USB class, and how would you manage that for
> > > devices
> > > that are handled in userland at the moment),
> > 
> > I think you'd need a virtio equivalent of the host driver, say
> > xhci_virtio ... you could still use the in-kernel USB class drivers
> 
> Mm. I honestly don't know enough about the desired use case for USB
> to be able to provide meaningful input here.
> 
> > > > Effectively it bypasses the hypervisor altogether and simply
> > > > makes a direct connection to the host devices.  The TPM could
> > > > actually work in exactly the same way, except you'd have to use
> > > > the socsim IP connection (which all TSSs support) rather than a
> > > > file descriptor.
> > > 
> > > I don't really follow - how would in-kernel TPM features work
> > > then?
> > 
> > If you do it at the TSS layer, then, of course, the kernel wouldn't
> > participate.  If you used the proposed in-kernel socsim driver, I
> > suppose it could ... not that I'm advocating this, I'm saying if
> > you want to minimise hypervisor code for attack surface reduction,
> > this would be the way to do it because this solution requires no
> > in-hypervisor code at all.
> 
> You still need a transport mechanism through the hypervisor to
> communicate with the host - what would you be using in that case
> instead of virtio?

Socsim is net transported; it's sort of the TPM equivalent of NFS or
iSCSI storage for guests.
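
(To make "net transported" concrete, a rough sketch of a socsim client in
C follows. The framing -- a 4-byte big-endian TPM_SEND_COMMAND code of 8, a
locality byte, a length-prefixed command, then a length-prefixed response
and a trailing 4-byte acknowledgement -- and the default command port 2321
are my recollection of the simulator protocol, so treat them as assumptions
to check against the simulator headers; partial reads are ignored for
brevity.)

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static void xwrite(int fd, const void *buf, size_t len)
{
	if (write(fd, buf, len) != (ssize_t)len) { perror("write"); exit(1); }
}

static void xread(int fd, void *buf, size_t len)
{
	/* Sketch only: assumes the whole read arrives in one call. */
	if (read(fd, buf, len) != (ssize_t)len) { perror("read"); exit(1); }
}

int main(void)
{
	/* TPM2_GetRandom(8 bytes): tag 0x8001, size 12, cc 0x0000017B. */
	const uint8_t cmd[] = { 0x80, 0x01, 0x00, 0x00, 0x00, 0x0c,
				0x00, 0x00, 0x01, 0x7b, 0x00, 0x08 };
	uint32_t send_cmd = htonl(8);	/* TPM_SEND_COMMAND (assumed value) */
	uint8_t locality = 0;
	uint32_t len = htonl(sizeof(cmd)), rsp_len, ack;
	uint8_t rsp[4096];
	struct sockaddr_in sa = { 0 };
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	sa.sin_family = AF_INET;
	sa.sin_port = htons(2321);	/* assumed default command port */
	inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
	if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("connect");
		return 1;
	}

	xwrite(fd, &send_cmd, 4);
	xwrite(fd, &locality, 1);
	xwrite(fd, &len, 4);
	xwrite(fd, cmd, sizeof(cmd));

	xread(fd, &rsp_len, 4);
	rsp_len = ntohl(rsp_len);
	if (rsp_len > sizeof(rsp)) {
		fprintf(stderr, "oversized response\n");
		return 1;
	}
	xread(fd, rsp, rsp_len);
	xread(fd, &ack, 4);	/* trailing acknowledgement */

	printf("received %u response bytes\n", rsp_len);
	close(fd);
	return 0;
}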

James



* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 22:51                                 ` James Bottomley
@ 2019-02-25 23:02                                   ` Matthew Garrett
  2019-02-25 23:09                                     ` James Bottomley
  0 siblings, 1 reply; 57+ messages in thread
From: Matthew Garrett @ 2019-02-25 23:02 UTC (permalink / raw)
  To: James Bottomley
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, Feb 25, 2019 at 2:51 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Mon, 2019-02-25 at 14:43 -0800, Matthew Garrett wrote:
> > You still need a transport mechanism through the hypervisor to
> > communicate with the host - what would you be using in that case
> > instead of virtio?
>
> Socsim is net transported; it's sort of the TPM equivalent of NFS or
> iSCSI storage for guests.

Oh, so it relies on the guest being able to reach the host via
network? Hmm. That's a different security tradeoff. It presumably also
prevents any in-kernel use before networking is up?


* Re: [PATCH] tpm: Add driver for TPM over virtio
  2019-02-25 23:02                                   ` Matthew Garrett
@ 2019-02-25 23:09                                     ` James Bottomley
  0 siblings, 0 replies; 57+ messages in thread
From: James Bottomley @ 2019-02-25 23:09 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: David Tolnay, Peter Huewe, Jarkko Sakkinen, Jason Gunthorpe,
	linux-integrity, Michael S. Tsirkin, Jason Wang, virtualization,
	dgreid, apronin

On Mon, 2019-02-25 at 15:02 -0800, Matthew Garrett wrote:
> On Mon, Feb 25, 2019 at 2:51 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > On Mon, 2019-02-25 at 14:43 -0800, Matthew Garrett wrote:
> > > You still need a transport mechanism through the hypervisor to
> > > communicate with the host - what would you be using in that case
> > > instead of virtio?
> > 
> > Socsim is net transported; it's sort of the TPM equivalent of NFS
> > or iSCSI storage for guests.
> 
> Oh, so it relies on the guest being able to reach the host via
> network? Hmm. That's a different security tradeoff. It presumably
> also prevents any in-kernel use before networking is up?

Exactly, that's why I'm not recommending it.  However, the initrdless
early net nastiness path has already been trodden by the NFS/iSCSI
root people ...

James


