All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-02 15:23 ` Tiwei Bie
  0 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-02 15:23 UTC (permalink / raw)
  To: mst, jasowang, alex.williamson, ddutile, alexander.h.duyck
  Cc: virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	tiwei.bie

This patch introduces a mdev (mediated device) based hardware
vhost backend. This backend is an abstraction of the various
hardware vhost accelerators (potentially any device that uses
virtio ring can be used as a vhost accelerator). Some generic
mdev parent ops are provided for accelerator drivers to support
generating mdev instances.

What's this
===========

The idea is that we can setup a virtio ring compatible device
with the messages available at the vhost-backend. Originally,
these messages are used to implement a software vhost backend,
but now we will use these messages to setup a virtio ring
compatible hardware device. Then the hardware device will be
able to work with the guest virtio driver in the VM just like
what the software backend does. That is to say, we can implement
a hardware based vhost backend in QEMU, and any virtio ring
compatible devices potentially can be used with this backend.
(We also call it vDPA -- vhost Data Path Acceleration).

One problem is that, different virtio ring compatible devices
may have different device interfaces. That is to say, we will
need different drivers in QEMU. It could be troublesome. And
that's what this patch trying to fix. The idea behind this
patch is very simple: mdev is a standard way to emulate device
in kernel. So we defined a standard device based on mdev, which
is able to accept vhost messages. When the mdev emulation code
(i.e. the generic mdev parent ops provided by this patch) gets
vhost messages, it will parse and deliver them to accelerator
drivers. Drivers can use these messages to setup accelerators.

That is to say, the generic mdev parent ops (e.g. read()/write()/
ioctl()/...) will be provided for accelerator drivers to register
accelerators as mdev parent devices. And each accelerator device
will support generating standard mdev instance(s).

With this standard device interface, we will be able to just
develop one userspace driver to implement the hardware based
vhost backend in QEMU.

Difference between vDPA and PCI passthru
========================================

The key difference between vDPA and PCI passthru is that, in
vDPA only the data path of the device (e.g. DMA ring, notify
region and queue interrupt) is pass-throughed to the VM, the
device control path (e.g. PCI configuration space and MMIO
regions) is still defined and emulated by QEMU.

The benefits of keeping virtio device emulation in QEMU compared
with virtio device PCI passthru include (but not limit to):

- consistent device interface for guest OS in the VM;
- max flexibility on the hardware design, especially the
  accelerator for each vhost backend doesn't have to be a
  full PCI device;
- leveraging the existing virtio live-migration framework;

The interface of this mdev based device
=======================================

1. BAR0

The MMIO region described by BAR0 is the main control
interface. Messages will be written to or read from
this region.

The message type is determined by the `request` field
in message header. The message size is encoded in the
message header too. The message format looks like this:

struct vhost_vfio_op {
	__u64 request;
	__u32 flags;
	/* Flag values: */
#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
	__u32 size;
	union {
		__u64 u64;
		struct vhost_vring_state state;
		struct vhost_vring_addr addr;
		struct vhost_memory memory;
	} payload;
};

The existing vhost-kernel ioctl cmds are reused as
the message requests in above structure.

Each message will be written to or read from this
region at offset 0:

int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
{
	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
	struct vhost_vfio *vfio = dev->opaque;
	int ret;

	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
	if (ret != count)
		return -1;

	return 0;
}

int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
{
	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
	struct vhost_vfio *vfio = dev->opaque;
	uint64_t request = op->request;
	int ret;

	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
	if (ret != count || request != op->request)
		return -1;

	return 0;
}

It's quite straightforward to set things to the device.
Just need to write the message to device directly:

int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
{
	struct vhost_vfio_op op;

	op.request = VHOST_SET_FEATURES;
	op.flags = 0;
	op.size = sizeof(features);
	op.payload.u64 = features;

	return vhost_vfio_write(dev, &op);
}

To get things from the device, two steps are needed.
Take VHOST_GET_FEATURE as an example:

int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
{
	struct vhost_vfio_op op;
	int ret;

	op.request = VHOST_GET_FEATURES;
	op.flags = VHOST_VFIO_NEED_REPLY;
	op.size = 0;

	/* Just need to write the header */
	ret = vhost_vfio_write(dev, &op);
	if (ret != 0)
		goto out;

	/* `op` wasn't changed during write */
	op.flags = 0;
	op.size = sizeof(*features);

	ret = vhost_vfio_read(dev, &op);
	if (ret != 0)
		goto out;

	*features = op.payload.u64;
out:
	return ret;
}

2. BAR1 (mmap-able)

The MMIO region described by BAR1 will be used to notify the
device.

Each queue will has a page for notification, and it can be
mapped to VM (if hardware also supports), and the virtio
driver in the VM will be able to notify the device directly.

The MMIO region described by BAR1 is also write-able. If the
accelerator's notification register(s) cannot be mapped to the
VM, write() can also be used to notify the device. Something
like this:

void notify_relay(void *opaque)
{
	......
	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */

	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
			vfio->bar1_offset + offset);
	......
}

Other BARs are reserved.

3. VFIO interrupt ioctl API

VFIO interrupt ioctl API is used to setup device interrupts.
IRQ-bypass will also be supported.

Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.

The API for drivers to provide mdev instances
=============================================

The read()/write()/ioctl()/mmap()/open()/release() mdev
parent ops have been provided for accelerators' drivers
to provide mdev instances.

ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
		  size_t count, loff_t *ppos);
ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
		   size_t count, loff_t *ppos);
long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
int vdpa_open(struct mdev_device *mdev);
void vdpa_close(struct mdev_device *mdev);

Each accelerator driver just needs to implement its own
create()/remove() ops, and provide a vdpa device ops
which will be called by the generic mdev emulation code.

Currently, the vdpa device ops are defined as:

typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);

struct vdpa_device_ops {
	vdpa_start_device_t		start;
	vdpa_stop_device_t		stop;
	vdpa_dma_map_t			dma_map;
	vdpa_dma_unmap_t		dma_unmap;
	vdpa_set_eventfd_t		set_eventfd;
	vdpa_supported_features_t	supported_features;
	vdpa_notify_device_t		notify;
	vdpa_get_notify_addr_t		get_notify_addr;
};

struct vdpa_dev {
	struct mdev_device *mdev;
	struct mutex ops_lock;
	u8 vconfig[VDPA_CONFIG_SIZE];
	int nr_vring;
	u64 features;
	u64 state;
	struct vhost_memory *mem_table;
	bool pending_reply;
	struct vhost_vfio_op pending;
	const struct vdpa_device_ops *ops;
	void *private;
	int max_vrings;
	struct vdpa_vring_info vring_info[0];
};

struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
			    int max_vrings);
void vdpa_free(struct vdpa_dev *vdpa);

A simple example
================

# Query the number of available mdev instances
$ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances

# Create a mdev instance
$ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create

# Launch QEMU with a virtio-net device
$ qemu \
	...... \
	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
	-device virtio-net-pci,netdev=$ID

-------- END --------

Most of above words will be refined and moved to a doc in
the formal patch. In this RFC, all introductions and code
are gathered in this patch, the idea is to make it easier
to find all the relevant information. Anyone who wants to
comment could use inline comment and just keep the relevant
parts. Sorry for the big RFC patch..

This patch is just a RFC for now, and something is still
missing or needs to be refined. But it's never too early
to hear the thoughts from the community. So any comments
would be appreciated! Thanks! :-)

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/vhost/Makefile     |   3 +
 drivers/vhost/vdpa.c       | 805 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/vdpa_mdev.h  |  76 +++++
 include/uapi/linux/vhost.h |  26 ++
 4 files changed, 910 insertions(+)
 create mode 100644 drivers/vhost/vdpa.c
 create mode 100644 include/linux/vdpa_mdev.h

diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index 6c6df24f770c..7d185e083140 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -11,3 +11,6 @@ vhost_vsock-y := vsock.o
 obj-$(CONFIG_VHOST_RING) += vringh.o
 
 obj-$(CONFIG_VHOST)	+= vhost.o
+
+obj-m += vhost_vdpa.o  # FIXME: add an option
+vhost_vdpa-y := vdpa.o
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
new file mode 100644
index 000000000000..aa19c266ea19
--- /dev/null
+++ b/drivers/vhost/vdpa.c
@@ -0,0 +1,805 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Intel Corporation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/vfio.h>
+#include <linux/vhost.h>
+#include <linux/mdev.h>
+#include <linux/vdpa_mdev.h>
+
+#define VDPA_BAR0_SIZE		0x1000000 // TBD
+
+#define VDPA_VFIO_PCI_OFFSET_SHIFT	40
+#define VDPA_VFIO_PCI_OFFSET_MASK \
+		((1ULL << VDPA_VFIO_PCI_OFFSET_SHIFT) - 1)
+#define VDPA_VFIO_PCI_OFFSET_TO_INDEX(offset) \
+		((offset) >> VDPA_VFIO_PCI_OFFSET_SHIFT)
+#define VDPA_VFIO_PCI_INDEX_TO_OFFSET(index) \
+		((u64)(index) << VDPA_VFIO_PCI_OFFSET_SHIFT)
+#define VDPA_VFIO_PCI_BAR_OFFSET(offset) \
+		((offset) & VDPA_VFIO_PCI_OFFSET_MASK)
+
+#define STORE_LE16(addr, val)	(*(u16 *)(addr) = cpu_to_le16(val))
+#define STORE_LE32(addr, val)	(*(u32 *)(addr) = cpu_to_le32(val))
+
+static void vdpa_create_config_space(struct vdpa_dev *vdpa)
+{
+	/* PCI device ID / vendor ID */
+	STORE_LE32(&vdpa->vconfig[0x0], 0xffffffff); // FIXME TBD
+
+	/* Programming interface class */
+	vdpa->vconfig[0x9] = 0x00;
+
+	/* Sub class */
+	vdpa->vconfig[0xa] = 0x00;
+
+	/* Base class */
+	vdpa->vconfig[0xb] = 0x02;
+
+	// FIXME TBD
+}
+
+struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
+			    int max_vrings)
+{
+	struct vdpa_dev *vdpa;
+	size_t size;
+
+	size = sizeof(struct vdpa_dev) + max_vrings *
+			sizeof(struct vdpa_vring_info);
+
+	vdpa = kzalloc(size, GFP_KERNEL);
+	if (vdpa == NULL)
+		return NULL;
+
+	mutex_init(&vdpa->ops_lock);
+
+	vdpa->mdev = mdev;
+	vdpa->private = private;
+	vdpa->max_vrings = max_vrings;
+
+	vdpa_create_config_space(vdpa);
+
+	return vdpa;
+}
+EXPORT_SYMBOL(vdpa_alloc);
+
+void vdpa_free(struct vdpa_dev *vdpa)
+{
+	struct mdev_device *mdev;
+
+	mdev = vdpa->mdev;
+
+	vdpa->ops->stop(vdpa);
+	vdpa->ops->dma_unmap(vdpa);
+
+	mdev_set_drvdata(mdev, NULL);
+
+	mutex_destroy(&vdpa->ops_lock);
+
+	kfree(vdpa->mem_table);
+	kfree(vdpa);
+}
+EXPORT_SYMBOL(vdpa_free);
+
+static ssize_t vdpa_handle_pcicfg_read(struct mdev_device *mdev,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	loff_t pos = *ppos;
+	loff_t offset;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+
+	if (count + offset > VDPA_CONFIG_SIZE)
+		return -EINVAL;
+
+	if (copy_to_user(buf, (vdpa->vconfig + offset), count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t vdpa_handle_bar0_read(struct mdev_device *mdev,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	struct vhost_vfio_op *op = NULL;
+	loff_t pos = *ppos;
+	loff_t offset;
+	int ret;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+	if (offset != 0) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!vdpa->pending_reply) {
+		ret = 0;
+		goto out;
+	}
+
+	vdpa->pending_reply = false;
+
+	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
+		     GFP_KERNEL);
+	if (op == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	op->request = vdpa->pending.request;
+
+	switch (op->request) {
+	case VHOST_GET_VRING_BASE:
+		op->payload.state = vdpa->pending.payload.state;
+		op->size = sizeof(op->payload.state);
+		break;
+	case VHOST_GET_FEATURES:
+		op->payload.u64 = vdpa->pending.payload.u64;
+		op->size = sizeof(op->payload.u64);
+		break;
+	default:
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (copy_to_user(buf, op, count)) {
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	ret = count;
+
+out_free:
+	kfree(op);
+out:
+	return ret;
+}
+
+ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
+		  size_t count, loff_t *ppos)
+{
+	int done = 0;
+	unsigned int index;
+	loff_t pos = *ppos;
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
+
+	switch (index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		done = vdpa_handle_pcicfg_read(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		done = vdpa_handle_bar0_read(mdev, buf, count, ppos);
+		break;
+	}
+
+	if (done > 0)
+		*ppos += done;
+
+	mutex_unlock(&vdpa->ops_lock);
+
+	return done;
+}
+EXPORT_SYMBOL(vdpa_read);
+
+static ssize_t vdpa_handle_pcicfg_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	return count;
+}
+
+static int vhost_set_mem_table(struct mdev_device *mdev,
+		struct vhost_memory *mem)
+{
+	struct vdpa_dev *vdpa;
+	struct vhost_memory *mem_table;
+	size_t size;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	// FIXME fix this
+	if (vdpa->state != VHOST_DEVICE_S_STOPPED)
+		return -EBUSY;
+
+	size = sizeof(*mem) + mem->nregions * sizeof(*mem->regions);
+
+	mem_table = kzalloc(size, GFP_KERNEL);
+	if (mem_table == NULL)
+		return -ENOMEM;
+
+	memcpy(mem_table, mem, size);
+
+	kfree(vdpa->mem_table);
+
+	vdpa->mem_table = mem_table;
+
+	vdpa->ops->dma_unmap(vdpa);
+	vdpa->ops->dma_map(vdpa);
+
+	return 0;
+}
+
+static int vhost_set_vring_addr(struct mdev_device *mdev,
+		struct vhost_vring_addr *addr)
+{
+	struct vdpa_dev *vdpa;
+	int qid = addr->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	/* FIXME to be fixed */
+	if (qid >= vdpa->nr_vring)
+		vdpa->nr_vring = qid + 1;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->desc_user_addr = addr->desc_user_addr;
+	vring->used_user_addr = addr->used_user_addr;
+	vring->avail_user_addr = addr->avail_user_addr;
+	vring->log_guest_addr = addr->log_guest_addr;
+
+	return 0;
+}
+
+static int vhost_set_vring_num(struct mdev_device *mdev,
+		struct vhost_vring_state *num)
+{
+	struct vdpa_dev *vdpa;
+	int qid = num->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->size = num->num;
+
+	return 0;
+}
+
+static int vhost_set_vring_base(struct mdev_device *mdev,
+		struct vhost_vring_state *base)
+{
+	struct vdpa_dev *vdpa;
+	int qid = base->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->base = base->num;
+
+	return 0;
+}
+
+static int vhost_get_vring_base(struct mdev_device *mdev,
+		struct vhost_vring_state *base)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->pending_reply = true;
+	vdpa->pending.request = VHOST_GET_VRING_BASE;
+	vdpa->pending.payload.state.index = base->index;
+
+	// FIXME to be implemented
+
+	return 0;
+}
+
+static int vhost_set_features(struct mdev_device *mdev, u64 *features)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->features = *features;
+
+	return 0;
+}
+
+static int vhost_get_features(struct mdev_device *mdev, u64 *features)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->pending_reply = true;
+	vdpa->pending.request = VHOST_GET_FEATURES;
+	vdpa->pending.payload.u64 =
+		vdpa->ops->supported_features(vdpa);
+
+	return 0;
+}
+
+static int vhost_set_owner(struct mdev_device *mdev)
+{
+	return 0;
+}
+
+static int vhost_reset_owner(struct mdev_device *mdev)
+{
+	return 0;
+}
+
+static int vhost_set_state(struct mdev_device *mdev, u64 *state)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (*state >= VHOST_DEVICE_S_MAX)
+		return -EINVAL;
+
+	if (vdpa->state == *state)
+		return 0;
+
+	vdpa->state = *state;
+
+	switch (vdpa->state) {
+	case VHOST_DEVICE_S_RUNNING:
+		vdpa->ops->start(vdpa);
+		break;
+	case VHOST_DEVICE_S_STOPPED:
+		vdpa->ops->stop(vdpa);
+		break;
+	}
+
+	return 0;
+}
+
+static ssize_t vdpa_handle_bar0_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vhost_vfio_op *op = NULL;
+	loff_t pos = *ppos;
+	loff_t offset;
+	int ret;
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+	if (offset != 0) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (count < VHOST_VFIO_OP_HDR_SIZE) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
+		     GFP_KERNEL);
+	if (op == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (copy_from_user(op, buf, VHOST_VFIO_OP_HDR_SIZE)) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (op->size > VHOST_VFIO_OP_PAYLOAD_MAX_SIZE ||
+	    op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (copy_from_user(&op->payload, buf + VHOST_VFIO_OP_HDR_SIZE,
+			   op->size)) {
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	switch (op->request) {
+	case VHOST_SET_LOG_BASE:
+		break;
+	case VHOST_SET_MEM_TABLE:
+		vhost_set_mem_table(mdev, &op->payload.memory);
+		break;
+	case VHOST_SET_VRING_ADDR:
+		vhost_set_vring_addr(mdev, &op->payload.addr);
+		break;
+	case VHOST_SET_VRING_NUM:
+		vhost_set_vring_num(mdev, &op->payload.state);
+		break;
+	case VHOST_SET_VRING_BASE:
+		vhost_set_vring_base(mdev, &op->payload.state);
+		break;
+	case VHOST_GET_VRING_BASE:
+		vhost_get_vring_base(mdev, &op->payload.state);
+		break;
+	case VHOST_SET_FEATURES:
+		vhost_set_features(mdev, &op->payload.u64);
+		break;
+	case VHOST_GET_FEATURES:
+		vhost_get_features(mdev, &op->payload.u64);
+		break;
+	case VHOST_SET_OWNER:
+		vhost_set_owner(mdev);
+		break;
+	case VHOST_RESET_OWNER:
+		vhost_reset_owner(mdev);
+		break;
+	case VHOST_DEVICE_SET_STATE:
+		vhost_set_state(mdev, &op->payload.u64);
+		break;
+	default:
+		break;
+	}
+
+	ret = count;
+
+out_free:
+	kfree(op);
+out:
+	return ret;
+}
+
+static ssize_t vdpa_handle_bar1_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	int qid;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (count < sizeof(qid))
+		return -EINVAL;
+
+	if (copy_from_user(&qid, buf, sizeof(qid)))
+		return -EINVAL;
+
+	vdpa->ops->notify(vdpa, qid);
+
+	return count;
+}
+
+ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
+		   size_t count, loff_t *ppos)
+{
+	int done = 0;
+	unsigned int index;
+	loff_t pos = *ppos;
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
+
+	switch (index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		done = vdpa_handle_pcicfg_write(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		done = vdpa_handle_bar0_write(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR1_REGION_INDEX:
+		done = vdpa_handle_bar1_write(mdev, buf, count, ppos);
+		break;
+	}
+
+	if (done > 0)
+		*ppos += done;
+
+	mutex_unlock(&vdpa->ops_lock);
+
+	return done;
+}
+EXPORT_SYMBOL(vdpa_write);
+
+static int vdpa_get_region_info(struct mdev_device *mdev,
+				struct vfio_region_info *region_info,
+				u16 *cap_type_id, void **cap_type)
+{
+	struct vdpa_dev *vdpa;
+	u32 bar_index;
+	u64 size = 0;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	bar_index = region_info->index;
+	if (bar_index >= VFIO_PCI_NUM_REGIONS)
+		return -EINVAL;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	switch (bar_index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		size = VDPA_CONFIG_SIZE;
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		size = VDPA_BAR0_SIZE;
+		break;
+	case VFIO_PCI_BAR1_REGION_INDEX:
+		size = (u64)vdpa->max_vrings << PAGE_SHIFT;
+		break;
+	default:
+		size = 0;
+		break;
+	}
+
+	// FIXME: mark BAR1 as mmap-able (VFIO_REGION_INFO_FLAG_MMAP)
+	region_info->size = size;
+	region_info->offset = VDPA_VFIO_PCI_INDEX_TO_OFFSET(bar_index);
+	region_info->flags = VFIO_REGION_INFO_FLAG_READ |
+		VFIO_REGION_INFO_FLAG_WRITE;
+	mutex_unlock(&vdpa->ops_lock);
+	return 0;
+}
+
+static int vdpa_reset(struct mdev_device *mdev)
+{
+	struct vdpa_dev *vdpa;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int vdpa_get_device_info(struct mdev_device *mdev,
+				struct vfio_device_info *dev_info)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	dev_info->flags = VFIO_DEVICE_FLAGS_PCI;
+	dev_info->num_regions = VFIO_PCI_NUM_REGIONS;
+	dev_info->num_irqs = vdpa->max_vrings;
+
+	return 0;
+}
+
+static int vdpa_get_irq_info(struct mdev_device *mdev,
+			     struct vfio_irq_info *info)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (info->index != VFIO_PCI_MSIX_IRQ_INDEX)
+		return -ENOTSUPP;
+
+	info->flags = VFIO_IRQ_INFO_EVENTFD;
+	info->count = vdpa->max_vrings;
+
+	return 0;
+}
+
+static int vdpa_set_irqs(struct mdev_device *mdev, uint32_t flags,
+			 unsigned int index, unsigned int start,
+			 unsigned int count, void *data)
+{
+	struct vdpa_dev *vdpa;
+	int *fd = data, i;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	if (index != VFIO_PCI_MSIX_IRQ_INDEX)
+		return -ENOTSUPP;
+
+	for (i = 0; i < count; i++)
+		vdpa->ops->set_eventfd(vdpa, start + i,
+			(flags & VFIO_IRQ_SET_DATA_EVENTFD) ? fd[i] : -1);
+
+	return 0;
+}
+
+long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
+{
+	int ret = 0;
+	unsigned long minsz;
+	struct vdpa_dev *vdpa;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	switch (cmd) {
+	case VFIO_DEVICE_GET_INFO:
+	{
+		struct vfio_device_info info;
+
+		minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz)
+			return -EINVAL;
+
+		ret = vdpa_get_device_info(mdev, &info);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_GET_REGION_INFO:
+	{
+		struct vfio_region_info info;
+		u16 cap_type_id = 0;
+		void *cap_type = NULL;
+
+		minsz = offsetofend(struct vfio_region_info, offset);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz)
+			return -EINVAL;
+
+		ret = vdpa_get_region_info(mdev, &info, &cap_type_id,
+					   &cap_type);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_GET_IRQ_INFO:
+	{
+		struct vfio_irq_info info;
+
+		minsz = offsetofend(struct vfio_irq_info, count);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz || info.index >= vdpa->max_vrings)
+			return -EINVAL;
+
+		ret = vdpa_get_irq_info(mdev, &info);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_SET_IRQS:
+	{
+		struct vfio_irq_set hdr;
+		size_t data_size = 0;
+		u8 *data = NULL;
+
+		minsz = offsetofend(struct vfio_irq_set, count);
+
+		if (copy_from_user(&hdr, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		ret = vfio_set_irqs_validate_and_prepare(&hdr, vdpa->max_vrings,
+							 VFIO_PCI_NUM_IRQS,
+							 &data_size);
+		if (ret)
+			return ret;
+
+		if (data_size) {
+			data = memdup_user((void __user *)(arg + minsz),
+					   data_size);
+			if (IS_ERR(data))
+				return PTR_ERR(data);
+		}
+
+		ret = vdpa_set_irqs(mdev, hdr.flags, hdr.index, hdr.start,
+				hdr.count, data);
+
+		kfree(data);
+		return ret;
+	}
+	case VFIO_DEVICE_RESET:
+		return vdpa_reset(mdev);
+	}
+	return -ENOTTY;
+}
+EXPORT_SYMBOL(vdpa_ioctl);
+
+int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
+{
+	// FIXME: to be implemented
+
+	return 0;
+}
+EXPORT_SYMBOL(vdpa_mmap);
+
+int vdpa_open(struct mdev_device *mdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL(vdpa_open);
+
+void vdpa_close(struct mdev_device *mdev)
+{
+}
+EXPORT_SYMBOL(vdpa_close);
+
+MODULE_VERSION("0.0.0");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hardware virtio accelerator abstraction");
diff --git a/include/linux/vdpa_mdev.h b/include/linux/vdpa_mdev.h
new file mode 100644
index 000000000000..8414e86ba4b8
--- /dev/null
+++ b/include/linux/vdpa_mdev.h
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Intel Corporation.
+ */
+
+#ifndef VDPA_MDEV_H
+#define VDPA_MDEV_H
+
+#define VDPA_CONFIG_SIZE 0xff
+
+struct mdev_device;
+struct vdpa_dev;
+
+/*
+ * XXX: Any comments about the vDPA API design for drivers
+ *      would be appreciated!
+ */
+
+typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
+typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
+typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
+typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
+
+struct vdpa_device_ops {
+	vdpa_start_device_t		start;
+	vdpa_stop_device_t		stop;
+	vdpa_dma_map_t			dma_map;
+	vdpa_dma_unmap_t		dma_unmap;
+	vdpa_set_eventfd_t		set_eventfd;
+	vdpa_supported_features_t	supported_features;
+	vdpa_notify_device_t		notify;
+	vdpa_get_notify_addr_t		get_notify_addr;
+};
+
+struct vdpa_vring_info {
+	u64 desc_user_addr;
+	u64 used_user_addr;
+	u64 avail_user_addr;
+	u64 log_guest_addr;
+	u16 size;
+	u16 base;
+};
+
+struct vdpa_dev {
+	struct mdev_device *mdev;
+	struct mutex ops_lock;
+	u8 vconfig[VDPA_CONFIG_SIZE];
+	int nr_vring;
+	u64 features;
+	u64 state;
+	struct vhost_memory *mem_table;
+	bool pending_reply;
+	struct vhost_vfio_op pending;
+	const struct vdpa_device_ops *ops;
+	void *private;
+	int max_vrings;
+	struct vdpa_vring_info vring_info[0];
+};
+
+struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
+			    int max_vrings);
+void vdpa_free(struct vdpa_dev *vdpa);
+ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
+		  size_t count, loff_t *ppos);
+ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
+		   size_t count, loff_t *ppos);
+long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
+int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
+int vdpa_open(struct mdev_device *mdev);
+void vdpa_close(struct mdev_device *mdev);
+
+#endif /* VDPA_MDEV_H */
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index c51f8e5cc608..92a1ca0b5fe1 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -207,4 +207,30 @@ struct vhost_scsi_target {
 #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
 #define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
 
+/* VHOST_DEVICE specific defines */
+
+#define VHOST_DEVICE_SET_STATE _IOW(VHOST_VIRTIO, 0x70, __u64)
+
+#define VHOST_DEVICE_S_STOPPED 0
+#define VHOST_DEVICE_S_RUNNING 1
+#define VHOST_DEVICE_S_MAX     2
+
+struct vhost_vfio_op {
+	__u64 request;
+	__u32 flags;
+	/* Flag values: */
+#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
+	__u32 size;
+	union {
+		__u64 u64;
+		struct vhost_vring_state state;
+		struct vhost_vring_addr addr;
+		struct vhost_memory memory;
+	} payload;
+};
+
+#define VHOST_VFIO_OP_HDR_SIZE \
+		((unsigned long)&((struct vhost_vfio_op *)NULL)->payload)
+#define VHOST_VFIO_OP_PAYLOAD_MAX_SIZE 1024 /* FIXME TBD */
+
 #endif
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [virtio-dev] [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-02 15:23 ` Tiwei Bie
  0 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-02 15:23 UTC (permalink / raw)
  To: mst, jasowang, alex.williamson, ddutile, alexander.h.duyck
  Cc: virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	tiwei.bie

This patch introduces a mdev (mediated device) based hardware
vhost backend. This backend is an abstraction of the various
hardware vhost accelerators (potentially any device that uses
virtio ring can be used as a vhost accelerator). Some generic
mdev parent ops are provided for accelerator drivers to support
generating mdev instances.

What's this
===========

The idea is that we can setup a virtio ring compatible device
with the messages available at the vhost-backend. Originally,
these messages are used to implement a software vhost backend,
but now we will use these messages to setup a virtio ring
compatible hardware device. Then the hardware device will be
able to work with the guest virtio driver in the VM just like
what the software backend does. That is to say, we can implement
a hardware based vhost backend in QEMU, and any virtio ring
compatible devices potentially can be used with this backend.
(We also call it vDPA -- vhost Data Path Acceleration).

One problem is that, different virtio ring compatible devices
may have different device interfaces. That is to say, we will
need different drivers in QEMU. It could be troublesome. And
that's what this patch trying to fix. The idea behind this
patch is very simple: mdev is a standard way to emulate device
in kernel. So we defined a standard device based on mdev, which
is able to accept vhost messages. When the mdev emulation code
(i.e. the generic mdev parent ops provided by this patch) gets
vhost messages, it will parse and deliver them to accelerator
drivers. Drivers can use these messages to setup accelerators.

That is to say, the generic mdev parent ops (e.g. read()/write()/
ioctl()/...) will be provided for accelerator drivers to register
accelerators as mdev parent devices. And each accelerator device
will support generating standard mdev instance(s).

With this standard device interface, we will be able to just
develop one userspace driver to implement the hardware based
vhost backend in QEMU.

Difference between vDPA and PCI passthru
========================================

The key difference between vDPA and PCI passthru is that, in
vDPA only the data path of the device (e.g. DMA ring, notify
region and queue interrupt) is pass-throughed to the VM, the
device control path (e.g. PCI configuration space and MMIO
regions) is still defined and emulated by QEMU.

The benefits of keeping virtio device emulation in QEMU compared
with virtio device PCI passthru include (but not limit to):

- consistent device interface for guest OS in the VM;
- max flexibility on the hardware design, especially the
  accelerator for each vhost backend doesn't have to be a
  full PCI device;
- leveraging the existing virtio live-migration framework;

The interface of this mdev based device
=======================================

1. BAR0

The MMIO region described by BAR0 is the main control
interface. Messages will be written to or read from
this region.

The message type is determined by the `request` field
in message header. The message size is encoded in the
message header too. The message format looks like this:

struct vhost_vfio_op {
	__u64 request;
	__u32 flags;
	/* Flag values: */
#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
	__u32 size;
	union {
		__u64 u64;
		struct vhost_vring_state state;
		struct vhost_vring_addr addr;
		struct vhost_memory memory;
	} payload;
};

The existing vhost-kernel ioctl cmds are reused as
the message requests in above structure.

Each message will be written to or read from this
region at offset 0:

int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
{
	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
	struct vhost_vfio *vfio = dev->opaque;
	int ret;

	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
	if (ret != count)
		return -1;

	return 0;
}

int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
{
	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
	struct vhost_vfio *vfio = dev->opaque;
	uint64_t request = op->request;
	int ret;

	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
	if (ret != count || request != op->request)
		return -1;

	return 0;
}

It's quite straightforward to set things to the device.
Just need to write the message to device directly:

int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
{
	struct vhost_vfio_op op;

	op.request = VHOST_SET_FEATURES;
	op.flags = 0;
	op.size = sizeof(features);
	op.payload.u64 = features;

	return vhost_vfio_write(dev, &op);
}

To get things from the device, two steps are needed.
Take VHOST_GET_FEATURE as an example:

int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
{
	struct vhost_vfio_op op;
	int ret;

	op.request = VHOST_GET_FEATURES;
	op.flags = VHOST_VFIO_NEED_REPLY;
	op.size = 0;

	/* Just need to write the header */
	ret = vhost_vfio_write(dev, &op);
	if (ret != 0)
		goto out;

	/* `op` wasn't changed during write */
	op.flags = 0;
	op.size = sizeof(*features);

	ret = vhost_vfio_read(dev, &op);
	if (ret != 0)
		goto out;

	*features = op.payload.u64;
out:
	return ret;
}

2. BAR1 (mmap-able)

The MMIO region described by BAR1 will be used to notify the
device.

Each queue will has a page for notification, and it can be
mapped to VM (if hardware also supports), and the virtio
driver in the VM will be able to notify the device directly.

The MMIO region described by BAR1 is also write-able. If the
accelerator's notification register(s) cannot be mapped to the
VM, write() can also be used to notify the device. Something
like this:

void notify_relay(void *opaque)
{
	......
	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */

	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
			vfio->bar1_offset + offset);
	......
}

Other BARs are reserved.

3. VFIO interrupt ioctl API

VFIO interrupt ioctl API is used to setup device interrupts.
IRQ-bypass will also be supported.

Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.

The API for drivers to provide mdev instances
=============================================

The read()/write()/ioctl()/mmap()/open()/release() mdev
parent ops have been provided for accelerators' drivers
to provide mdev instances.

ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
		  size_t count, loff_t *ppos);
ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
		   size_t count, loff_t *ppos);
long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
int vdpa_open(struct mdev_device *mdev);
void vdpa_close(struct mdev_device *mdev);

Each accelerator driver just needs to implement its own
create()/remove() ops, and provide a vdpa device ops
which will be called by the generic mdev emulation code.

Currently, the vdpa device ops are defined as:

typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);

struct vdpa_device_ops {
	vdpa_start_device_t		start;
	vdpa_stop_device_t		stop;
	vdpa_dma_map_t			dma_map;
	vdpa_dma_unmap_t		dma_unmap;
	vdpa_set_eventfd_t		set_eventfd;
	vdpa_supported_features_t	supported_features;
	vdpa_notify_device_t		notify;
	vdpa_get_notify_addr_t		get_notify_addr;
};

struct vdpa_dev {
	struct mdev_device *mdev;
	struct mutex ops_lock;
	u8 vconfig[VDPA_CONFIG_SIZE];
	int nr_vring;
	u64 features;
	u64 state;
	struct vhost_memory *mem_table;
	bool pending_reply;
	struct vhost_vfio_op pending;
	const struct vdpa_device_ops *ops;
	void *private;
	int max_vrings;
	struct vdpa_vring_info vring_info[0];
};

struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
			    int max_vrings);
void vdpa_free(struct vdpa_dev *vdpa);

A simple example
================

# Query the number of available mdev instances
$ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances

# Create a mdev instance
$ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create

# Launch QEMU with a virtio-net device
$ qemu \
	...... \
	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
	-device virtio-net-pci,netdev=$ID

-------- END --------

Most of above words will be refined and moved to a doc in
the formal patch. In this RFC, all introductions and code
are gathered in this patch, the idea is to make it easier
to find all the relevant information. Anyone who wants to
comment could use inline comment and just keep the relevant
parts. Sorry for the big RFC patch..

This patch is just a RFC for now, and something is still
missing or needs to be refined. But it's never too early
to hear the thoughts from the community. So any comments
would be appreciated! Thanks! :-)

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/vhost/Makefile     |   3 +
 drivers/vhost/vdpa.c       | 805 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/vdpa_mdev.h  |  76 +++++
 include/uapi/linux/vhost.h |  26 ++
 4 files changed, 910 insertions(+)
 create mode 100644 drivers/vhost/vdpa.c
 create mode 100644 include/linux/vdpa_mdev.h

diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index 6c6df24f770c..7d185e083140 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -11,3 +11,6 @@ vhost_vsock-y := vsock.o
 obj-$(CONFIG_VHOST_RING) += vringh.o
 
 obj-$(CONFIG_VHOST)	+= vhost.o
+
+obj-m += vhost_vdpa.o  # FIXME: add an option
+vhost_vdpa-y := vdpa.o
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
new file mode 100644
index 000000000000..aa19c266ea19
--- /dev/null
+++ b/drivers/vhost/vdpa.c
@@ -0,0 +1,805 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Intel Corporation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/vfio.h>
+#include <linux/vhost.h>
+#include <linux/mdev.h>
+#include <linux/vdpa_mdev.h>
+
+#define VDPA_BAR0_SIZE		0x1000000 // TBD
+
+#define VDPA_VFIO_PCI_OFFSET_SHIFT	40
+#define VDPA_VFIO_PCI_OFFSET_MASK \
+		((1ULL << VDPA_VFIO_PCI_OFFSET_SHIFT) - 1)
+#define VDPA_VFIO_PCI_OFFSET_TO_INDEX(offset) \
+		((offset) >> VDPA_VFIO_PCI_OFFSET_SHIFT)
+#define VDPA_VFIO_PCI_INDEX_TO_OFFSET(index) \
+		((u64)(index) << VDPA_VFIO_PCI_OFFSET_SHIFT)
+#define VDPA_VFIO_PCI_BAR_OFFSET(offset) \
+		((offset) & VDPA_VFIO_PCI_OFFSET_MASK)
+
+#define STORE_LE16(addr, val)	(*(u16 *)(addr) = cpu_to_le16(val))
+#define STORE_LE32(addr, val)	(*(u32 *)(addr) = cpu_to_le32(val))
+
+static void vdpa_create_config_space(struct vdpa_dev *vdpa)
+{
+	/* PCI device ID / vendor ID */
+	STORE_LE32(&vdpa->vconfig[0x0], 0xffffffff); // FIXME TBD
+
+	/* Programming interface class */
+	vdpa->vconfig[0x9] = 0x00;
+
+	/* Sub class */
+	vdpa->vconfig[0xa] = 0x00;
+
+	/* Base class */
+	vdpa->vconfig[0xb] = 0x02;
+
+	// FIXME TBD
+}
+
+struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
+			    int max_vrings)
+{
+	struct vdpa_dev *vdpa;
+	size_t size;
+
+	size = sizeof(struct vdpa_dev) + max_vrings *
+			sizeof(struct vdpa_vring_info);
+
+	vdpa = kzalloc(size, GFP_KERNEL);
+	if (vdpa == NULL)
+		return NULL;
+
+	mutex_init(&vdpa->ops_lock);
+
+	vdpa->mdev = mdev;
+	vdpa->private = private;
+	vdpa->max_vrings = max_vrings;
+
+	vdpa_create_config_space(vdpa);
+
+	return vdpa;
+}
+EXPORT_SYMBOL(vdpa_alloc);
+
+void vdpa_free(struct vdpa_dev *vdpa)
+{
+	struct mdev_device *mdev;
+
+	mdev = vdpa->mdev;
+
+	vdpa->ops->stop(vdpa);
+	vdpa->ops->dma_unmap(vdpa);
+
+	mdev_set_drvdata(mdev, NULL);
+
+	mutex_destroy(&vdpa->ops_lock);
+
+	kfree(vdpa->mem_table);
+	kfree(vdpa);
+}
+EXPORT_SYMBOL(vdpa_free);
+
+static ssize_t vdpa_handle_pcicfg_read(struct mdev_device *mdev,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	loff_t pos = *ppos;
+	loff_t offset;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+
+	if (count + offset > VDPA_CONFIG_SIZE)
+		return -EINVAL;
+
+	if (copy_to_user(buf, (vdpa->vconfig + offset), count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t vdpa_handle_bar0_read(struct mdev_device *mdev,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	struct vhost_vfio_op *op = NULL;
+	loff_t pos = *ppos;
+	loff_t offset;
+	int ret;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+	if (offset != 0) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!vdpa->pending_reply) {
+		ret = 0;
+		goto out;
+	}
+
+	vdpa->pending_reply = false;
+
+	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
+		     GFP_KERNEL);
+	if (op == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	op->request = vdpa->pending.request;
+
+	switch (op->request) {
+	case VHOST_GET_VRING_BASE:
+		op->payload.state = vdpa->pending.payload.state;
+		op->size = sizeof(op->payload.state);
+		break;
+	case VHOST_GET_FEATURES:
+		op->payload.u64 = vdpa->pending.payload.u64;
+		op->size = sizeof(op->payload.u64);
+		break;
+	default:
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (copy_to_user(buf, op, count)) {
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	ret = count;
+
+out_free:
+	kfree(op);
+out:
+	return ret;
+}
+
+ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
+		  size_t count, loff_t *ppos)
+{
+	int done = 0;
+	unsigned int index;
+	loff_t pos = *ppos;
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
+
+	switch (index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		done = vdpa_handle_pcicfg_read(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		done = vdpa_handle_bar0_read(mdev, buf, count, ppos);
+		break;
+	}
+
+	if (done > 0)
+		*ppos += done;
+
+	mutex_unlock(&vdpa->ops_lock);
+
+	return done;
+}
+EXPORT_SYMBOL(vdpa_read);
+
+static ssize_t vdpa_handle_pcicfg_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	return count;
+}
+
+static int vhost_set_mem_table(struct mdev_device *mdev,
+		struct vhost_memory *mem)
+{
+	struct vdpa_dev *vdpa;
+	struct vhost_memory *mem_table;
+	size_t size;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	// FIXME fix this
+	if (vdpa->state != VHOST_DEVICE_S_STOPPED)
+		return -EBUSY;
+
+	size = sizeof(*mem) + mem->nregions * sizeof(*mem->regions);
+
+	mem_table = kzalloc(size, GFP_KERNEL);
+	if (mem_table == NULL)
+		return -ENOMEM;
+
+	memcpy(mem_table, mem, size);
+
+	kfree(vdpa->mem_table);
+
+	vdpa->mem_table = mem_table;
+
+	vdpa->ops->dma_unmap(vdpa);
+	vdpa->ops->dma_map(vdpa);
+
+	return 0;
+}
+
+static int vhost_set_vring_addr(struct mdev_device *mdev,
+		struct vhost_vring_addr *addr)
+{
+	struct vdpa_dev *vdpa;
+	int qid = addr->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	/* FIXME to be fixed */
+	if (qid >= vdpa->nr_vring)
+		vdpa->nr_vring = qid + 1;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->desc_user_addr = addr->desc_user_addr;
+	vring->used_user_addr = addr->used_user_addr;
+	vring->avail_user_addr = addr->avail_user_addr;
+	vring->log_guest_addr = addr->log_guest_addr;
+
+	return 0;
+}
+
+static int vhost_set_vring_num(struct mdev_device *mdev,
+		struct vhost_vring_state *num)
+{
+	struct vdpa_dev *vdpa;
+	int qid = num->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->size = num->num;
+
+	return 0;
+}
+
+static int vhost_set_vring_base(struct mdev_device *mdev,
+		struct vhost_vring_state *base)
+{
+	struct vdpa_dev *vdpa;
+	int qid = base->index;
+	struct vdpa_vring_info *vring;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (qid >= vdpa->max_vrings)
+		return -EINVAL;
+
+	vring = &vdpa->vring_info[qid];
+
+	vring->base = base->num;
+
+	return 0;
+}
+
+static int vhost_get_vring_base(struct mdev_device *mdev,
+		struct vhost_vring_state *base)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->pending_reply = true;
+	vdpa->pending.request = VHOST_GET_VRING_BASE;
+	vdpa->pending.payload.state.index = base->index;
+
+	// FIXME to be implemented
+
+	return 0;
+}
+
+static int vhost_set_features(struct mdev_device *mdev, u64 *features)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->features = *features;
+
+	return 0;
+}
+
+static int vhost_get_features(struct mdev_device *mdev, u64 *features)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	vdpa->pending_reply = true;
+	vdpa->pending.request = VHOST_GET_FEATURES;
+	vdpa->pending.payload.u64 =
+		vdpa->ops->supported_features(vdpa);
+
+	return 0;
+}
+
+static int vhost_set_owner(struct mdev_device *mdev)
+{
+	return 0;
+}
+
+static int vhost_reset_owner(struct mdev_device *mdev)
+{
+	return 0;
+}
+
+static int vhost_set_state(struct mdev_device *mdev, u64 *state)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (*state >= VHOST_DEVICE_S_MAX)
+		return -EINVAL;
+
+	if (vdpa->state == *state)
+		return 0;
+
+	vdpa->state = *state;
+
+	switch (vdpa->state) {
+	case VHOST_DEVICE_S_RUNNING:
+		vdpa->ops->start(vdpa);
+		break;
+	case VHOST_DEVICE_S_STOPPED:
+		vdpa->ops->stop(vdpa);
+		break;
+	}
+
+	return 0;
+}
+
+static ssize_t vdpa_handle_bar0_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vhost_vfio_op *op = NULL;
+	loff_t pos = *ppos;
+	loff_t offset;
+	int ret;
+
+	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
+	if (offset != 0) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (count < VHOST_VFIO_OP_HDR_SIZE) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
+		     GFP_KERNEL);
+	if (op == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (copy_from_user(op, buf, VHOST_VFIO_OP_HDR_SIZE)) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (op->size > VHOST_VFIO_OP_PAYLOAD_MAX_SIZE ||
+	    op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	if (copy_from_user(&op->payload, buf + VHOST_VFIO_OP_HDR_SIZE,
+			   op->size)) {
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	switch (op->request) {
+	case VHOST_SET_LOG_BASE:
+		break;
+	case VHOST_SET_MEM_TABLE:
+		vhost_set_mem_table(mdev, &op->payload.memory);
+		break;
+	case VHOST_SET_VRING_ADDR:
+		vhost_set_vring_addr(mdev, &op->payload.addr);
+		break;
+	case VHOST_SET_VRING_NUM:
+		vhost_set_vring_num(mdev, &op->payload.state);
+		break;
+	case VHOST_SET_VRING_BASE:
+		vhost_set_vring_base(mdev, &op->payload.state);
+		break;
+	case VHOST_GET_VRING_BASE:
+		vhost_get_vring_base(mdev, &op->payload.state);
+		break;
+	case VHOST_SET_FEATURES:
+		vhost_set_features(mdev, &op->payload.u64);
+		break;
+	case VHOST_GET_FEATURES:
+		vhost_get_features(mdev, &op->payload.u64);
+		break;
+	case VHOST_SET_OWNER:
+		vhost_set_owner(mdev);
+		break;
+	case VHOST_RESET_OWNER:
+		vhost_reset_owner(mdev);
+		break;
+	case VHOST_DEVICE_SET_STATE:
+		vhost_set_state(mdev, &op->payload.u64);
+		break;
+	default:
+		break;
+	}
+
+	ret = count;
+
+out_free:
+	kfree(op);
+out:
+	return ret;
+}
+
+static ssize_t vdpa_handle_bar1_write(struct mdev_device *mdev,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct vdpa_dev *vdpa;
+	int qid;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (count < sizeof(qid))
+		return -EINVAL;
+
+	if (copy_from_user(&qid, buf, sizeof(qid)))
+		return -EINVAL;
+
+	vdpa->ops->notify(vdpa, qid);
+
+	return count;
+}
+
+ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
+		   size_t count, loff_t *ppos)
+{
+	int done = 0;
+	unsigned int index;
+	loff_t pos = *ppos;
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
+
+	switch (index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		done = vdpa_handle_pcicfg_write(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		done = vdpa_handle_bar0_write(mdev, buf, count, ppos);
+		break;
+	case VFIO_PCI_BAR1_REGION_INDEX:
+		done = vdpa_handle_bar1_write(mdev, buf, count, ppos);
+		break;
+	}
+
+	if (done > 0)
+		*ppos += done;
+
+	mutex_unlock(&vdpa->ops_lock);
+
+	return done;
+}
+EXPORT_SYMBOL(vdpa_write);
+
+static int vdpa_get_region_info(struct mdev_device *mdev,
+				struct vfio_region_info *region_info,
+				u16 *cap_type_id, void **cap_type)
+{
+	struct vdpa_dev *vdpa;
+	u32 bar_index;
+	u64 size = 0;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	bar_index = region_info->index;
+	if (bar_index >= VFIO_PCI_NUM_REGIONS)
+		return -EINVAL;
+
+	mutex_lock(&vdpa->ops_lock);
+
+	switch (bar_index) {
+	case VFIO_PCI_CONFIG_REGION_INDEX:
+		size = VDPA_CONFIG_SIZE;
+		break;
+	case VFIO_PCI_BAR0_REGION_INDEX:
+		size = VDPA_BAR0_SIZE;
+		break;
+	case VFIO_PCI_BAR1_REGION_INDEX:
+		size = (u64)vdpa->max_vrings << PAGE_SHIFT;
+		break;
+	default:
+		size = 0;
+		break;
+	}
+
+	// FIXME: mark BAR1 as mmap-able (VFIO_REGION_INFO_FLAG_MMAP)
+	region_info->size = size;
+	region_info->offset = VDPA_VFIO_PCI_INDEX_TO_OFFSET(bar_index);
+	region_info->flags = VFIO_REGION_INFO_FLAG_READ |
+		VFIO_REGION_INFO_FLAG_WRITE;
+	mutex_unlock(&vdpa->ops_lock);
+	return 0;
+}
+
+static int vdpa_reset(struct mdev_device *mdev)
+{
+	struct vdpa_dev *vdpa;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int vdpa_get_device_info(struct mdev_device *mdev,
+				struct vfio_device_info *dev_info)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	dev_info->flags = VFIO_DEVICE_FLAGS_PCI;
+	dev_info->num_regions = VFIO_PCI_NUM_REGIONS;
+	dev_info->num_irqs = vdpa->max_vrings;
+
+	return 0;
+}
+
+static int vdpa_get_irq_info(struct mdev_device *mdev,
+			     struct vfio_irq_info *info)
+{
+	struct vdpa_dev *vdpa;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	if (info->index != VFIO_PCI_MSIX_IRQ_INDEX)
+		return -ENOTSUPP;
+
+	info->flags = VFIO_IRQ_INFO_EVENTFD;
+	info->count = vdpa->max_vrings;
+
+	return 0;
+}
+
+static int vdpa_set_irqs(struct mdev_device *mdev, uint32_t flags,
+			 unsigned int index, unsigned int start,
+			 unsigned int count, void *data)
+{
+	struct vdpa_dev *vdpa;
+	int *fd = data, i;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -EINVAL;
+
+	if (index != VFIO_PCI_MSIX_IRQ_INDEX)
+		return -ENOTSUPP;
+
+	for (i = 0; i < count; i++)
+		vdpa->ops->set_eventfd(vdpa, start + i,
+			(flags & VFIO_IRQ_SET_DATA_EVENTFD) ? fd[i] : -1);
+
+	return 0;
+}
+
+long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
+{
+	int ret = 0;
+	unsigned long minsz;
+	struct vdpa_dev *vdpa;
+
+	if (!mdev)
+		return -EINVAL;
+
+	vdpa = mdev_get_drvdata(mdev);
+	if (!vdpa)
+		return -ENODEV;
+
+	switch (cmd) {
+	case VFIO_DEVICE_GET_INFO:
+	{
+		struct vfio_device_info info;
+
+		minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz)
+			return -EINVAL;
+
+		ret = vdpa_get_device_info(mdev, &info);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_GET_REGION_INFO:
+	{
+		struct vfio_region_info info;
+		u16 cap_type_id = 0;
+		void *cap_type = NULL;
+
+		minsz = offsetofend(struct vfio_region_info, offset);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz)
+			return -EINVAL;
+
+		ret = vdpa_get_region_info(mdev, &info, &cap_type_id,
+					   &cap_type);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_GET_IRQ_INFO:
+	{
+		struct vfio_irq_info info;
+
+		minsz = offsetofend(struct vfio_irq_info, count);
+
+		if (copy_from_user(&info, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (info.argsz < minsz || info.index >= vdpa->max_vrings)
+			return -EINVAL;
+
+		ret = vdpa_get_irq_info(mdev, &info);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &info, minsz))
+			return -EFAULT;
+
+		return 0;
+	}
+	case VFIO_DEVICE_SET_IRQS:
+	{
+		struct vfio_irq_set hdr;
+		size_t data_size = 0;
+		u8 *data = NULL;
+
+		minsz = offsetofend(struct vfio_irq_set, count);
+
+		if (copy_from_user(&hdr, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		ret = vfio_set_irqs_validate_and_prepare(&hdr, vdpa->max_vrings,
+							 VFIO_PCI_NUM_IRQS,
+							 &data_size);
+		if (ret)
+			return ret;
+
+		if (data_size) {
+			data = memdup_user((void __user *)(arg + minsz),
+					   data_size);
+			if (IS_ERR(data))
+				return PTR_ERR(data);
+		}
+
+		ret = vdpa_set_irqs(mdev, hdr.flags, hdr.index, hdr.start,
+				hdr.count, data);
+
+		kfree(data);
+		return ret;
+	}
+	case VFIO_DEVICE_RESET:
+		return vdpa_reset(mdev);
+	}
+	return -ENOTTY;
+}
+EXPORT_SYMBOL(vdpa_ioctl);
+
+int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
+{
+	// FIXME: to be implemented
+
+	return 0;
+}
+EXPORT_SYMBOL(vdpa_mmap);
+
+int vdpa_open(struct mdev_device *mdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL(vdpa_open);
+
+void vdpa_close(struct mdev_device *mdev)
+{
+}
+EXPORT_SYMBOL(vdpa_close);
+
+MODULE_VERSION("0.0.0");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hardware virtio accelerator abstraction");
diff --git a/include/linux/vdpa_mdev.h b/include/linux/vdpa_mdev.h
new file mode 100644
index 000000000000..8414e86ba4b8
--- /dev/null
+++ b/include/linux/vdpa_mdev.h
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Intel Corporation.
+ */
+
+#ifndef VDPA_MDEV_H
+#define VDPA_MDEV_H
+
+#define VDPA_CONFIG_SIZE 0xff
+
+struct mdev_device;
+struct vdpa_dev;
+
+/*
+ * XXX: Any comments about the vDPA API design for drivers
+ *      would be appreciated!
+ */
+
+typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
+typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
+typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
+typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
+typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
+
+struct vdpa_device_ops {
+	vdpa_start_device_t		start;
+	vdpa_stop_device_t		stop;
+	vdpa_dma_map_t			dma_map;
+	vdpa_dma_unmap_t		dma_unmap;
+	vdpa_set_eventfd_t		set_eventfd;
+	vdpa_supported_features_t	supported_features;
+	vdpa_notify_device_t		notify;
+	vdpa_get_notify_addr_t		get_notify_addr;
+};
+
+struct vdpa_vring_info {
+	u64 desc_user_addr;
+	u64 used_user_addr;
+	u64 avail_user_addr;
+	u64 log_guest_addr;
+	u16 size;
+	u16 base;
+};
+
+struct vdpa_dev {
+	struct mdev_device *mdev;
+	struct mutex ops_lock;
+	u8 vconfig[VDPA_CONFIG_SIZE];
+	int nr_vring;
+	u64 features;
+	u64 state;
+	struct vhost_memory *mem_table;
+	bool pending_reply;
+	struct vhost_vfio_op pending;
+	const struct vdpa_device_ops *ops;
+	void *private;
+	int max_vrings;
+	struct vdpa_vring_info vring_info[0];
+};
+
+struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
+			    int max_vrings);
+void vdpa_free(struct vdpa_dev *vdpa);
+ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
+		  size_t count, loff_t *ppos);
+ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
+		   size_t count, loff_t *ppos);
+long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
+int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
+int vdpa_open(struct mdev_device *mdev);
+void vdpa_close(struct mdev_device *mdev);
+
+#endif /* VDPA_MDEV_H */
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index c51f8e5cc608..92a1ca0b5fe1 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -207,4 +207,30 @@ struct vhost_scsi_target {
 #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
 #define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
 
+/* VHOST_DEVICE specific defines */
+
+#define VHOST_DEVICE_SET_STATE _IOW(VHOST_VIRTIO, 0x70, __u64)
+
+#define VHOST_DEVICE_S_STOPPED 0
+#define VHOST_DEVICE_S_RUNNING 1
+#define VHOST_DEVICE_S_MAX     2
+
+struct vhost_vfio_op {
+	__u64 request;
+	__u32 flags;
+	/* Flag values: */
+#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
+	__u32 size;
+	union {
+		__u64 u64;
+		struct vhost_vring_state state;
+		struct vhost_vring_addr addr;
+		struct vhost_memory memory;
+	} payload;
+};
+
+#define VHOST_VFIO_OP_HDR_SIZE \
+		((unsigned long)&((struct vhost_vfio_op *)NULL)->payload)
+#define VHOST_VFIO_OP_PAYLOAD_MAX_SIZE 1024 /* FIXME TBD */
+
 #endif
-- 
2.11.0


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-02 15:23 ` [virtio-dev] " Tiwei Bie
@ 2018-04-10  2:52   ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  2:52 UTC (permalink / raw)
  To: Tiwei Bie, mst, alex.williamson, ddutile, alexander.h.duyck
  Cc: virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月02日 23:23, Tiwei Bie wrote:
> This patch introduces a mdev (mediated device) based hardware
> vhost backend. This backend is an abstraction of the various
> hardware vhost accelerators (potentially any device that uses
> virtio ring can be used as a vhost accelerator). Some generic
> mdev parent ops are provided for accelerator drivers to support
> generating mdev instances.
>
> What's this
> ===========
>
> The idea is that we can setup a virtio ring compatible device
> with the messages available at the vhost-backend. Originally,
> these messages are used to implement a software vhost backend,
> but now we will use these messages to setup a virtio ring
> compatible hardware device. Then the hardware device will be
> able to work with the guest virtio driver in the VM just like
> what the software backend does. That is to say, we can implement
> a hardware based vhost backend in QEMU, and any virtio ring
> compatible devices potentially can be used with this backend.
> (We also call it vDPA -- vhost Data Path Acceleration).
>
> One problem is that, different virtio ring compatible devices
> may have different device interfaces. That is to say, we will
> need different drivers in QEMU. It could be troublesome. And
> that's what this patch trying to fix. The idea behind this
> patch is very simple: mdev is a standard way to emulate device
> in kernel.

So you just move the abstraction layer from qemu to kernel, and you 
still need different drivers in kernel for different device interfaces 
of accelerators. This looks even more complex than leaving it in qemu. 
As you said, another idea is to implement userspace vhost backend for 
accelerators which seems easier and could co-work with other parts of 
qemu without inventing new type of messages.

Need careful thought here to seek a best solution here.

>   So we defined a standard device based on mdev, which
> is able to accept vhost messages. When the mdev emulation code
> (i.e. the generic mdev parent ops provided by this patch) gets
> vhost messages, it will parse and deliver them to accelerator
> drivers. Drivers can use these messages to setup accelerators.
>
> That is to say, the generic mdev parent ops (e.g. read()/write()/
> ioctl()/...) will be provided for accelerator drivers to register
> accelerators as mdev parent devices. And each accelerator device
> will support generating standard mdev instance(s).
>
> With this standard device interface, we will be able to just
> develop one userspace driver to implement the hardware based
> vhost backend in QEMU.
>
> Difference between vDPA and PCI passthru
> ========================================
>
> The key difference between vDPA and PCI passthru is that, in
> vDPA only the data path of the device (e.g. DMA ring, notify
> region and queue interrupt) is pass-throughed to the VM, the
> device control path (e.g. PCI configuration space and MMIO
> regions) is still defined and emulated by QEMU.
>
> The benefits of keeping virtio device emulation in QEMU compared
> with virtio device PCI passthru include (but not limit to):
>
> - consistent device interface for guest OS in the VM;
> - max flexibility on the hardware design, especially the
>    accelerator for each vhost backend doesn't have to be a
>    full PCI device;
> - leveraging the existing virtio live-migration framework;
>
> The interface of this mdev based device
> =======================================
>
> 1. BAR0
>
> The MMIO region described by BAR0 is the main control
> interface. Messages will be written to or read from
> this region.
>
> The message type is determined by the `request` field
> in message header. The message size is encoded in the
> message header too. The message format looks like this:
>
> struct vhost_vfio_op {
> 	__u64 request;
> 	__u32 flags;
> 	/* Flag values: */
> #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> 	__u32 size;
> 	union {
> 		__u64 u64;
> 		struct vhost_vring_state state;
> 		struct vhost_vring_addr addr;
> 		struct vhost_memory memory;
> 	} payload;
> };
>
> The existing vhost-kernel ioctl cmds are reused as
> the message requests in above structure.
>
> Each message will be written to or read from this
> region at offset 0:
>
> int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	int ret;
>
> 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count)
> 		return -1;
>
> 	return 0;
> }
>
> int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	uint64_t request = op->request;
> 	int ret;
>
> 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count || request != op->request)
> 		return -1;
>
> 	return 0;
> }
>
> It's quite straightforward to set things to the device.
> Just need to write the message to device directly:
>
> int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> {
> 	struct vhost_vfio_op op;
>
> 	op.request = VHOST_SET_FEATURES;
> 	op.flags = 0;
> 	op.size = sizeof(features);
> 	op.payload.u64 = features;
>
> 	return vhost_vfio_write(dev, &op);
> }
>
> To get things from the device, two steps are needed.
> Take VHOST_GET_FEATURE as an example:
>
> int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> {
> 	struct vhost_vfio_op op;
> 	int ret;
>
> 	op.request = VHOST_GET_FEATURES;
> 	op.flags = VHOST_VFIO_NEED_REPLY;
> 	op.size = 0;
>
> 	/* Just need to write the header */
> 	ret = vhost_vfio_write(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	/* `op` wasn't changed during write */
> 	op.flags = 0;
> 	op.size = sizeof(*features);
>
> 	ret = vhost_vfio_read(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	*features = op.payload.u64;
> out:
> 	return ret;
> }
>
> 2. BAR1 (mmap-able)
>
> The MMIO region described by BAR1 will be used to notify the
> device.
>
> Each queue will has a page for notification, and it can be
> mapped to VM (if hardware also supports), and the virtio
> driver in the VM will be able to notify the device directly.
>
> The MMIO region described by BAR1 is also write-able. If the
> accelerator's notification register(s) cannot be mapped to the
> VM, write() can also be used to notify the device. Something
> like this:
>
> void notify_relay(void *opaque)
> {
> 	......
> 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
>
> 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> 			vfio->bar1_offset + offset);
> 	......
> }
>
> Other BARs are reserved.
>
> 3. VFIO interrupt ioctl API
>
> VFIO interrupt ioctl API is used to setup device interrupts.
> IRQ-bypass will also be supported.
>
> Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
>
> The API for drivers to provide mdev instances
> =============================================
>
> The read()/write()/ioctl()/mmap()/open()/release() mdev
> parent ops have been provided for accelerators' drivers
> to provide mdev instances.
>
> ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> 		  size_t count, loff_t *ppos);
> ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> 		   size_t count, loff_t *ppos);
> long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> int vdpa_open(struct mdev_device *mdev);
> void vdpa_close(struct mdev_device *mdev);
>
> Each accelerator driver just needs to implement its own
> create()/remove() ops, and provide a vdpa device ops
> which will be called by the generic mdev emulation code.
>
> Currently, the vdpa device ops are defined as:
>
> typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
>
> struct vdpa_device_ops {
> 	vdpa_start_device_t		start;
> 	vdpa_stop_device_t		stop;
> 	vdpa_dma_map_t			dma_map;
> 	vdpa_dma_unmap_t		dma_unmap;
> 	vdpa_set_eventfd_t		set_eventfd;
> 	vdpa_supported_features_t	supported_features;
> 	vdpa_notify_device_t		notify;
> 	vdpa_get_notify_addr_t		get_notify_addr;
> };
>
> struct vdpa_dev {
> 	struct mdev_device *mdev;
> 	struct mutex ops_lock;
> 	u8 vconfig[VDPA_CONFIG_SIZE];
> 	int nr_vring;
> 	u64 features;
> 	u64 state;
> 	struct vhost_memory *mem_table;
> 	bool pending_reply;
> 	struct vhost_vfio_op pending;
> 	const struct vdpa_device_ops *ops;
> 	void *private;
> 	int max_vrings;
> 	struct vdpa_vring_info vring_info[0];
> };
>
> struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> 			    int max_vrings);
> void vdpa_free(struct vdpa_dev *vdpa);
>
> A simple example
> ================
>
> # Query the number of available mdev instances
> $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
>
> # Create a mdev instance
> $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
>
> # Launch QEMU with a virtio-net device
> $ qemu \
> 	...... \
> 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> 	-device virtio-net-pci,netdev=$ID
>
> -------- END --------
>
> Most of above words will be refined and moved to a doc in
> the formal patch. In this RFC, all introductions and code
> are gathered in this patch, the idea is to make it easier
> to find all the relevant information. Anyone who wants to
> comment could use inline comment and just keep the relevant
> parts. Sorry for the big RFC patch..
>
> This patch is just a RFC for now, and something is still
> missing or needs to be refined. But it's never too early
> to hear the thoughts from the community. So any comments
> would be appreciated! Thanks! :-)

I don't see vhost_vfio_write() and other above functions in the patch. 
Looks like some part of the patch is missed, it would be better to post 
a complete series with an example driver (vDPA) to get a full picture.

Thanks

> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>   drivers/vhost/Makefile     |   3 +
>   drivers/vhost/vdpa.c       | 805 +++++++++++++++++++++++++++++++++++++++++++++
>   include/linux/vdpa_mdev.h  |  76 +++++
>   include/uapi/linux/vhost.h |  26 ++
>   4 files changed, 910 insertions(+)
>   create mode 100644 drivers/vhost/vdpa.c
>   create mode 100644 include/linux/vdpa_mdev.h
>
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index 6c6df24f770c..7d185e083140 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -11,3 +11,6 @@ vhost_vsock-y := vsock.o
>   obj-$(CONFIG_VHOST_RING) += vringh.o
>   
>   obj-$(CONFIG_VHOST)	+= vhost.o
> +
> +obj-m += vhost_vdpa.o  # FIXME: add an option
> +vhost_vdpa-y := vdpa.o
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> new file mode 100644
> index 000000000000..aa19c266ea19
> --- /dev/null
> +++ b/drivers/vhost/vdpa.c
> @@ -0,0 +1,805 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/vfio.h>
> +#include <linux/vhost.h>
> +#include <linux/mdev.h>
> +#include <linux/vdpa_mdev.h>
> +
> +#define VDPA_BAR0_SIZE		0x1000000 // TBD
> +
> +#define VDPA_VFIO_PCI_OFFSET_SHIFT	40
> +#define VDPA_VFIO_PCI_OFFSET_MASK \
> +		((1ULL << VDPA_VFIO_PCI_OFFSET_SHIFT) - 1)
> +#define VDPA_VFIO_PCI_OFFSET_TO_INDEX(offset) \
> +		((offset) >> VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_INDEX_TO_OFFSET(index) \
> +		((u64)(index) << VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_BAR_OFFSET(offset) \
> +		((offset) & VDPA_VFIO_PCI_OFFSET_MASK)
> +
> +#define STORE_LE16(addr, val)	(*(u16 *)(addr) = cpu_to_le16(val))
> +#define STORE_LE32(addr, val)	(*(u32 *)(addr) = cpu_to_le32(val))
> +
> +static void vdpa_create_config_space(struct vdpa_dev *vdpa)
> +{
> +	/* PCI device ID / vendor ID */
> +	STORE_LE32(&vdpa->vconfig[0x0], 0xffffffff); // FIXME TBD
> +
> +	/* Programming interface class */
> +	vdpa->vconfig[0x9] = 0x00;
> +
> +	/* Sub class */
> +	vdpa->vconfig[0xa] = 0x00;
> +
> +	/* Base class */
> +	vdpa->vconfig[0xb] = 0x02;
> +
> +	// FIXME TBD
> +}
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings)
> +{
> +	struct vdpa_dev *vdpa;
> +	size_t size;
> +
> +	size = sizeof(struct vdpa_dev) + max_vrings *
> +			sizeof(struct vdpa_vring_info);
> +
> +	vdpa = kzalloc(size, GFP_KERNEL);
> +	if (vdpa == NULL)
> +		return NULL;
> +
> +	mutex_init(&vdpa->ops_lock);
> +
> +	vdpa->mdev = mdev;
> +	vdpa->private = private;
> +	vdpa->max_vrings = max_vrings;
> +
> +	vdpa_create_config_space(vdpa);
> +
> +	return vdpa;
> +}
> +EXPORT_SYMBOL(vdpa_alloc);
> +
> +void vdpa_free(struct vdpa_dev *vdpa)
> +{
> +	struct mdev_device *mdev;
> +
> +	mdev = vdpa->mdev;
> +
> +	vdpa->ops->stop(vdpa);
> +	vdpa->ops->dma_unmap(vdpa);
> +
> +	mdev_set_drvdata(mdev, NULL);
> +
> +	mutex_destroy(&vdpa->ops_lock);
> +
> +	kfree(vdpa->mem_table);
> +	kfree(vdpa);
> +}
> +EXPORT_SYMBOL(vdpa_free);
> +
> +static ssize_t vdpa_handle_pcicfg_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +
> +	if (count + offset > VDPA_CONFIG_SIZE)
> +		return -EINVAL;
> +
> +	if (copy_to_user(buf, (vdpa->vconfig + offset), count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t vdpa_handle_bar0_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (!vdpa->pending_reply) {
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	vdpa->pending_reply = false;
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	op->request = vdpa->pending.request;
> +
> +	switch (op->request) {
> +	case VHOST_GET_VRING_BASE:
> +		op->payload.state = vdpa->pending.payload.state;
> +		op->size = sizeof(op->payload.state);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		op->payload.u64 = vdpa->pending.payload.u64;
> +		op->size = sizeof(op->payload.u64);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_to_user(buf, op, count)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_read(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_read(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_read);
> +
> +static ssize_t vdpa_handle_pcicfg_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	return count;
> +}
> +
> +static int vhost_set_mem_table(struct mdev_device *mdev,
> +		struct vhost_memory *mem)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_memory *mem_table;
> +	size_t size;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	// FIXME fix this
> +	if (vdpa->state != VHOST_DEVICE_S_STOPPED)
> +		return -EBUSY;
> +
> +	size = sizeof(*mem) + mem->nregions * sizeof(*mem->regions);
> +
> +	mem_table = kzalloc(size, GFP_KERNEL);
> +	if (mem_table == NULL)
> +		return -ENOMEM;
> +
> +	memcpy(mem_table, mem, size);
> +
> +	kfree(vdpa->mem_table);
> +
> +	vdpa->mem_table = mem_table;
> +
> +	vdpa->ops->dma_unmap(vdpa);
> +	vdpa->ops->dma_map(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_addr(struct mdev_device *mdev,
> +		struct vhost_vring_addr *addr)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = addr->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	/* FIXME to be fixed */
> +	if (qid >= vdpa->nr_vring)
> +		vdpa->nr_vring = qid + 1;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->desc_user_addr = addr->desc_user_addr;
> +	vring->used_user_addr = addr->used_user_addr;
> +	vring->avail_user_addr = addr->avail_user_addr;
> +	vring->log_guest_addr = addr->log_guest_addr;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_num(struct mdev_device *mdev,
> +		struct vhost_vring_state *num)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = num->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->size = num->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = base->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->base = base->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_VRING_BASE;
> +	vdpa->pending.payload.state.index = base->index;
> +
> +	// FIXME to be implemented
> +
> +	return 0;
> +}
> +
> +static int vhost_set_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->features = *features;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_FEATURES;
> +	vdpa->pending.payload.u64 =
> +		vdpa->ops->supported_features(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_reset_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_set_state(struct mdev_device *mdev, u64 *state)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (*state >= VHOST_DEVICE_S_MAX)
> +		return -EINVAL;
> +
> +	if (vdpa->state == *state)
> +		return 0;
> +
> +	vdpa->state = *state;
> +
> +	switch (vdpa->state) {
> +	case VHOST_DEVICE_S_RUNNING:
> +		vdpa->ops->start(vdpa);
> +		break;
> +	case VHOST_DEVICE_S_STOPPED:
> +		vdpa->ops->stop(vdpa);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static ssize_t vdpa_handle_bar0_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (count < VHOST_VFIO_OP_HDR_SIZE) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (copy_from_user(op, buf, VHOST_VFIO_OP_HDR_SIZE)) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size > VHOST_VFIO_OP_PAYLOAD_MAX_SIZE ||
> +	    op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_from_user(&op->payload, buf + VHOST_VFIO_OP_HDR_SIZE,
> +			   op->size)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	switch (op->request) {
> +	case VHOST_SET_LOG_BASE:
> +		break;
> +	case VHOST_SET_MEM_TABLE:
> +		vhost_set_mem_table(mdev, &op->payload.memory);
> +		break;
> +	case VHOST_SET_VRING_ADDR:
> +		vhost_set_vring_addr(mdev, &op->payload.addr);
> +		break;
> +	case VHOST_SET_VRING_NUM:
> +		vhost_set_vring_num(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_VRING_BASE:
> +		vhost_set_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_GET_VRING_BASE:
> +		vhost_get_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_FEATURES:
> +		vhost_set_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		vhost_get_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_SET_OWNER:
> +		vhost_set_owner(mdev);
> +		break;
> +	case VHOST_RESET_OWNER:
> +		vhost_reset_owner(mdev);
> +		break;
> +	case VHOST_DEVICE_SET_STATE:
> +		vhost_set_state(mdev, &op->payload.u64);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +static ssize_t vdpa_handle_bar1_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (count < sizeof(qid))
> +		return -EINVAL;
> +
> +	if (copy_from_user(&qid, buf, sizeof(qid)))
> +		return -EINVAL;
> +
> +	vdpa->ops->notify(vdpa, qid);
> +
> +	return count;
> +}
> +
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		done = vdpa_handle_bar1_write(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_write);
> +
> +static int vdpa_get_region_info(struct mdev_device *mdev,
> +				struct vfio_region_info *region_info,
> +				u16 *cap_type_id, void **cap_type)
> +{
> +	struct vdpa_dev *vdpa;
> +	u32 bar_index;
> +	u64 size = 0;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	bar_index = region_info->index;
> +	if (bar_index >= VFIO_PCI_NUM_REGIONS)
> +		return -EINVAL;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	switch (bar_index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		size = VDPA_CONFIG_SIZE;
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		size = VDPA_BAR0_SIZE;
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		size = (u64)vdpa->max_vrings << PAGE_SHIFT;
> +		break;
> +	default:
> +		size = 0;
> +		break;
> +	}
> +
> +	// FIXME: mark BAR1 as mmap-able (VFIO_REGION_INFO_FLAG_MMAP)
> +	region_info->size = size;
> +	region_info->offset = VDPA_VFIO_PCI_INDEX_TO_OFFSET(bar_index);
> +	region_info->flags = VFIO_REGION_INFO_FLAG_READ |
> +		VFIO_REGION_INFO_FLAG_WRITE;
> +	mutex_unlock(&vdpa->ops_lock);
> +	return 0;
> +}
> +
> +static int vdpa_reset(struct mdev_device *mdev)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_device_info(struct mdev_device *mdev,
> +				struct vfio_device_info *dev_info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	dev_info->flags = VFIO_DEVICE_FLAGS_PCI;
> +	dev_info->num_regions = VFIO_PCI_NUM_REGIONS;
> +	dev_info->num_irqs = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_irq_info(struct mdev_device *mdev,
> +			     struct vfio_irq_info *info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (info->index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	info->flags = VFIO_IRQ_INFO_EVENTFD;
> +	info->count = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_set_irqs(struct mdev_device *mdev, uint32_t flags,
> +			 unsigned int index, unsigned int start,
> +			 unsigned int count, void *data)
> +{
> +	struct vdpa_dev *vdpa;
> +	int *fd = data, i;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	if (index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	for (i = 0; i < count; i++)
> +		vdpa->ops->set_eventfd(vdpa, start + i,
> +			(flags & VFIO_IRQ_SET_DATA_EVENTFD) ? fd[i] : -1);
> +
> +	return 0;
> +}
> +
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
> +{
> +	int ret = 0;
> +	unsigned long minsz;
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	switch (cmd) {
> +	case VFIO_DEVICE_GET_INFO:
> +	{
> +		struct vfio_device_info info;
> +
> +		minsz = offsetofend(struct vfio_device_info, num_irqs);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_device_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_REGION_INFO:
> +	{
> +		struct vfio_region_info info;
> +		u16 cap_type_id = 0;
> +		void *cap_type = NULL;
> +
> +		minsz = offsetofend(struct vfio_region_info, offset);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_region_info(mdev, &info, &cap_type_id,
> +					   &cap_type);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_IRQ_INFO:
> +	{
> +		struct vfio_irq_info info;
> +
> +		minsz = offsetofend(struct vfio_irq_info, count);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz || info.index >= vdpa->max_vrings)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_irq_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_SET_IRQS:
> +	{
> +		struct vfio_irq_set hdr;
> +		size_t data_size = 0;
> +		u8 *data = NULL;
> +
> +		minsz = offsetofend(struct vfio_irq_set, count);
> +
> +		if (copy_from_user(&hdr, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		ret = vfio_set_irqs_validate_and_prepare(&hdr, vdpa->max_vrings,
> +							 VFIO_PCI_NUM_IRQS,
> +							 &data_size);
> +		if (ret)
> +			return ret;
> +
> +		if (data_size) {
> +			data = memdup_user((void __user *)(arg + minsz),
> +					   data_size);
> +			if (IS_ERR(data))
> +				return PTR_ERR(data);
> +		}
> +
> +		ret = vdpa_set_irqs(mdev, hdr.flags, hdr.index, hdr.start,
> +				hdr.count, data);
> +
> +		kfree(data);
> +		return ret;
> +	}
> +	case VFIO_DEVICE_RESET:
> +		return vdpa_reset(mdev);
> +	}
> +	return -ENOTTY;
> +}
> +EXPORT_SYMBOL(vdpa_ioctl);
> +
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
> +{
> +	// FIXME: to be implemented
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_mmap);
> +
> +int vdpa_open(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_open);
> +
> +void vdpa_close(struct mdev_device *mdev)
> +{
> +}
> +EXPORT_SYMBOL(vdpa_close);
> +
> +MODULE_VERSION("0.0.0");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Hardware virtio accelerator abstraction");
> diff --git a/include/linux/vdpa_mdev.h b/include/linux/vdpa_mdev.h
> new file mode 100644
> index 000000000000..8414e86ba4b8
> --- /dev/null
> +++ b/include/linux/vdpa_mdev.h
> @@ -0,0 +1,76 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#ifndef VDPA_MDEV_H
> +#define VDPA_MDEV_H
> +
> +#define VDPA_CONFIG_SIZE 0xff
> +
> +struct mdev_device;
> +struct vdpa_dev;
> +
> +/*
> + * XXX: Any comments about the vDPA API design for drivers
> + *      would be appreciated!
> + */
> +
> +typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> +typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> +typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> +typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> +
> +struct vdpa_device_ops {
> +	vdpa_start_device_t		start;
> +	vdpa_stop_device_t		stop;
> +	vdpa_dma_map_t			dma_map;
> +	vdpa_dma_unmap_t		dma_unmap;
> +	vdpa_set_eventfd_t		set_eventfd;
> +	vdpa_supported_features_t	supported_features;
> +	vdpa_notify_device_t		notify;
> +	vdpa_get_notify_addr_t		get_notify_addr;
> +};
> +
> +struct vdpa_vring_info {
> +	u64 desc_user_addr;
> +	u64 used_user_addr;
> +	u64 avail_user_addr;
> +	u64 log_guest_addr;
> +	u16 size;
> +	u16 base;
> +};
> +
> +struct vdpa_dev {
> +	struct mdev_device *mdev;
> +	struct mutex ops_lock;
> +	u8 vconfig[VDPA_CONFIG_SIZE];
> +	int nr_vring;
> +	u64 features;
> +	u64 state;
> +	struct vhost_memory *mem_table;
> +	bool pending_reply;
> +	struct vhost_vfio_op pending;
> +	const struct vdpa_device_ops *ops;
> +	void *private;
> +	int max_vrings;
> +	struct vdpa_vring_info vring_info[0];
> +};
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings);
> +void vdpa_free(struct vdpa_dev *vdpa);
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos);
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos);
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> +int vdpa_open(struct mdev_device *mdev);
> +void vdpa_close(struct mdev_device *mdev);
> +
> +#endif /* VDPA_MDEV_H */
> diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> index c51f8e5cc608..92a1ca0b5fe1 100644
> --- a/include/uapi/linux/vhost.h
> +++ b/include/uapi/linux/vhost.h
> @@ -207,4 +207,30 @@ struct vhost_scsi_target {
>   #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
>   #define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
>   
> +/* VHOST_DEVICE specific defines */
> +
> +#define VHOST_DEVICE_SET_STATE _IOW(VHOST_VIRTIO, 0x70, __u64)
> +
> +#define VHOST_DEVICE_S_STOPPED 0
> +#define VHOST_DEVICE_S_RUNNING 1
> +#define VHOST_DEVICE_S_MAX     2
> +
> +struct vhost_vfio_op {
> +	__u64 request;
> +	__u32 flags;
> +	/* Flag values: */
> +#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> +	__u32 size;
> +	union {
> +		__u64 u64;
> +		struct vhost_vring_state state;
> +		struct vhost_vring_addr addr;
> +		struct vhost_memory memory;
> +	} payload;
> +};
> +
> +#define VHOST_VFIO_OP_HDR_SIZE \
> +		((unsigned long)&((struct vhost_vfio_op *)NULL)->payload)
> +#define VHOST_VFIO_OP_PAYLOAD_MAX_SIZE 1024 /* FIXME TBD */
> +
>   #endif

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-02 15:23 ` [virtio-dev] " Tiwei Bie
  (?)
  (?)
@ 2018-04-10  2:52 ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  2:52 UTC (permalink / raw)
  To: Tiwei Bie, mst, alex.williamson, ddutile, alexander.h.duyck
  Cc: jianfeng.tan, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, zhihong.wang



On 2018年04月02日 23:23, Tiwei Bie wrote:
> This patch introduces a mdev (mediated device) based hardware
> vhost backend. This backend is an abstraction of the various
> hardware vhost accelerators (potentially any device that uses
> virtio ring can be used as a vhost accelerator). Some generic
> mdev parent ops are provided for accelerator drivers to support
> generating mdev instances.
>
> What's this
> ===========
>
> The idea is that we can setup a virtio ring compatible device
> with the messages available at the vhost-backend. Originally,
> these messages are used to implement a software vhost backend,
> but now we will use these messages to setup a virtio ring
> compatible hardware device. Then the hardware device will be
> able to work with the guest virtio driver in the VM just like
> what the software backend does. That is to say, we can implement
> a hardware based vhost backend in QEMU, and any virtio ring
> compatible devices potentially can be used with this backend.
> (We also call it vDPA -- vhost Data Path Acceleration).
>
> One problem is that, different virtio ring compatible devices
> may have different device interfaces. That is to say, we will
> need different drivers in QEMU. It could be troublesome. And
> that's what this patch trying to fix. The idea behind this
> patch is very simple: mdev is a standard way to emulate device
> in kernel.

So you just move the abstraction layer from qemu to kernel, and you 
still need different drivers in kernel for different device interfaces 
of accelerators. This looks even more complex than leaving it in qemu. 
As you said, another idea is to implement userspace vhost backend for 
accelerators which seems easier and could co-work with other parts of 
qemu without inventing new type of messages.

Need careful thought here to seek a best solution here.

>   So we defined a standard device based on mdev, which
> is able to accept vhost messages. When the mdev emulation code
> (i.e. the generic mdev parent ops provided by this patch) gets
> vhost messages, it will parse and deliver them to accelerator
> drivers. Drivers can use these messages to setup accelerators.
>
> That is to say, the generic mdev parent ops (e.g. read()/write()/
> ioctl()/...) will be provided for accelerator drivers to register
> accelerators as mdev parent devices. And each accelerator device
> will support generating standard mdev instance(s).
>
> With this standard device interface, we will be able to just
> develop one userspace driver to implement the hardware based
> vhost backend in QEMU.
>
> Difference between vDPA and PCI passthru
> ========================================
>
> The key difference between vDPA and PCI passthru is that, in
> vDPA only the data path of the device (e.g. DMA ring, notify
> region and queue interrupt) is pass-throughed to the VM, the
> device control path (e.g. PCI configuration space and MMIO
> regions) is still defined and emulated by QEMU.
>
> The benefits of keeping virtio device emulation in QEMU compared
> with virtio device PCI passthru include (but not limit to):
>
> - consistent device interface for guest OS in the VM;
> - max flexibility on the hardware design, especially the
>    accelerator for each vhost backend doesn't have to be a
>    full PCI device;
> - leveraging the existing virtio live-migration framework;
>
> The interface of this mdev based device
> =======================================
>
> 1. BAR0
>
> The MMIO region described by BAR0 is the main control
> interface. Messages will be written to or read from
> this region.
>
> The message type is determined by the `request` field
> in message header. The message size is encoded in the
> message header too. The message format looks like this:
>
> struct vhost_vfio_op {
> 	__u64 request;
> 	__u32 flags;
> 	/* Flag values: */
> #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> 	__u32 size;
> 	union {
> 		__u64 u64;
> 		struct vhost_vring_state state;
> 		struct vhost_vring_addr addr;
> 		struct vhost_memory memory;
> 	} payload;
> };
>
> The existing vhost-kernel ioctl cmds are reused as
> the message requests in above structure.
>
> Each message will be written to or read from this
> region at offset 0:
>
> int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	int ret;
>
> 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count)
> 		return -1;
>
> 	return 0;
> }
>
> int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	uint64_t request = op->request;
> 	int ret;
>
> 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count || request != op->request)
> 		return -1;
>
> 	return 0;
> }
>
> It's quite straightforward to set things to the device.
> Just need to write the message to device directly:
>
> int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> {
> 	struct vhost_vfio_op op;
>
> 	op.request = VHOST_SET_FEATURES;
> 	op.flags = 0;
> 	op.size = sizeof(features);
> 	op.payload.u64 = features;
>
> 	return vhost_vfio_write(dev, &op);
> }
>
> To get things from the device, two steps are needed.
> Take VHOST_GET_FEATURE as an example:
>
> int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> {
> 	struct vhost_vfio_op op;
> 	int ret;
>
> 	op.request = VHOST_GET_FEATURES;
> 	op.flags = VHOST_VFIO_NEED_REPLY;
> 	op.size = 0;
>
> 	/* Just need to write the header */
> 	ret = vhost_vfio_write(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	/* `op` wasn't changed during write */
> 	op.flags = 0;
> 	op.size = sizeof(*features);
>
> 	ret = vhost_vfio_read(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	*features = op.payload.u64;
> out:
> 	return ret;
> }
>
> 2. BAR1 (mmap-able)
>
> The MMIO region described by BAR1 will be used to notify the
> device.
>
> Each queue will has a page for notification, and it can be
> mapped to VM (if hardware also supports), and the virtio
> driver in the VM will be able to notify the device directly.
>
> The MMIO region described by BAR1 is also write-able. If the
> accelerator's notification register(s) cannot be mapped to the
> VM, write() can also be used to notify the device. Something
> like this:
>
> void notify_relay(void *opaque)
> {
> 	......
> 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
>
> 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> 			vfio->bar1_offset + offset);
> 	......
> }
>
> Other BARs are reserved.
>
> 3. VFIO interrupt ioctl API
>
> VFIO interrupt ioctl API is used to setup device interrupts.
> IRQ-bypass will also be supported.
>
> Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
>
> The API for drivers to provide mdev instances
> =============================================
>
> The read()/write()/ioctl()/mmap()/open()/release() mdev
> parent ops have been provided for accelerators' drivers
> to provide mdev instances.
>
> ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> 		  size_t count, loff_t *ppos);
> ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> 		   size_t count, loff_t *ppos);
> long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> int vdpa_open(struct mdev_device *mdev);
> void vdpa_close(struct mdev_device *mdev);
>
> Each accelerator driver just needs to implement its own
> create()/remove() ops, and provide a vdpa device ops
> which will be called by the generic mdev emulation code.
>
> Currently, the vdpa device ops are defined as:
>
> typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
>
> struct vdpa_device_ops {
> 	vdpa_start_device_t		start;
> 	vdpa_stop_device_t		stop;
> 	vdpa_dma_map_t			dma_map;
> 	vdpa_dma_unmap_t		dma_unmap;
> 	vdpa_set_eventfd_t		set_eventfd;
> 	vdpa_supported_features_t	supported_features;
> 	vdpa_notify_device_t		notify;
> 	vdpa_get_notify_addr_t		get_notify_addr;
> };
>
> struct vdpa_dev {
> 	struct mdev_device *mdev;
> 	struct mutex ops_lock;
> 	u8 vconfig[VDPA_CONFIG_SIZE];
> 	int nr_vring;
> 	u64 features;
> 	u64 state;
> 	struct vhost_memory *mem_table;
> 	bool pending_reply;
> 	struct vhost_vfio_op pending;
> 	const struct vdpa_device_ops *ops;
> 	void *private;
> 	int max_vrings;
> 	struct vdpa_vring_info vring_info[0];
> };
>
> struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> 			    int max_vrings);
> void vdpa_free(struct vdpa_dev *vdpa);
>
> A simple example
> ================
>
> # Query the number of available mdev instances
> $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
>
> # Create a mdev instance
> $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
>
> # Launch QEMU with a virtio-net device
> $ qemu \
> 	...... \
> 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> 	-device virtio-net-pci,netdev=$ID
>
> -------- END --------
>
> Most of above words will be refined and moved to a doc in
> the formal patch. In this RFC, all introductions and code
> are gathered in this patch, the idea is to make it easier
> to find all the relevant information. Anyone who wants to
> comment could use inline comment and just keep the relevant
> parts. Sorry for the big RFC patch..
>
> This patch is just a RFC for now, and something is still
> missing or needs to be refined. But it's never too early
> to hear the thoughts from the community. So any comments
> would be appreciated! Thanks! :-)

I don't see vhost_vfio_write() and other above functions in the patch. 
Looks like some part of the patch is missed, it would be better to post 
a complete series with an example driver (vDPA) to get a full picture.

Thanks

> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>   drivers/vhost/Makefile     |   3 +
>   drivers/vhost/vdpa.c       | 805 +++++++++++++++++++++++++++++++++++++++++++++
>   include/linux/vdpa_mdev.h  |  76 +++++
>   include/uapi/linux/vhost.h |  26 ++
>   4 files changed, 910 insertions(+)
>   create mode 100644 drivers/vhost/vdpa.c
>   create mode 100644 include/linux/vdpa_mdev.h
>
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index 6c6df24f770c..7d185e083140 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -11,3 +11,6 @@ vhost_vsock-y := vsock.o
>   obj-$(CONFIG_VHOST_RING) += vringh.o
>   
>   obj-$(CONFIG_VHOST)	+= vhost.o
> +
> +obj-m += vhost_vdpa.o  # FIXME: add an option
> +vhost_vdpa-y := vdpa.o
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> new file mode 100644
> index 000000000000..aa19c266ea19
> --- /dev/null
> +++ b/drivers/vhost/vdpa.c
> @@ -0,0 +1,805 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/vfio.h>
> +#include <linux/vhost.h>
> +#include <linux/mdev.h>
> +#include <linux/vdpa_mdev.h>
> +
> +#define VDPA_BAR0_SIZE		0x1000000 // TBD
> +
> +#define VDPA_VFIO_PCI_OFFSET_SHIFT	40
> +#define VDPA_VFIO_PCI_OFFSET_MASK \
> +		((1ULL << VDPA_VFIO_PCI_OFFSET_SHIFT) - 1)
> +#define VDPA_VFIO_PCI_OFFSET_TO_INDEX(offset) \
> +		((offset) >> VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_INDEX_TO_OFFSET(index) \
> +		((u64)(index) << VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_BAR_OFFSET(offset) \
> +		((offset) & VDPA_VFIO_PCI_OFFSET_MASK)
> +
> +#define STORE_LE16(addr, val)	(*(u16 *)(addr) = cpu_to_le16(val))
> +#define STORE_LE32(addr, val)	(*(u32 *)(addr) = cpu_to_le32(val))
> +
> +static void vdpa_create_config_space(struct vdpa_dev *vdpa)
> +{
> +	/* PCI device ID / vendor ID */
> +	STORE_LE32(&vdpa->vconfig[0x0], 0xffffffff); // FIXME TBD
> +
> +	/* Programming interface class */
> +	vdpa->vconfig[0x9] = 0x00;
> +
> +	/* Sub class */
> +	vdpa->vconfig[0xa] = 0x00;
> +
> +	/* Base class */
> +	vdpa->vconfig[0xb] = 0x02;
> +
> +	// FIXME TBD
> +}
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings)
> +{
> +	struct vdpa_dev *vdpa;
> +	size_t size;
> +
> +	size = sizeof(struct vdpa_dev) + max_vrings *
> +			sizeof(struct vdpa_vring_info);
> +
> +	vdpa = kzalloc(size, GFP_KERNEL);
> +	if (vdpa == NULL)
> +		return NULL;
> +
> +	mutex_init(&vdpa->ops_lock);
> +
> +	vdpa->mdev = mdev;
> +	vdpa->private = private;
> +	vdpa->max_vrings = max_vrings;
> +
> +	vdpa_create_config_space(vdpa);
> +
> +	return vdpa;
> +}
> +EXPORT_SYMBOL(vdpa_alloc);
> +
> +void vdpa_free(struct vdpa_dev *vdpa)
> +{
> +	struct mdev_device *mdev;
> +
> +	mdev = vdpa->mdev;
> +
> +	vdpa->ops->stop(vdpa);
> +	vdpa->ops->dma_unmap(vdpa);
> +
> +	mdev_set_drvdata(mdev, NULL);
> +
> +	mutex_destroy(&vdpa->ops_lock);
> +
> +	kfree(vdpa->mem_table);
> +	kfree(vdpa);
> +}
> +EXPORT_SYMBOL(vdpa_free);
> +
> +static ssize_t vdpa_handle_pcicfg_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +
> +	if (count + offset > VDPA_CONFIG_SIZE)
> +		return -EINVAL;
> +
> +	if (copy_to_user(buf, (vdpa->vconfig + offset), count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t vdpa_handle_bar0_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (!vdpa->pending_reply) {
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	vdpa->pending_reply = false;
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	op->request = vdpa->pending.request;
> +
> +	switch (op->request) {
> +	case VHOST_GET_VRING_BASE:
> +		op->payload.state = vdpa->pending.payload.state;
> +		op->size = sizeof(op->payload.state);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		op->payload.u64 = vdpa->pending.payload.u64;
> +		op->size = sizeof(op->payload.u64);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_to_user(buf, op, count)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_read(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_read(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_read);
> +
> +static ssize_t vdpa_handle_pcicfg_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	return count;
> +}
> +
> +static int vhost_set_mem_table(struct mdev_device *mdev,
> +		struct vhost_memory *mem)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_memory *mem_table;
> +	size_t size;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	// FIXME fix this
> +	if (vdpa->state != VHOST_DEVICE_S_STOPPED)
> +		return -EBUSY;
> +
> +	size = sizeof(*mem) + mem->nregions * sizeof(*mem->regions);
> +
> +	mem_table = kzalloc(size, GFP_KERNEL);
> +	if (mem_table == NULL)
> +		return -ENOMEM;
> +
> +	memcpy(mem_table, mem, size);
> +
> +	kfree(vdpa->mem_table);
> +
> +	vdpa->mem_table = mem_table;
> +
> +	vdpa->ops->dma_unmap(vdpa);
> +	vdpa->ops->dma_map(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_addr(struct mdev_device *mdev,
> +		struct vhost_vring_addr *addr)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = addr->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	/* FIXME to be fixed */
> +	if (qid >= vdpa->nr_vring)
> +		vdpa->nr_vring = qid + 1;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->desc_user_addr = addr->desc_user_addr;
> +	vring->used_user_addr = addr->used_user_addr;
> +	vring->avail_user_addr = addr->avail_user_addr;
> +	vring->log_guest_addr = addr->log_guest_addr;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_num(struct mdev_device *mdev,
> +		struct vhost_vring_state *num)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = num->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->size = num->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = base->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->base = base->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_VRING_BASE;
> +	vdpa->pending.payload.state.index = base->index;
> +
> +	// FIXME to be implemented
> +
> +	return 0;
> +}
> +
> +static int vhost_set_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->features = *features;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_FEATURES;
> +	vdpa->pending.payload.u64 =
> +		vdpa->ops->supported_features(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_reset_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_set_state(struct mdev_device *mdev, u64 *state)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (*state >= VHOST_DEVICE_S_MAX)
> +		return -EINVAL;
> +
> +	if (vdpa->state == *state)
> +		return 0;
> +
> +	vdpa->state = *state;
> +
> +	switch (vdpa->state) {
> +	case VHOST_DEVICE_S_RUNNING:
> +		vdpa->ops->start(vdpa);
> +		break;
> +	case VHOST_DEVICE_S_STOPPED:
> +		vdpa->ops->stop(vdpa);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static ssize_t vdpa_handle_bar0_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (count < VHOST_VFIO_OP_HDR_SIZE) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (copy_from_user(op, buf, VHOST_VFIO_OP_HDR_SIZE)) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size > VHOST_VFIO_OP_PAYLOAD_MAX_SIZE ||
> +	    op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_from_user(&op->payload, buf + VHOST_VFIO_OP_HDR_SIZE,
> +			   op->size)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	switch (op->request) {
> +	case VHOST_SET_LOG_BASE:
> +		break;
> +	case VHOST_SET_MEM_TABLE:
> +		vhost_set_mem_table(mdev, &op->payload.memory);
> +		break;
> +	case VHOST_SET_VRING_ADDR:
> +		vhost_set_vring_addr(mdev, &op->payload.addr);
> +		break;
> +	case VHOST_SET_VRING_NUM:
> +		vhost_set_vring_num(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_VRING_BASE:
> +		vhost_set_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_GET_VRING_BASE:
> +		vhost_get_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_FEATURES:
> +		vhost_set_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		vhost_get_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_SET_OWNER:
> +		vhost_set_owner(mdev);
> +		break;
> +	case VHOST_RESET_OWNER:
> +		vhost_reset_owner(mdev);
> +		break;
> +	case VHOST_DEVICE_SET_STATE:
> +		vhost_set_state(mdev, &op->payload.u64);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +static ssize_t vdpa_handle_bar1_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (count < sizeof(qid))
> +		return -EINVAL;
> +
> +	if (copy_from_user(&qid, buf, sizeof(qid)))
> +		return -EINVAL;
> +
> +	vdpa->ops->notify(vdpa, qid);
> +
> +	return count;
> +}
> +
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		done = vdpa_handle_bar1_write(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_write);
> +
> +static int vdpa_get_region_info(struct mdev_device *mdev,
> +				struct vfio_region_info *region_info,
> +				u16 *cap_type_id, void **cap_type)
> +{
> +	struct vdpa_dev *vdpa;
> +	u32 bar_index;
> +	u64 size = 0;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	bar_index = region_info->index;
> +	if (bar_index >= VFIO_PCI_NUM_REGIONS)
> +		return -EINVAL;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	switch (bar_index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		size = VDPA_CONFIG_SIZE;
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		size = VDPA_BAR0_SIZE;
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		size = (u64)vdpa->max_vrings << PAGE_SHIFT;
> +		break;
> +	default:
> +		size = 0;
> +		break;
> +	}
> +
> +	// FIXME: mark BAR1 as mmap-able (VFIO_REGION_INFO_FLAG_MMAP)
> +	region_info->size = size;
> +	region_info->offset = VDPA_VFIO_PCI_INDEX_TO_OFFSET(bar_index);
> +	region_info->flags = VFIO_REGION_INFO_FLAG_READ |
> +		VFIO_REGION_INFO_FLAG_WRITE;
> +	mutex_unlock(&vdpa->ops_lock);
> +	return 0;
> +}
> +
> +static int vdpa_reset(struct mdev_device *mdev)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_device_info(struct mdev_device *mdev,
> +				struct vfio_device_info *dev_info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	dev_info->flags = VFIO_DEVICE_FLAGS_PCI;
> +	dev_info->num_regions = VFIO_PCI_NUM_REGIONS;
> +	dev_info->num_irqs = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_irq_info(struct mdev_device *mdev,
> +			     struct vfio_irq_info *info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (info->index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	info->flags = VFIO_IRQ_INFO_EVENTFD;
> +	info->count = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_set_irqs(struct mdev_device *mdev, uint32_t flags,
> +			 unsigned int index, unsigned int start,
> +			 unsigned int count, void *data)
> +{
> +	struct vdpa_dev *vdpa;
> +	int *fd = data, i;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	if (index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	for (i = 0; i < count; i++)
> +		vdpa->ops->set_eventfd(vdpa, start + i,
> +			(flags & VFIO_IRQ_SET_DATA_EVENTFD) ? fd[i] : -1);
> +
> +	return 0;
> +}
> +
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
> +{
> +	int ret = 0;
> +	unsigned long minsz;
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	switch (cmd) {
> +	case VFIO_DEVICE_GET_INFO:
> +	{
> +		struct vfio_device_info info;
> +
> +		minsz = offsetofend(struct vfio_device_info, num_irqs);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_device_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_REGION_INFO:
> +	{
> +		struct vfio_region_info info;
> +		u16 cap_type_id = 0;
> +		void *cap_type = NULL;
> +
> +		minsz = offsetofend(struct vfio_region_info, offset);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_region_info(mdev, &info, &cap_type_id,
> +					   &cap_type);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_IRQ_INFO:
> +	{
> +		struct vfio_irq_info info;
> +
> +		minsz = offsetofend(struct vfio_irq_info, count);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz || info.index >= vdpa->max_vrings)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_irq_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_SET_IRQS:
> +	{
> +		struct vfio_irq_set hdr;
> +		size_t data_size = 0;
> +		u8 *data = NULL;
> +
> +		minsz = offsetofend(struct vfio_irq_set, count);
> +
> +		if (copy_from_user(&hdr, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		ret = vfio_set_irqs_validate_and_prepare(&hdr, vdpa->max_vrings,
> +							 VFIO_PCI_NUM_IRQS,
> +							 &data_size);
> +		if (ret)
> +			return ret;
> +
> +		if (data_size) {
> +			data = memdup_user((void __user *)(arg + minsz),
> +					   data_size);
> +			if (IS_ERR(data))
> +				return PTR_ERR(data);
> +		}
> +
> +		ret = vdpa_set_irqs(mdev, hdr.flags, hdr.index, hdr.start,
> +				hdr.count, data);
> +
> +		kfree(data);
> +		return ret;
> +	}
> +	case VFIO_DEVICE_RESET:
> +		return vdpa_reset(mdev);
> +	}
> +	return -ENOTTY;
> +}
> +EXPORT_SYMBOL(vdpa_ioctl);
> +
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
> +{
> +	// FIXME: to be implemented
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_mmap);
> +
> +int vdpa_open(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_open);
> +
> +void vdpa_close(struct mdev_device *mdev)
> +{
> +}
> +EXPORT_SYMBOL(vdpa_close);
> +
> +MODULE_VERSION("0.0.0");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Hardware virtio accelerator abstraction");
> diff --git a/include/linux/vdpa_mdev.h b/include/linux/vdpa_mdev.h
> new file mode 100644
> index 000000000000..8414e86ba4b8
> --- /dev/null
> +++ b/include/linux/vdpa_mdev.h
> @@ -0,0 +1,76 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#ifndef VDPA_MDEV_H
> +#define VDPA_MDEV_H
> +
> +#define VDPA_CONFIG_SIZE 0xff
> +
> +struct mdev_device;
> +struct vdpa_dev;
> +
> +/*
> + * XXX: Any comments about the vDPA API design for drivers
> + *      would be appreciated!
> + */
> +
> +typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> +typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> +typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> +typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> +
> +struct vdpa_device_ops {
> +	vdpa_start_device_t		start;
> +	vdpa_stop_device_t		stop;
> +	vdpa_dma_map_t			dma_map;
> +	vdpa_dma_unmap_t		dma_unmap;
> +	vdpa_set_eventfd_t		set_eventfd;
> +	vdpa_supported_features_t	supported_features;
> +	vdpa_notify_device_t		notify;
> +	vdpa_get_notify_addr_t		get_notify_addr;
> +};
> +
> +struct vdpa_vring_info {
> +	u64 desc_user_addr;
> +	u64 used_user_addr;
> +	u64 avail_user_addr;
> +	u64 log_guest_addr;
> +	u16 size;
> +	u16 base;
> +};
> +
> +struct vdpa_dev {
> +	struct mdev_device *mdev;
> +	struct mutex ops_lock;
> +	u8 vconfig[VDPA_CONFIG_SIZE];
> +	int nr_vring;
> +	u64 features;
> +	u64 state;
> +	struct vhost_memory *mem_table;
> +	bool pending_reply;
> +	struct vhost_vfio_op pending;
> +	const struct vdpa_device_ops *ops;
> +	void *private;
> +	int max_vrings;
> +	struct vdpa_vring_info vring_info[0];
> +};
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings);
> +void vdpa_free(struct vdpa_dev *vdpa);
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos);
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos);
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> +int vdpa_open(struct mdev_device *mdev);
> +void vdpa_close(struct mdev_device *mdev);
> +
> +#endif /* VDPA_MDEV_H */
> diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> index c51f8e5cc608..92a1ca0b5fe1 100644
> --- a/include/uapi/linux/vhost.h
> +++ b/include/uapi/linux/vhost.h
> @@ -207,4 +207,30 @@ struct vhost_scsi_target {
>   #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
>   #define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
>   
> +/* VHOST_DEVICE specific defines */
> +
> +#define VHOST_DEVICE_SET_STATE _IOW(VHOST_VIRTIO, 0x70, __u64)
> +
> +#define VHOST_DEVICE_S_STOPPED 0
> +#define VHOST_DEVICE_S_RUNNING 1
> +#define VHOST_DEVICE_S_MAX     2
> +
> +struct vhost_vfio_op {
> +	__u64 request;
> +	__u32 flags;
> +	/* Flag values: */
> +#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> +	__u32 size;
> +	union {
> +		__u64 u64;
> +		struct vhost_vring_state state;
> +		struct vhost_vring_addr addr;
> +		struct vhost_memory memory;
> +	} payload;
> +};
> +
> +#define VHOST_VFIO_OP_HDR_SIZE \
> +		((unsigned long)&((struct vhost_vfio_op *)NULL)->payload)
> +#define VHOST_VFIO_OP_PAYLOAD_MAX_SIZE 1024 /* FIXME TBD */
> +
>   #endif

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10  2:52   ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  2:52 UTC (permalink / raw)
  To: Tiwei Bie, mst, alex.williamson, ddutile, alexander.h.duyck
  Cc: virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月02日 23:23, Tiwei Bie wrote:
> This patch introduces a mdev (mediated device) based hardware
> vhost backend. This backend is an abstraction of the various
> hardware vhost accelerators (potentially any device that uses
> virtio ring can be used as a vhost accelerator). Some generic
> mdev parent ops are provided for accelerator drivers to support
> generating mdev instances.
>
> What's this
> ===========
>
> The idea is that we can setup a virtio ring compatible device
> with the messages available at the vhost-backend. Originally,
> these messages are used to implement a software vhost backend,
> but now we will use these messages to setup a virtio ring
> compatible hardware device. Then the hardware device will be
> able to work with the guest virtio driver in the VM just like
> what the software backend does. That is to say, we can implement
> a hardware based vhost backend in QEMU, and any virtio ring
> compatible devices potentially can be used with this backend.
> (We also call it vDPA -- vhost Data Path Acceleration).
>
> One problem is that, different virtio ring compatible devices
> may have different device interfaces. That is to say, we will
> need different drivers in QEMU. It could be troublesome. And
> that's what this patch trying to fix. The idea behind this
> patch is very simple: mdev is a standard way to emulate device
> in kernel.

So you just move the abstraction layer from qemu to kernel, and you 
still need different drivers in kernel for different device interfaces 
of accelerators. This looks even more complex than leaving it in qemu. 
As you said, another idea is to implement userspace vhost backend for 
accelerators which seems easier and could co-work with other parts of 
qemu without inventing new type of messages.

Need careful thought here to seek a best solution here.

>   So we defined a standard device based on mdev, which
> is able to accept vhost messages. When the mdev emulation code
> (i.e. the generic mdev parent ops provided by this patch) gets
> vhost messages, it will parse and deliver them to accelerator
> drivers. Drivers can use these messages to setup accelerators.
>
> That is to say, the generic mdev parent ops (e.g. read()/write()/
> ioctl()/...) will be provided for accelerator drivers to register
> accelerators as mdev parent devices. And each accelerator device
> will support generating standard mdev instance(s).
>
> With this standard device interface, we will be able to just
> develop one userspace driver to implement the hardware based
> vhost backend in QEMU.
>
> Difference between vDPA and PCI passthru
> ========================================
>
> The key difference between vDPA and PCI passthru is that, in
> vDPA only the data path of the device (e.g. DMA ring, notify
> region and queue interrupt) is pass-throughed to the VM, the
> device control path (e.g. PCI configuration space and MMIO
> regions) is still defined and emulated by QEMU.
>
> The benefits of keeping virtio device emulation in QEMU compared
> with virtio device PCI passthru include (but not limit to):
>
> - consistent device interface for guest OS in the VM;
> - max flexibility on the hardware design, especially the
>    accelerator for each vhost backend doesn't have to be a
>    full PCI device;
> - leveraging the existing virtio live-migration framework;
>
> The interface of this mdev based device
> =======================================
>
> 1. BAR0
>
> The MMIO region described by BAR0 is the main control
> interface. Messages will be written to or read from
> this region.
>
> The message type is determined by the `request` field
> in message header. The message size is encoded in the
> message header too. The message format looks like this:
>
> struct vhost_vfio_op {
> 	__u64 request;
> 	__u32 flags;
> 	/* Flag values: */
> #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> 	__u32 size;
> 	union {
> 		__u64 u64;
> 		struct vhost_vring_state state;
> 		struct vhost_vring_addr addr;
> 		struct vhost_memory memory;
> 	} payload;
> };
>
> The existing vhost-kernel ioctl cmds are reused as
> the message requests in above structure.
>
> Each message will be written to or read from this
> region at offset 0:
>
> int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	int ret;
>
> 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count)
> 		return -1;
>
> 	return 0;
> }
>
> int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> {
> 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> 	struct vhost_vfio *vfio = dev->opaque;
> 	uint64_t request = op->request;
> 	int ret;
>
> 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> 	if (ret != count || request != op->request)
> 		return -1;
>
> 	return 0;
> }
>
> It's quite straightforward to set things to the device.
> Just need to write the message to device directly:
>
> int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> {
> 	struct vhost_vfio_op op;
>
> 	op.request = VHOST_SET_FEATURES;
> 	op.flags = 0;
> 	op.size = sizeof(features);
> 	op.payload.u64 = features;
>
> 	return vhost_vfio_write(dev, &op);
> }
>
> To get things from the device, two steps are needed.
> Take VHOST_GET_FEATURE as an example:
>
> int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> {
> 	struct vhost_vfio_op op;
> 	int ret;
>
> 	op.request = VHOST_GET_FEATURES;
> 	op.flags = VHOST_VFIO_NEED_REPLY;
> 	op.size = 0;
>
> 	/* Just need to write the header */
> 	ret = vhost_vfio_write(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	/* `op` wasn't changed during write */
> 	op.flags = 0;
> 	op.size = sizeof(*features);
>
> 	ret = vhost_vfio_read(dev, &op);
> 	if (ret != 0)
> 		goto out;
>
> 	*features = op.payload.u64;
> out:
> 	return ret;
> }
>
> 2. BAR1 (mmap-able)
>
> The MMIO region described by BAR1 will be used to notify the
> device.
>
> Each queue will has a page for notification, and it can be
> mapped to VM (if hardware also supports), and the virtio
> driver in the VM will be able to notify the device directly.
>
> The MMIO region described by BAR1 is also write-able. If the
> accelerator's notification register(s) cannot be mapped to the
> VM, write() can also be used to notify the device. Something
> like this:
>
> void notify_relay(void *opaque)
> {
> 	......
> 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
>
> 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> 			vfio->bar1_offset + offset);
> 	......
> }
>
> Other BARs are reserved.
>
> 3. VFIO interrupt ioctl API
>
> VFIO interrupt ioctl API is used to setup device interrupts.
> IRQ-bypass will also be supported.
>
> Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
>
> The API for drivers to provide mdev instances
> =============================================
>
> The read()/write()/ioctl()/mmap()/open()/release() mdev
> parent ops have been provided for accelerators' drivers
> to provide mdev instances.
>
> ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> 		  size_t count, loff_t *ppos);
> ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> 		   size_t count, loff_t *ppos);
> long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> int vdpa_open(struct mdev_device *mdev);
> void vdpa_close(struct mdev_device *mdev);
>
> Each accelerator driver just needs to implement its own
> create()/remove() ops, and provide a vdpa device ops
> which will be called by the generic mdev emulation code.
>
> Currently, the vdpa device ops are defined as:
>
> typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
>
> struct vdpa_device_ops {
> 	vdpa_start_device_t		start;
> 	vdpa_stop_device_t		stop;
> 	vdpa_dma_map_t			dma_map;
> 	vdpa_dma_unmap_t		dma_unmap;
> 	vdpa_set_eventfd_t		set_eventfd;
> 	vdpa_supported_features_t	supported_features;
> 	vdpa_notify_device_t		notify;
> 	vdpa_get_notify_addr_t		get_notify_addr;
> };
>
> struct vdpa_dev {
> 	struct mdev_device *mdev;
> 	struct mutex ops_lock;
> 	u8 vconfig[VDPA_CONFIG_SIZE];
> 	int nr_vring;
> 	u64 features;
> 	u64 state;
> 	struct vhost_memory *mem_table;
> 	bool pending_reply;
> 	struct vhost_vfio_op pending;
> 	const struct vdpa_device_ops *ops;
> 	void *private;
> 	int max_vrings;
> 	struct vdpa_vring_info vring_info[0];
> };
>
> struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> 			    int max_vrings);
> void vdpa_free(struct vdpa_dev *vdpa);
>
> A simple example
> ================
>
> # Query the number of available mdev instances
> $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
>
> # Create a mdev instance
> $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
>
> # Launch QEMU with a virtio-net device
> $ qemu \
> 	...... \
> 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> 	-device virtio-net-pci,netdev=$ID
>
> -------- END --------
>
> Most of above words will be refined and moved to a doc in
> the formal patch. In this RFC, all introductions and code
> are gathered in this patch, the idea is to make it easier
> to find all the relevant information. Anyone who wants to
> comment could use inline comment and just keep the relevant
> parts. Sorry for the big RFC patch..
>
> This patch is just a RFC for now, and something is still
> missing or needs to be refined. But it's never too early
> to hear the thoughts from the community. So any comments
> would be appreciated! Thanks! :-)

I don't see vhost_vfio_write() and other above functions in the patch. 
Looks like some part of the patch is missed, it would be better to post 
a complete series with an example driver (vDPA) to get a full picture.

Thanks

> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>   drivers/vhost/Makefile     |   3 +
>   drivers/vhost/vdpa.c       | 805 +++++++++++++++++++++++++++++++++++++++++++++
>   include/linux/vdpa_mdev.h  |  76 +++++
>   include/uapi/linux/vhost.h |  26 ++
>   4 files changed, 910 insertions(+)
>   create mode 100644 drivers/vhost/vdpa.c
>   create mode 100644 include/linux/vdpa_mdev.h
>
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index 6c6df24f770c..7d185e083140 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -11,3 +11,6 @@ vhost_vsock-y := vsock.o
>   obj-$(CONFIG_VHOST_RING) += vringh.o
>   
>   obj-$(CONFIG_VHOST)	+= vhost.o
> +
> +obj-m += vhost_vdpa.o  # FIXME: add an option
> +vhost_vdpa-y := vdpa.o
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> new file mode 100644
> index 000000000000..aa19c266ea19
> --- /dev/null
> +++ b/drivers/vhost/vdpa.c
> @@ -0,0 +1,805 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/vfio.h>
> +#include <linux/vhost.h>
> +#include <linux/mdev.h>
> +#include <linux/vdpa_mdev.h>
> +
> +#define VDPA_BAR0_SIZE		0x1000000 // TBD
> +
> +#define VDPA_VFIO_PCI_OFFSET_SHIFT	40
> +#define VDPA_VFIO_PCI_OFFSET_MASK \
> +		((1ULL << VDPA_VFIO_PCI_OFFSET_SHIFT) - 1)
> +#define VDPA_VFIO_PCI_OFFSET_TO_INDEX(offset) \
> +		((offset) >> VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_INDEX_TO_OFFSET(index) \
> +		((u64)(index) << VDPA_VFIO_PCI_OFFSET_SHIFT)
> +#define VDPA_VFIO_PCI_BAR_OFFSET(offset) \
> +		((offset) & VDPA_VFIO_PCI_OFFSET_MASK)
> +
> +#define STORE_LE16(addr, val)	(*(u16 *)(addr) = cpu_to_le16(val))
> +#define STORE_LE32(addr, val)	(*(u32 *)(addr) = cpu_to_le32(val))
> +
> +static void vdpa_create_config_space(struct vdpa_dev *vdpa)
> +{
> +	/* PCI device ID / vendor ID */
> +	STORE_LE32(&vdpa->vconfig[0x0], 0xffffffff); // FIXME TBD
> +
> +	/* Programming interface class */
> +	vdpa->vconfig[0x9] = 0x00;
> +
> +	/* Sub class */
> +	vdpa->vconfig[0xa] = 0x00;
> +
> +	/* Base class */
> +	vdpa->vconfig[0xb] = 0x02;
> +
> +	// FIXME TBD
> +}
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings)
> +{
> +	struct vdpa_dev *vdpa;
> +	size_t size;
> +
> +	size = sizeof(struct vdpa_dev) + max_vrings *
> +			sizeof(struct vdpa_vring_info);
> +
> +	vdpa = kzalloc(size, GFP_KERNEL);
> +	if (vdpa == NULL)
> +		return NULL;
> +
> +	mutex_init(&vdpa->ops_lock);
> +
> +	vdpa->mdev = mdev;
> +	vdpa->private = private;
> +	vdpa->max_vrings = max_vrings;
> +
> +	vdpa_create_config_space(vdpa);
> +
> +	return vdpa;
> +}
> +EXPORT_SYMBOL(vdpa_alloc);
> +
> +void vdpa_free(struct vdpa_dev *vdpa)
> +{
> +	struct mdev_device *mdev;
> +
> +	mdev = vdpa->mdev;
> +
> +	vdpa->ops->stop(vdpa);
> +	vdpa->ops->dma_unmap(vdpa);
> +
> +	mdev_set_drvdata(mdev, NULL);
> +
> +	mutex_destroy(&vdpa->ops_lock);
> +
> +	kfree(vdpa->mem_table);
> +	kfree(vdpa);
> +}
> +EXPORT_SYMBOL(vdpa_free);
> +
> +static ssize_t vdpa_handle_pcicfg_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +
> +	if (count + offset > VDPA_CONFIG_SIZE)
> +		return -EINVAL;
> +
> +	if (copy_to_user(buf, (vdpa->vconfig + offset), count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t vdpa_handle_bar0_read(struct mdev_device *mdev,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (!vdpa->pending_reply) {
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	vdpa->pending_reply = false;
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	op->request = vdpa->pending.request;
> +
> +	switch (op->request) {
> +	case VHOST_GET_VRING_BASE:
> +		op->payload.state = vdpa->pending.payload.state;
> +		op->size = sizeof(op->payload.state);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		op->payload.u64 = vdpa->pending.payload.u64;
> +		op->size = sizeof(op->payload.u64);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_to_user(buf, op, count)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_read(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_read(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_read);
> +
> +static ssize_t vdpa_handle_pcicfg_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	return count;
> +}
> +
> +static int vhost_set_mem_table(struct mdev_device *mdev,
> +		struct vhost_memory *mem)
> +{
> +	struct vdpa_dev *vdpa;
> +	struct vhost_memory *mem_table;
> +	size_t size;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	// FIXME fix this
> +	if (vdpa->state != VHOST_DEVICE_S_STOPPED)
> +		return -EBUSY;
> +
> +	size = sizeof(*mem) + mem->nregions * sizeof(*mem->regions);
> +
> +	mem_table = kzalloc(size, GFP_KERNEL);
> +	if (mem_table == NULL)
> +		return -ENOMEM;
> +
> +	memcpy(mem_table, mem, size);
> +
> +	kfree(vdpa->mem_table);
> +
> +	vdpa->mem_table = mem_table;
> +
> +	vdpa->ops->dma_unmap(vdpa);
> +	vdpa->ops->dma_map(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_addr(struct mdev_device *mdev,
> +		struct vhost_vring_addr *addr)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = addr->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	/* FIXME to be fixed */
> +	if (qid >= vdpa->nr_vring)
> +		vdpa->nr_vring = qid + 1;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->desc_user_addr = addr->desc_user_addr;
> +	vring->used_user_addr = addr->used_user_addr;
> +	vring->avail_user_addr = addr->avail_user_addr;
> +	vring->log_guest_addr = addr->log_guest_addr;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_num(struct mdev_device *mdev,
> +		struct vhost_vring_state *num)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = num->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->size = num->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_set_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid = base->index;
> +	struct vdpa_vring_info *vring;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (qid >= vdpa->max_vrings)
> +		return -EINVAL;
> +
> +	vring = &vdpa->vring_info[qid];
> +
> +	vring->base = base->num;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_vring_base(struct mdev_device *mdev,
> +		struct vhost_vring_state *base)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_VRING_BASE;
> +	vdpa->pending.payload.state.index = base->index;
> +
> +	// FIXME to be implemented
> +
> +	return 0;
> +}
> +
> +static int vhost_set_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->features = *features;
> +
> +	return 0;
> +}
> +
> +static int vhost_get_features(struct mdev_device *mdev, u64 *features)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	vdpa->pending_reply = true;
> +	vdpa->pending.request = VHOST_GET_FEATURES;
> +	vdpa->pending.payload.u64 =
> +		vdpa->ops->supported_features(vdpa);
> +
> +	return 0;
> +}
> +
> +static int vhost_set_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_reset_owner(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +
> +static int vhost_set_state(struct mdev_device *mdev, u64 *state)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (*state >= VHOST_DEVICE_S_MAX)
> +		return -EINVAL;
> +
> +	if (vdpa->state == *state)
> +		return 0;
> +
> +	vdpa->state = *state;
> +
> +	switch (vdpa->state) {
> +	case VHOST_DEVICE_S_RUNNING:
> +		vdpa->ops->start(vdpa);
> +		break;
> +	case VHOST_DEVICE_S_STOPPED:
> +		vdpa->ops->stop(vdpa);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static ssize_t vdpa_handle_bar0_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vhost_vfio_op *op = NULL;
> +	loff_t pos = *ppos;
> +	loff_t offset;
> +	int ret;
> +
> +	offset = VDPA_VFIO_PCI_BAR_OFFSET(pos);
> +	if (offset != 0) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	if (count < VHOST_VFIO_OP_HDR_SIZE) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	op = kzalloc(VHOST_VFIO_OP_HDR_SIZE + VHOST_VFIO_OP_PAYLOAD_MAX_SIZE,
> +		     GFP_KERNEL);
> +	if (op == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (copy_from_user(op, buf, VHOST_VFIO_OP_HDR_SIZE)) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (op->size > VHOST_VFIO_OP_PAYLOAD_MAX_SIZE ||
> +	    op->size + VHOST_VFIO_OP_HDR_SIZE != count) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	if (copy_from_user(&op->payload, buf + VHOST_VFIO_OP_HDR_SIZE,
> +			   op->size)) {
> +		ret = -EFAULT;
> +		goto out_free;
> +	}
> +
> +	switch (op->request) {
> +	case VHOST_SET_LOG_BASE:
> +		break;
> +	case VHOST_SET_MEM_TABLE:
> +		vhost_set_mem_table(mdev, &op->payload.memory);
> +		break;
> +	case VHOST_SET_VRING_ADDR:
> +		vhost_set_vring_addr(mdev, &op->payload.addr);
> +		break;
> +	case VHOST_SET_VRING_NUM:
> +		vhost_set_vring_num(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_VRING_BASE:
> +		vhost_set_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_GET_VRING_BASE:
> +		vhost_get_vring_base(mdev, &op->payload.state);
> +		break;
> +	case VHOST_SET_FEATURES:
> +		vhost_set_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_GET_FEATURES:
> +		vhost_get_features(mdev, &op->payload.u64);
> +		break;
> +	case VHOST_SET_OWNER:
> +		vhost_set_owner(mdev);
> +		break;
> +	case VHOST_RESET_OWNER:
> +		vhost_reset_owner(mdev);
> +		break;
> +	case VHOST_DEVICE_SET_STATE:
> +		vhost_set_state(mdev, &op->payload.u64);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	ret = count;
> +
> +out_free:
> +	kfree(op);
> +out:
> +	return ret;
> +}
> +
> +static ssize_t vdpa_handle_bar1_write(struct mdev_device *mdev,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct vdpa_dev *vdpa;
> +	int qid;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (count < sizeof(qid))
> +		return -EINVAL;
> +
> +	if (copy_from_user(&qid, buf, sizeof(qid)))
> +		return -EINVAL;
> +
> +	vdpa->ops->notify(vdpa, qid);
> +
> +	return count;
> +}
> +
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos)
> +{
> +	int done = 0;
> +	unsigned int index;
> +	loff_t pos = *ppos;
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	index = VDPA_VFIO_PCI_OFFSET_TO_INDEX(pos);
> +
> +	switch (index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		done = vdpa_handle_pcicfg_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		done = vdpa_handle_bar0_write(mdev, buf, count, ppos);
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		done = vdpa_handle_bar1_write(mdev, buf, count, ppos);
> +		break;
> +	}
> +
> +	if (done > 0)
> +		*ppos += done;
> +
> +	mutex_unlock(&vdpa->ops_lock);
> +
> +	return done;
> +}
> +EXPORT_SYMBOL(vdpa_write);
> +
> +static int vdpa_get_region_info(struct mdev_device *mdev,
> +				struct vfio_region_info *region_info,
> +				u16 *cap_type_id, void **cap_type)
> +{
> +	struct vdpa_dev *vdpa;
> +	u32 bar_index;
> +	u64 size = 0;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	bar_index = region_info->index;
> +	if (bar_index >= VFIO_PCI_NUM_REGIONS)
> +		return -EINVAL;
> +
> +	mutex_lock(&vdpa->ops_lock);
> +
> +	switch (bar_index) {
> +	case VFIO_PCI_CONFIG_REGION_INDEX:
> +		size = VDPA_CONFIG_SIZE;
> +		break;
> +	case VFIO_PCI_BAR0_REGION_INDEX:
> +		size = VDPA_BAR0_SIZE;
> +		break;
> +	case VFIO_PCI_BAR1_REGION_INDEX:
> +		size = (u64)vdpa->max_vrings << PAGE_SHIFT;
> +		break;
> +	default:
> +		size = 0;
> +		break;
> +	}
> +
> +	// FIXME: mark BAR1 as mmap-able (VFIO_REGION_INFO_FLAG_MMAP)
> +	region_info->size = size;
> +	region_info->offset = VDPA_VFIO_PCI_INDEX_TO_OFFSET(bar_index);
> +	region_info->flags = VFIO_REGION_INFO_FLAG_READ |
> +		VFIO_REGION_INFO_FLAG_WRITE;
> +	mutex_unlock(&vdpa->ops_lock);
> +	return 0;
> +}
> +
> +static int vdpa_reset(struct mdev_device *mdev)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_device_info(struct mdev_device *mdev,
> +				struct vfio_device_info *dev_info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	dev_info->flags = VFIO_DEVICE_FLAGS_PCI;
> +	dev_info->num_regions = VFIO_PCI_NUM_REGIONS;
> +	dev_info->num_irqs = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_get_irq_info(struct mdev_device *mdev,
> +			     struct vfio_irq_info *info)
> +{
> +	struct vdpa_dev *vdpa;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	if (info->index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	info->flags = VFIO_IRQ_INFO_EVENTFD;
> +	info->count = vdpa->max_vrings;
> +
> +	return 0;
> +}
> +
> +static int vdpa_set_irqs(struct mdev_device *mdev, uint32_t flags,
> +			 unsigned int index, unsigned int start,
> +			 unsigned int count, void *data)
> +{
> +	struct vdpa_dev *vdpa;
> +	int *fd = data, i;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -EINVAL;
> +
> +	if (index != VFIO_PCI_MSIX_IRQ_INDEX)
> +		return -ENOTSUPP;
> +
> +	for (i = 0; i < count; i++)
> +		vdpa->ops->set_eventfd(vdpa, start + i,
> +			(flags & VFIO_IRQ_SET_DATA_EVENTFD) ? fd[i] : -1);
> +
> +	return 0;
> +}
> +
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg)
> +{
> +	int ret = 0;
> +	unsigned long minsz;
> +	struct vdpa_dev *vdpa;
> +
> +	if (!mdev)
> +		return -EINVAL;
> +
> +	vdpa = mdev_get_drvdata(mdev);
> +	if (!vdpa)
> +		return -ENODEV;
> +
> +	switch (cmd) {
> +	case VFIO_DEVICE_GET_INFO:
> +	{
> +		struct vfio_device_info info;
> +
> +		minsz = offsetofend(struct vfio_device_info, num_irqs);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_device_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_REGION_INFO:
> +	{
> +		struct vfio_region_info info;
> +		u16 cap_type_id = 0;
> +		void *cap_type = NULL;
> +
> +		minsz = offsetofend(struct vfio_region_info, offset);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_region_info(mdev, &info, &cap_type_id,
> +					   &cap_type);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_GET_IRQ_INFO:
> +	{
> +		struct vfio_irq_info info;
> +
> +		minsz = offsetofend(struct vfio_irq_info, count);
> +
> +		if (copy_from_user(&info, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (info.argsz < minsz || info.index >= vdpa->max_vrings)
> +			return -EINVAL;
> +
> +		ret = vdpa_get_irq_info(mdev, &info);
> +		if (ret)
> +			return ret;
> +
> +		if (copy_to_user((void __user *)arg, &info, minsz))
> +			return -EFAULT;
> +
> +		return 0;
> +	}
> +	case VFIO_DEVICE_SET_IRQS:
> +	{
> +		struct vfio_irq_set hdr;
> +		size_t data_size = 0;
> +		u8 *data = NULL;
> +
> +		minsz = offsetofend(struct vfio_irq_set, count);
> +
> +		if (copy_from_user(&hdr, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		ret = vfio_set_irqs_validate_and_prepare(&hdr, vdpa->max_vrings,
> +							 VFIO_PCI_NUM_IRQS,
> +							 &data_size);
> +		if (ret)
> +			return ret;
> +
> +		if (data_size) {
> +			data = memdup_user((void __user *)(arg + minsz),
> +					   data_size);
> +			if (IS_ERR(data))
> +				return PTR_ERR(data);
> +		}
> +
> +		ret = vdpa_set_irqs(mdev, hdr.flags, hdr.index, hdr.start,
> +				hdr.count, data);
> +
> +		kfree(data);
> +		return ret;
> +	}
> +	case VFIO_DEVICE_RESET:
> +		return vdpa_reset(mdev);
> +	}
> +	return -ENOTTY;
> +}
> +EXPORT_SYMBOL(vdpa_ioctl);
> +
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
> +{
> +	// FIXME: to be implemented
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_mmap);
> +
> +int vdpa_open(struct mdev_device *mdev)
> +{
> +	return 0;
> +}
> +EXPORT_SYMBOL(vdpa_open);
> +
> +void vdpa_close(struct mdev_device *mdev)
> +{
> +}
> +EXPORT_SYMBOL(vdpa_close);
> +
> +MODULE_VERSION("0.0.0");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Hardware virtio accelerator abstraction");
> diff --git a/include/linux/vdpa_mdev.h b/include/linux/vdpa_mdev.h
> new file mode 100644
> index 000000000000..8414e86ba4b8
> --- /dev/null
> +++ b/include/linux/vdpa_mdev.h
> @@ -0,0 +1,76 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2018 Intel Corporation.
> + */
> +
> +#ifndef VDPA_MDEV_H
> +#define VDPA_MDEV_H
> +
> +#define VDPA_CONFIG_SIZE 0xff
> +
> +struct mdev_device;
> +struct vdpa_dev;
> +
> +/*
> + * XXX: Any comments about the vDPA API design for drivers
> + *      would be appreciated!
> + */
> +
> +typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> +typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> +typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> +typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> +typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> +
> +struct vdpa_device_ops {
> +	vdpa_start_device_t		start;
> +	vdpa_stop_device_t		stop;
> +	vdpa_dma_map_t			dma_map;
> +	vdpa_dma_unmap_t		dma_unmap;
> +	vdpa_set_eventfd_t		set_eventfd;
> +	vdpa_supported_features_t	supported_features;
> +	vdpa_notify_device_t		notify;
> +	vdpa_get_notify_addr_t		get_notify_addr;
> +};
> +
> +struct vdpa_vring_info {
> +	u64 desc_user_addr;
> +	u64 used_user_addr;
> +	u64 avail_user_addr;
> +	u64 log_guest_addr;
> +	u16 size;
> +	u16 base;
> +};
> +
> +struct vdpa_dev {
> +	struct mdev_device *mdev;
> +	struct mutex ops_lock;
> +	u8 vconfig[VDPA_CONFIG_SIZE];
> +	int nr_vring;
> +	u64 features;
> +	u64 state;
> +	struct vhost_memory *mem_table;
> +	bool pending_reply;
> +	struct vhost_vfio_op pending;
> +	const struct vdpa_device_ops *ops;
> +	void *private;
> +	int max_vrings;
> +	struct vdpa_vring_info vring_info[0];
> +};
> +
> +struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> +			    int max_vrings);
> +void vdpa_free(struct vdpa_dev *vdpa);
> +ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> +		  size_t count, loff_t *ppos);
> +ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> +		   size_t count, loff_t *ppos);
> +long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> +int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> +int vdpa_open(struct mdev_device *mdev);
> +void vdpa_close(struct mdev_device *mdev);
> +
> +#endif /* VDPA_MDEV_H */
> diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> index c51f8e5cc608..92a1ca0b5fe1 100644
> --- a/include/uapi/linux/vhost.h
> +++ b/include/uapi/linux/vhost.h
> @@ -207,4 +207,30 @@ struct vhost_scsi_target {
>   #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
>   #define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
>   
> +/* VHOST_DEVICE specific defines */
> +
> +#define VHOST_DEVICE_SET_STATE _IOW(VHOST_VIRTIO, 0x70, __u64)
> +
> +#define VHOST_DEVICE_S_STOPPED 0
> +#define VHOST_DEVICE_S_RUNNING 1
> +#define VHOST_DEVICE_S_MAX     2
> +
> +struct vhost_vfio_op {
> +	__u64 request;
> +	__u32 flags;
> +	/* Flag values: */
> +#define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> +	__u32 size;
> +	union {
> +		__u64 u64;
> +		struct vhost_vring_state state;
> +		struct vhost_vring_addr addr;
> +		struct vhost_memory memory;
> +	} payload;
> +};
> +
> +#define VHOST_VFIO_OP_HDR_SIZE \
> +		((unsigned long)&((struct vhost_vfio_op *)NULL)->payload)
> +#define VHOST_VFIO_OP_PAYLOAD_MAX_SIZE 1024 /* FIXME TBD */
> +
>   #endif


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  2:52   ` [virtio-dev] " Jason Wang
@ 2018-04-10  4:57     ` Tiwei Bie
  -1 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-10  4:57 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
> On 2018年04月02日 23:23, Tiwei Bie wrote:
> > This patch introduces a mdev (mediated device) based hardware
> > vhost backend. This backend is an abstraction of the various
> > hardware vhost accelerators (potentially any device that uses
> > virtio ring can be used as a vhost accelerator). Some generic
> > mdev parent ops are provided for accelerator drivers to support
> > generating mdev instances.
> > 
> > What's this
> > ===========
> > 
> > The idea is that we can setup a virtio ring compatible device
> > with the messages available at the vhost-backend. Originally,
> > these messages are used to implement a software vhost backend,
> > but now we will use these messages to setup a virtio ring
> > compatible hardware device. Then the hardware device will be
> > able to work with the guest virtio driver in the VM just like
> > what the software backend does. That is to say, we can implement
> > a hardware based vhost backend in QEMU, and any virtio ring
> > compatible devices potentially can be used with this backend.
> > (We also call it vDPA -- vhost Data Path Acceleration).
> > 
> > One problem is that, different virtio ring compatible devices
> > may have different device interfaces. That is to say, we will
> > need different drivers in QEMU. It could be troublesome. And
> > that's what this patch trying to fix. The idea behind this
> > patch is very simple: mdev is a standard way to emulate device
> > in kernel.
> 
> So you just move the abstraction layer from qemu to kernel, and you still
> need different drivers in kernel for different device interfaces of
> accelerators. This looks even more complex than leaving it in qemu. As you
> said, another idea is to implement userspace vhost backend for accelerators
> which seems easier and could co-work with other parts of qemu without
> inventing new type of messages.

I'm not quite sure. Do you think it's acceptable to
add various vendor specific hardware drivers in QEMU?

> 
> Need careful thought here to seek a best solution here.

Yeah, definitely! :)
And your opinions would be very helpful!

> 
> >   So we defined a standard device based on mdev, which
> > is able to accept vhost messages. When the mdev emulation code
> > (i.e. the generic mdev parent ops provided by this patch) gets
> > vhost messages, it will parse and deliver them to accelerator
> > drivers. Drivers can use these messages to setup accelerators.
> > 
> > That is to say, the generic mdev parent ops (e.g. read()/write()/
> > ioctl()/...) will be provided for accelerator drivers to register
> > accelerators as mdev parent devices. And each accelerator device
> > will support generating standard mdev instance(s).
> > 
> > With this standard device interface, we will be able to just
> > develop one userspace driver to implement the hardware based
> > vhost backend in QEMU.
> > 
> > Difference between vDPA and PCI passthru
> > ========================================
> > 
> > The key difference between vDPA and PCI passthru is that, in
> > vDPA only the data path of the device (e.g. DMA ring, notify
> > region and queue interrupt) is pass-throughed to the VM, the
> > device control path (e.g. PCI configuration space and MMIO
> > regions) is still defined and emulated by QEMU.
> > 
> > The benefits of keeping virtio device emulation in QEMU compared
> > with virtio device PCI passthru include (but not limit to):
> > 
> > - consistent device interface for guest OS in the VM;
> > - max flexibility on the hardware design, especially the
> >    accelerator for each vhost backend doesn't have to be a
> >    full PCI device;
> > - leveraging the existing virtio live-migration framework;
> > 
> > The interface of this mdev based device
> > =======================================
> > 
> > 1. BAR0
> > 
> > The MMIO region described by BAR0 is the main control
> > interface. Messages will be written to or read from
> > this region.
> > 
> > The message type is determined by the `request` field
> > in message header. The message size is encoded in the
> > message header too. The message format looks like this:
> > 
> > struct vhost_vfio_op {
> > 	__u64 request;
> > 	__u32 flags;
> > 	/* Flag values: */
> > #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> > 	__u32 size;
> > 	union {
> > 		__u64 u64;
> > 		struct vhost_vring_state state;
> > 		struct vhost_vring_addr addr;
> > 		struct vhost_memory memory;
> > 	} payload;
> > };
> > 
> > The existing vhost-kernel ioctl cmds are reused as
> > the message requests in above structure.
> > 
> > Each message will be written to or read from this
> > region at offset 0:
> > 
> > int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	int ret;
> > 
> > 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	uint64_t request = op->request;
> > 	int ret;
> > 
> > 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count || request != op->request)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > It's quite straightforward to set things to the device.
> > Just need to write the message to device directly:
> > 
> > int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> > {
> > 	struct vhost_vfio_op op;
> > 
> > 	op.request = VHOST_SET_FEATURES;
> > 	op.flags = 0;
> > 	op.size = sizeof(features);
> > 	op.payload.u64 = features;
> > 
> > 	return vhost_vfio_write(dev, &op);
> > }
> > 
> > To get things from the device, two steps are needed.
> > Take VHOST_GET_FEATURE as an example:
> > 
> > int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> > {
> > 	struct vhost_vfio_op op;
> > 	int ret;
> > 
> > 	op.request = VHOST_GET_FEATURES;
> > 	op.flags = VHOST_VFIO_NEED_REPLY;
> > 	op.size = 0;
> > 
> > 	/* Just need to write the header */
> > 	ret = vhost_vfio_write(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	/* `op` wasn't changed during write */
> > 	op.flags = 0;
> > 	op.size = sizeof(*features);
> > 
> > 	ret = vhost_vfio_read(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	*features = op.payload.u64;
> > out:
> > 	return ret;
> > }
> > 
> > 2. BAR1 (mmap-able)
> > 
> > The MMIO region described by BAR1 will be used to notify the
> > device.
> > 
> > Each queue will has a page for notification, and it can be
> > mapped to VM (if hardware also supports), and the virtio
> > driver in the VM will be able to notify the device directly.
> > 
> > The MMIO region described by BAR1 is also write-able. If the
> > accelerator's notification register(s) cannot be mapped to the
> > VM, write() can also be used to notify the device. Something
> > like this:
> > 
> > void notify_relay(void *opaque)
> > {
> > 	......
> > 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
> > 
> > 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> > 			vfio->bar1_offset + offset);
> > 	......
> > }
> > 
> > Other BARs are reserved.
> > 
> > 3. VFIO interrupt ioctl API
> > 
> > VFIO interrupt ioctl API is used to setup device interrupts.
> > IRQ-bypass will also be supported.
> > 
> > Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
> > 
> > The API for drivers to provide mdev instances
> > =============================================
> > 
> > The read()/write()/ioctl()/mmap()/open()/release() mdev
> > parent ops have been provided for accelerators' drivers
> > to provide mdev instances.
> > 
> > ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> > 		  size_t count, loff_t *ppos);
> > ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> > 		   size_t count, loff_t *ppos);
> > long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> > int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> > int vdpa_open(struct mdev_device *mdev);
> > void vdpa_close(struct mdev_device *mdev);
> > 
> > Each accelerator driver just needs to implement its own
> > create()/remove() ops, and provide a vdpa device ops
> > which will be called by the generic mdev emulation code.
> > 
> > Currently, the vdpa device ops are defined as:
> > 
> > typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> > typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> > typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> > typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> > 
> > struct vdpa_device_ops {
> > 	vdpa_start_device_t		start;
> > 	vdpa_stop_device_t		stop;
> > 	vdpa_dma_map_t			dma_map;
> > 	vdpa_dma_unmap_t		dma_unmap;
> > 	vdpa_set_eventfd_t		set_eventfd;
> > 	vdpa_supported_features_t	supported_features;
> > 	vdpa_notify_device_t		notify;
> > 	vdpa_get_notify_addr_t		get_notify_addr;
> > };
> > 
> > struct vdpa_dev {
> > 	struct mdev_device *mdev;
> > 	struct mutex ops_lock;
> > 	u8 vconfig[VDPA_CONFIG_SIZE];
> > 	int nr_vring;
> > 	u64 features;
> > 	u64 state;
> > 	struct vhost_memory *mem_table;
> > 	bool pending_reply;
> > 	struct vhost_vfio_op pending;
> > 	const struct vdpa_device_ops *ops;
> > 	void *private;
> > 	int max_vrings;
> > 	struct vdpa_vring_info vring_info[0];
> > };
> > 
> > struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> > 			    int max_vrings);
> > void vdpa_free(struct vdpa_dev *vdpa);
> > 
> > A simple example
> > ================
> > 
> > # Query the number of available mdev instances
> > $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
> > 
> > # Create a mdev instance
> > $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
> > 
> > # Launch QEMU with a virtio-net device
> > $ qemu \
> > 	...... \
> > 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> > 	-device virtio-net-pci,netdev=$ID
> > 
> > -------- END --------
> > 
> > Most of above words will be refined and moved to a doc in
> > the formal patch. In this RFC, all introductions and code
> > are gathered in this patch, the idea is to make it easier
> > to find all the relevant information. Anyone who wants to
> > comment could use inline comment and just keep the relevant
> > parts. Sorry for the big RFC patch..
> > 
> > This patch is just a RFC for now, and something is still
> > missing or needs to be refined. But it's never too early
> > to hear the thoughts from the community. So any comments
> > would be appreciated! Thanks! :-)
> 
> I don't see vhost_vfio_write() and other above functions in the patch. Looks
> like some part of the patch is missed, it would be better to post a complete
> series with an example driver (vDPA) to get a full picture.

No problem. We will send out the QEMU changes soon!

Thanks!

> 
> Thanks
> 
[...]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  2:52   ` [virtio-dev] " Jason Wang
  (?)
  (?)
@ 2018-04-10  4:57   ` Tiwei Bie
  -1 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-10  4:57 UTC (permalink / raw)
  To: Jason Wang
  Cc: alexander.h.duyck, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
> On 2018年04月02日 23:23, Tiwei Bie wrote:
> > This patch introduces a mdev (mediated device) based hardware
> > vhost backend. This backend is an abstraction of the various
> > hardware vhost accelerators (potentially any device that uses
> > virtio ring can be used as a vhost accelerator). Some generic
> > mdev parent ops are provided for accelerator drivers to support
> > generating mdev instances.
> > 
> > What's this
> > ===========
> > 
> > The idea is that we can setup a virtio ring compatible device
> > with the messages available at the vhost-backend. Originally,
> > these messages are used to implement a software vhost backend,
> > but now we will use these messages to setup a virtio ring
> > compatible hardware device. Then the hardware device will be
> > able to work with the guest virtio driver in the VM just like
> > what the software backend does. That is to say, we can implement
> > a hardware based vhost backend in QEMU, and any virtio ring
> > compatible devices potentially can be used with this backend.
> > (We also call it vDPA -- vhost Data Path Acceleration).
> > 
> > One problem is that, different virtio ring compatible devices
> > may have different device interfaces. That is to say, we will
> > need different drivers in QEMU. It could be troublesome. And
> > that's what this patch trying to fix. The idea behind this
> > patch is very simple: mdev is a standard way to emulate device
> > in kernel.
> 
> So you just move the abstraction layer from qemu to kernel, and you still
> need different drivers in kernel for different device interfaces of
> accelerators. This looks even more complex than leaving it in qemu. As you
> said, another idea is to implement userspace vhost backend for accelerators
> which seems easier and could co-work with other parts of qemu without
> inventing new type of messages.

I'm not quite sure. Do you think it's acceptable to
add various vendor specific hardware drivers in QEMU?

> 
> Need careful thought here to seek a best solution here.

Yeah, definitely! :)
And your opinions would be very helpful!

> 
> >   So we defined a standard device based on mdev, which
> > is able to accept vhost messages. When the mdev emulation code
> > (i.e. the generic mdev parent ops provided by this patch) gets
> > vhost messages, it will parse and deliver them to accelerator
> > drivers. Drivers can use these messages to setup accelerators.
> > 
> > That is to say, the generic mdev parent ops (e.g. read()/write()/
> > ioctl()/...) will be provided for accelerator drivers to register
> > accelerators as mdev parent devices. And each accelerator device
> > will support generating standard mdev instance(s).
> > 
> > With this standard device interface, we will be able to just
> > develop one userspace driver to implement the hardware based
> > vhost backend in QEMU.
> > 
> > Difference between vDPA and PCI passthru
> > ========================================
> > 
> > The key difference between vDPA and PCI passthru is that, in
> > vDPA only the data path of the device (e.g. DMA ring, notify
> > region and queue interrupt) is pass-throughed to the VM, the
> > device control path (e.g. PCI configuration space and MMIO
> > regions) is still defined and emulated by QEMU.
> > 
> > The benefits of keeping virtio device emulation in QEMU compared
> > with virtio device PCI passthru include (but not limit to):
> > 
> > - consistent device interface for guest OS in the VM;
> > - max flexibility on the hardware design, especially the
> >    accelerator for each vhost backend doesn't have to be a
> >    full PCI device;
> > - leveraging the existing virtio live-migration framework;
> > 
> > The interface of this mdev based device
> > =======================================
> > 
> > 1. BAR0
> > 
> > The MMIO region described by BAR0 is the main control
> > interface. Messages will be written to or read from
> > this region.
> > 
> > The message type is determined by the `request` field
> > in message header. The message size is encoded in the
> > message header too. The message format looks like this:
> > 
> > struct vhost_vfio_op {
> > 	__u64 request;
> > 	__u32 flags;
> > 	/* Flag values: */
> > #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> > 	__u32 size;
> > 	union {
> > 		__u64 u64;
> > 		struct vhost_vring_state state;
> > 		struct vhost_vring_addr addr;
> > 		struct vhost_memory memory;
> > 	} payload;
> > };
> > 
> > The existing vhost-kernel ioctl cmds are reused as
> > the message requests in above structure.
> > 
> > Each message will be written to or read from this
> > region at offset 0:
> > 
> > int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	int ret;
> > 
> > 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	uint64_t request = op->request;
> > 	int ret;
> > 
> > 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count || request != op->request)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > It's quite straightforward to set things to the device.
> > Just need to write the message to device directly:
> > 
> > int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> > {
> > 	struct vhost_vfio_op op;
> > 
> > 	op.request = VHOST_SET_FEATURES;
> > 	op.flags = 0;
> > 	op.size = sizeof(features);
> > 	op.payload.u64 = features;
> > 
> > 	return vhost_vfio_write(dev, &op);
> > }
> > 
> > To get things from the device, two steps are needed.
> > Take VHOST_GET_FEATURE as an example:
> > 
> > int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> > {
> > 	struct vhost_vfio_op op;
> > 	int ret;
> > 
> > 	op.request = VHOST_GET_FEATURES;
> > 	op.flags = VHOST_VFIO_NEED_REPLY;
> > 	op.size = 0;
> > 
> > 	/* Just need to write the header */
> > 	ret = vhost_vfio_write(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	/* `op` wasn't changed during write */
> > 	op.flags = 0;
> > 	op.size = sizeof(*features);
> > 
> > 	ret = vhost_vfio_read(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	*features = op.payload.u64;
> > out:
> > 	return ret;
> > }
> > 
> > 2. BAR1 (mmap-able)
> > 
> > The MMIO region described by BAR1 will be used to notify the
> > device.
> > 
> > Each queue will has a page for notification, and it can be
> > mapped to VM (if hardware also supports), and the virtio
> > driver in the VM will be able to notify the device directly.
> > 
> > The MMIO region described by BAR1 is also write-able. If the
> > accelerator's notification register(s) cannot be mapped to the
> > VM, write() can also be used to notify the device. Something
> > like this:
> > 
> > void notify_relay(void *opaque)
> > {
> > 	......
> > 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
> > 
> > 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> > 			vfio->bar1_offset + offset);
> > 	......
> > }
> > 
> > Other BARs are reserved.
> > 
> > 3. VFIO interrupt ioctl API
> > 
> > VFIO interrupt ioctl API is used to setup device interrupts.
> > IRQ-bypass will also be supported.
> > 
> > Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
> > 
> > The API for drivers to provide mdev instances
> > =============================================
> > 
> > The read()/write()/ioctl()/mmap()/open()/release() mdev
> > parent ops have been provided for accelerators' drivers
> > to provide mdev instances.
> > 
> > ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> > 		  size_t count, loff_t *ppos);
> > ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> > 		   size_t count, loff_t *ppos);
> > long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> > int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> > int vdpa_open(struct mdev_device *mdev);
> > void vdpa_close(struct mdev_device *mdev);
> > 
> > Each accelerator driver just needs to implement its own
> > create()/remove() ops, and provide a vdpa device ops
> > which will be called by the generic mdev emulation code.
> > 
> > Currently, the vdpa device ops are defined as:
> > 
> > typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> > typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> > typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> > typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> > 
> > struct vdpa_device_ops {
> > 	vdpa_start_device_t		start;
> > 	vdpa_stop_device_t		stop;
> > 	vdpa_dma_map_t			dma_map;
> > 	vdpa_dma_unmap_t		dma_unmap;
> > 	vdpa_set_eventfd_t		set_eventfd;
> > 	vdpa_supported_features_t	supported_features;
> > 	vdpa_notify_device_t		notify;
> > 	vdpa_get_notify_addr_t		get_notify_addr;
> > };
> > 
> > struct vdpa_dev {
> > 	struct mdev_device *mdev;
> > 	struct mutex ops_lock;
> > 	u8 vconfig[VDPA_CONFIG_SIZE];
> > 	int nr_vring;
> > 	u64 features;
> > 	u64 state;
> > 	struct vhost_memory *mem_table;
> > 	bool pending_reply;
> > 	struct vhost_vfio_op pending;
> > 	const struct vdpa_device_ops *ops;
> > 	void *private;
> > 	int max_vrings;
> > 	struct vdpa_vring_info vring_info[0];
> > };
> > 
> > struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> > 			    int max_vrings);
> > void vdpa_free(struct vdpa_dev *vdpa);
> > 
> > A simple example
> > ================
> > 
> > # Query the number of available mdev instances
> > $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
> > 
> > # Create a mdev instance
> > $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
> > 
> > # Launch QEMU with a virtio-net device
> > $ qemu \
> > 	...... \
> > 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> > 	-device virtio-net-pci,netdev=$ID
> > 
> > -------- END --------
> > 
> > Most of above words will be refined and moved to a doc in
> > the formal patch. In this RFC, all introductions and code
> > are gathered in this patch, the idea is to make it easier
> > to find all the relevant information. Anyone who wants to
> > comment could use inline comment and just keep the relevant
> > parts. Sorry for the big RFC patch..
> > 
> > This patch is just a RFC for now, and something is still
> > missing or needs to be refined. But it's never too early
> > to hear the thoughts from the community. So any comments
> > would be appreciated! Thanks! :-)
> 
> I don't see vhost_vfio_write() and other above functions in the patch. Looks
> like some part of the patch is missed, it would be better to post a complete
> series with an example driver (vDPA) to get a full picture.

No problem. We will send out the QEMU changes soon!

Thanks!

> 
> Thanks
> 
[...]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10  4:57     ` Tiwei Bie
  0 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-10  4:57 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
> On 2018年04月02日 23:23, Tiwei Bie wrote:
> > This patch introduces a mdev (mediated device) based hardware
> > vhost backend. This backend is an abstraction of the various
> > hardware vhost accelerators (potentially any device that uses
> > virtio ring can be used as a vhost accelerator). Some generic
> > mdev parent ops are provided for accelerator drivers to support
> > generating mdev instances.
> > 
> > What's this
> > ===========
> > 
> > The idea is that we can setup a virtio ring compatible device
> > with the messages available at the vhost-backend. Originally,
> > these messages are used to implement a software vhost backend,
> > but now we will use these messages to setup a virtio ring
> > compatible hardware device. Then the hardware device will be
> > able to work with the guest virtio driver in the VM just like
> > what the software backend does. That is to say, we can implement
> > a hardware based vhost backend in QEMU, and any virtio ring
> > compatible devices potentially can be used with this backend.
> > (We also call it vDPA -- vhost Data Path Acceleration).
> > 
> > One problem is that, different virtio ring compatible devices
> > may have different device interfaces. That is to say, we will
> > need different drivers in QEMU. It could be troublesome. And
> > that's what this patch trying to fix. The idea behind this
> > patch is very simple: mdev is a standard way to emulate device
> > in kernel.
> 
> So you just move the abstraction layer from qemu to kernel, and you still
> need different drivers in kernel for different device interfaces of
> accelerators. This looks even more complex than leaving it in qemu. As you
> said, another idea is to implement userspace vhost backend for accelerators
> which seems easier and could co-work with other parts of qemu without
> inventing new type of messages.

I'm not quite sure. Do you think it's acceptable to
add various vendor specific hardware drivers in QEMU?

> 
> Need careful thought here to seek a best solution here.

Yeah, definitely! :)
And your opinions would be very helpful!

> 
> >   So we defined a standard device based on mdev, which
> > is able to accept vhost messages. When the mdev emulation code
> > (i.e. the generic mdev parent ops provided by this patch) gets
> > vhost messages, it will parse and deliver them to accelerator
> > drivers. Drivers can use these messages to setup accelerators.
> > 
> > That is to say, the generic mdev parent ops (e.g. read()/write()/
> > ioctl()/...) will be provided for accelerator drivers to register
> > accelerators as mdev parent devices. And each accelerator device
> > will support generating standard mdev instance(s).
> > 
> > With this standard device interface, we will be able to just
> > develop one userspace driver to implement the hardware based
> > vhost backend in QEMU.
> > 
> > Difference between vDPA and PCI passthru
> > ========================================
> > 
> > The key difference between vDPA and PCI passthru is that, in
> > vDPA only the data path of the device (e.g. DMA ring, notify
> > region and queue interrupt) is pass-throughed to the VM, the
> > device control path (e.g. PCI configuration space and MMIO
> > regions) is still defined and emulated by QEMU.
> > 
> > The benefits of keeping virtio device emulation in QEMU compared
> > with virtio device PCI passthru include (but not limit to):
> > 
> > - consistent device interface for guest OS in the VM;
> > - max flexibility on the hardware design, especially the
> >    accelerator for each vhost backend doesn't have to be a
> >    full PCI device;
> > - leveraging the existing virtio live-migration framework;
> > 
> > The interface of this mdev based device
> > =======================================
> > 
> > 1. BAR0
> > 
> > The MMIO region described by BAR0 is the main control
> > interface. Messages will be written to or read from
> > this region.
> > 
> > The message type is determined by the `request` field
> > in message header. The message size is encoded in the
> > message header too. The message format looks like this:
> > 
> > struct vhost_vfio_op {
> > 	__u64 request;
> > 	__u32 flags;
> > 	/* Flag values: */
> > #define VHOST_VFIO_NEED_REPLY 0x1 /* Whether need reply */
> > 	__u32 size;
> > 	union {
> > 		__u64 u64;
> > 		struct vhost_vring_state state;
> > 		struct vhost_vring_addr addr;
> > 		struct vhost_memory memory;
> > 	} payload;
> > };
> > 
> > The existing vhost-kernel ioctl cmds are reused as
> > the message requests in above structure.
> > 
> > Each message will be written to or read from this
> > region at offset 0:
> > 
> > int vhost_vfio_write(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	int ret;
> > 
> > 	ret = pwrite64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > int vhost_vfio_read(struct vhost_dev *dev, struct vhost_vfio_op *op)
> > {
> > 	int count = VHOST_VFIO_OP_HDR_SIZE + op->size;
> > 	struct vhost_vfio *vfio = dev->opaque;
> > 	uint64_t request = op->request;
> > 	int ret;
> > 
> > 	ret = pread64(vfio->device_fd, op, count, vfio->bar0_offset);
> > 	if (ret != count || request != op->request)
> > 		return -1;
> > 
> > 	return 0;
> > }
> > 
> > It's quite straightforward to set things to the device.
> > Just need to write the message to device directly:
> > 
> > int vhost_vfio_set_features(struct vhost_dev *dev, uint64_t features)
> > {
> > 	struct vhost_vfio_op op;
> > 
> > 	op.request = VHOST_SET_FEATURES;
> > 	op.flags = 0;
> > 	op.size = sizeof(features);
> > 	op.payload.u64 = features;
> > 
> > 	return vhost_vfio_write(dev, &op);
> > }
> > 
> > To get things from the device, two steps are needed.
> > Take VHOST_GET_FEATURE as an example:
> > 
> > int vhost_vfio_get_features(struct vhost_dev *dev, uint64_t *features)
> > {
> > 	struct vhost_vfio_op op;
> > 	int ret;
> > 
> > 	op.request = VHOST_GET_FEATURES;
> > 	op.flags = VHOST_VFIO_NEED_REPLY;
> > 	op.size = 0;
> > 
> > 	/* Just need to write the header */
> > 	ret = vhost_vfio_write(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	/* `op` wasn't changed during write */
> > 	op.flags = 0;
> > 	op.size = sizeof(*features);
> > 
> > 	ret = vhost_vfio_read(dev, &op);
> > 	if (ret != 0)
> > 		goto out;
> > 
> > 	*features = op.payload.u64;
> > out:
> > 	return ret;
> > }
> > 
> > 2. BAR1 (mmap-able)
> > 
> > The MMIO region described by BAR1 will be used to notify the
> > device.
> > 
> > Each queue will has a page for notification, and it can be
> > mapped to VM (if hardware also supports), and the virtio
> > driver in the VM will be able to notify the device directly.
> > 
> > The MMIO region described by BAR1 is also write-able. If the
> > accelerator's notification register(s) cannot be mapped to the
> > VM, write() can also be used to notify the device. Something
> > like this:
> > 
> > void notify_relay(void *opaque)
> > {
> > 	......
> > 	offset = 0x1000 * queue_idx; /* XXX assume page size is 4K here. */
> > 
> > 	ret = pwrite64(vfio->device_fd, &queue_idx, sizeof(queue_idx),
> > 			vfio->bar1_offset + offset);
> > 	......
> > }
> > 
> > Other BARs are reserved.
> > 
> > 3. VFIO interrupt ioctl API
> > 
> > VFIO interrupt ioctl API is used to setup device interrupts.
> > IRQ-bypass will also be supported.
> > 
> > Currently, only VFIO_PCI_MSIX_IRQ_INDEX is supported.
> > 
> > The API for drivers to provide mdev instances
> > =============================================
> > 
> > The read()/write()/ioctl()/mmap()/open()/release() mdev
> > parent ops have been provided for accelerators' drivers
> > to provide mdev instances.
> > 
> > ssize_t vdpa_read(struct mdev_device *mdev, char __user *buf,
> > 		  size_t count, loff_t *ppos);
> > ssize_t vdpa_write(struct mdev_device *mdev, const char __user *buf,
> > 		   size_t count, loff_t *ppos);
> > long vdpa_ioctl(struct mdev_device *mdev, unsigned int cmd, unsigned long arg);
> > int vdpa_mmap(struct mdev_device *mdev, struct vm_area_struct *vma);
> > int vdpa_open(struct mdev_device *mdev);
> > void vdpa_close(struct mdev_device *mdev);
> > 
> > Each accelerator driver just needs to implement its own
> > create()/remove() ops, and provide a vdpa device ops
> > which will be called by the generic mdev emulation code.
> > 
> > Currently, the vdpa device ops are defined as:
> > 
> > typedef int (*vdpa_start_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_stop_device_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_map_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_dma_unmap_t)(struct vdpa_dev *vdpa);
> > typedef int (*vdpa_set_eventfd_t)(struct vdpa_dev *vdpa, int vector, int fd);
> > typedef u64 (*vdpa_supported_features_t)(struct vdpa_dev *vdpa);
> > typedef void (*vdpa_notify_device_t)(struct vdpa_dev *vdpa, int qid);
> > typedef u64 (*vdpa_get_notify_addr_t)(struct vdpa_dev *vdpa, int qid);
> > 
> > struct vdpa_device_ops {
> > 	vdpa_start_device_t		start;
> > 	vdpa_stop_device_t		stop;
> > 	vdpa_dma_map_t			dma_map;
> > 	vdpa_dma_unmap_t		dma_unmap;
> > 	vdpa_set_eventfd_t		set_eventfd;
> > 	vdpa_supported_features_t	supported_features;
> > 	vdpa_notify_device_t		notify;
> > 	vdpa_get_notify_addr_t		get_notify_addr;
> > };
> > 
> > struct vdpa_dev {
> > 	struct mdev_device *mdev;
> > 	struct mutex ops_lock;
> > 	u8 vconfig[VDPA_CONFIG_SIZE];
> > 	int nr_vring;
> > 	u64 features;
> > 	u64 state;
> > 	struct vhost_memory *mem_table;
> > 	bool pending_reply;
> > 	struct vhost_vfio_op pending;
> > 	const struct vdpa_device_ops *ops;
> > 	void *private;
> > 	int max_vrings;
> > 	struct vdpa_vring_info vring_info[0];
> > };
> > 
> > struct vdpa_dev *vdpa_alloc(struct mdev_device *mdev, void *private,
> > 			    int max_vrings);
> > void vdpa_free(struct vdpa_dev *vdpa);
> > 
> > A simple example
> > ================
> > 
> > # Query the number of available mdev instances
> > $ cat /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/available_instances
> > 
> > # Create a mdev instance
> > $ echo $UUID > /sys/class/mdev_bus/0000:06:00.2/mdev_supported_types/ifcvf_vdpa-vdpa_virtio/create
> > 
> > # Launch QEMU with a virtio-net device
> > $ qemu \
> > 	...... \
> > 	-netdev type=vhost-vfio,sysfsdev=/sys/bus/mdev/devices/$UUID,id=$ID \
> > 	-device virtio-net-pci,netdev=$ID
> > 
> > -------- END --------
> > 
> > Most of above words will be refined and moved to a doc in
> > the formal patch. In this RFC, all introductions and code
> > are gathered in this patch, the idea is to make it easier
> > to find all the relevant information. Anyone who wants to
> > comment could use inline comment and just keep the relevant
> > parts. Sorry for the big RFC patch..
> > 
> > This patch is just a RFC for now, and something is still
> > missing or needs to be refined. But it's never too early
> > to hear the thoughts from the community. So any comments
> > would be appreciated! Thanks! :-)
> 
> I don't see vhost_vfio_write() and other above functions in the patch. Looks
> like some part of the patch is missed, it would be better to post a complete
> series with an example driver (vDPA) to get a full picture.

No problem. We will send out the QEMU changes soon!

Thanks!

> 
> Thanks
> 
[...]

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  4:57     ` [virtio-dev] " Tiwei Bie
@ 2018-04-10  7:25       ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  7:25 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月10日 12:57, Tiwei Bie wrote:
> On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
>> On 2018年04月02日 23:23, Tiwei Bie wrote:
>>> This patch introduces a mdev (mediated device) based hardware
>>> vhost backend. This backend is an abstraction of the various
>>> hardware vhost accelerators (potentially any device that uses
>>> virtio ring can be used as a vhost accelerator). Some generic
>>> mdev parent ops are provided for accelerator drivers to support
>>> generating mdev instances.
>>>
>>> What's this
>>> ===========
>>>
>>> The idea is that we can setup a virtio ring compatible device
>>> with the messages available at the vhost-backend. Originally,
>>> these messages are used to implement a software vhost backend,
>>> but now we will use these messages to setup a virtio ring
>>> compatible hardware device. Then the hardware device will be
>>> able to work with the guest virtio driver in the VM just like
>>> what the software backend does. That is to say, we can implement
>>> a hardware based vhost backend in QEMU, and any virtio ring
>>> compatible devices potentially can be used with this backend.
>>> (We also call it vDPA -- vhost Data Path Acceleration).
>>>
>>> One problem is that, different virtio ring compatible devices
>>> may have different device interfaces. That is to say, we will
>>> need different drivers in QEMU. It could be troublesome. And
>>> that's what this patch trying to fix. The idea behind this
>>> patch is very simple: mdev is a standard way to emulate device
>>> in kernel.
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?
>

I don't object but we need to figure out the advantages of doing it in 
qemu too.

Thanks

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  4:57     ` [virtio-dev] " Tiwei Bie
  (?)
@ 2018-04-10  7:25     ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  7:25 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: alexander.h.duyck, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang



On 2018年04月10日 12:57, Tiwei Bie wrote:
> On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
>> On 2018年04月02日 23:23, Tiwei Bie wrote:
>>> This patch introduces a mdev (mediated device) based hardware
>>> vhost backend. This backend is an abstraction of the various
>>> hardware vhost accelerators (potentially any device that uses
>>> virtio ring can be used as a vhost accelerator). Some generic
>>> mdev parent ops are provided for accelerator drivers to support
>>> generating mdev instances.
>>>
>>> What's this
>>> ===========
>>>
>>> The idea is that we can setup a virtio ring compatible device
>>> with the messages available at the vhost-backend. Originally,
>>> these messages are used to implement a software vhost backend,
>>> but now we will use these messages to setup a virtio ring
>>> compatible hardware device. Then the hardware device will be
>>> able to work with the guest virtio driver in the VM just like
>>> what the software backend does. That is to say, we can implement
>>> a hardware based vhost backend in QEMU, and any virtio ring
>>> compatible devices potentially can be used with this backend.
>>> (We also call it vDPA -- vhost Data Path Acceleration).
>>>
>>> One problem is that, different virtio ring compatible devices
>>> may have different device interfaces. That is to say, we will
>>> need different drivers in QEMU. It could be troublesome. And
>>> that's what this patch trying to fix. The idea behind this
>>> patch is very simple: mdev is a standard way to emulate device
>>> in kernel.
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?
>

I don't object but we need to figure out the advantages of doing it in 
qemu too.

Thanks
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10  7:25       ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-10  7:25 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月10日 12:57, Tiwei Bie wrote:
> On Tue, Apr 10, 2018 at 10:52:52AM +0800, Jason Wang wrote:
>> On 2018年04月02日 23:23, Tiwei Bie wrote:
>>> This patch introduces a mdev (mediated device) based hardware
>>> vhost backend. This backend is an abstraction of the various
>>> hardware vhost accelerators (potentially any device that uses
>>> virtio ring can be used as a vhost accelerator). Some generic
>>> mdev parent ops are provided for accelerator drivers to support
>>> generating mdev instances.
>>>
>>> What's this
>>> ===========
>>>
>>> The idea is that we can setup a virtio ring compatible device
>>> with the messages available at the vhost-backend. Originally,
>>> these messages are used to implement a software vhost backend,
>>> but now we will use these messages to setup a virtio ring
>>> compatible hardware device. Then the hardware device will be
>>> able to work with the guest virtio driver in the VM just like
>>> what the software backend does. That is to say, we can implement
>>> a hardware based vhost backend in QEMU, and any virtio ring
>>> compatible devices potentially can be used with this backend.
>>> (We also call it vDPA -- vhost Data Path Acceleration).
>>>
>>> One problem is that, different virtio ring compatible devices
>>> may have different device interfaces. That is to say, we will
>>> need different drivers in QEMU. It could be troublesome. And
>>> that's what this patch trying to fix. The idea behind this
>>> patch is very simple: mdev is a standard way to emulate device
>>> in kernel.
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?
>

I don't object but we need to figure out the advantages of doing it in 
qemu too.

Thanks

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  4:57     ` [virtio-dev] " Tiwei Bie
  (?)
@ 2018-04-10  7:51       ` Paolo Bonzini
  -1 siblings, 0 replies; 55+ messages in thread
From: Paolo Bonzini @ 2018-04-10  7:51 UTC (permalink / raw)
  To: Tiwei Bie, Jason Wang
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On 10/04/2018 06:57, Tiwei Bie wrote:
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> 
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?

I think so.  We have vendor-specific quirks, and at some point there was
an idea of using quirks to implement (vendor-specific) live migration
support for assigned devices.

Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10  7:51       ` Paolo Bonzini
  0 siblings, 0 replies; 55+ messages in thread
From: Paolo Bonzini @ 2018-04-10  7:51 UTC (permalink / raw)
  To: Tiwei Bie, Jason Wang
  Cc: alexander.h.duyck, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On 10/04/2018 06:57, Tiwei Bie wrote:
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> 
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?

I think so.  We have vendor-specific quirks, and at some point there was
an idea of using quirks to implement (vendor-specific) live migration
support for assigned devices.

Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10  7:51       ` Paolo Bonzini
  0 siblings, 0 replies; 55+ messages in thread
From: Paolo Bonzini @ 2018-04-10  7:51 UTC (permalink / raw)
  To: Tiwei Bie, Jason Wang
  Cc: mst, alex.williamson, ddutile, alexander.h.duyck, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On 10/04/2018 06:57, Tiwei Bie wrote:
>> So you just move the abstraction layer from qemu to kernel, and you still
>> need different drivers in kernel for different device interfaces of
>> accelerators. This looks even more complex than leaving it in qemu. As you
>> said, another idea is to implement userspace vhost backend for accelerators
>> which seems easier and could co-work with other parts of qemu without
>> inventing new type of messages.
> 
> I'm not quite sure. Do you think it's acceptable to
> add various vendor specific hardware drivers in QEMU?

I think so.  We have vendor-specific quirks, and at some point there was
an idea of using quirks to implement (vendor-specific) live migration
support for assigned devices.

Paolo

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  7:51       ` Paolo Bonzini
  (?)
  (?)
@ 2018-04-10  9:23       ` Liang, Cunming
  2018-04-10 13:36         ` Michael S. Tsirkin
                           ` (5 more replies)
  -1 siblings, 6 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-10  9:23 UTC (permalink / raw)
  To: Paolo Bonzini, Bie, Tiwei, Jason Wang
  Cc: mst, alex.williamson, ddutile, Duyck, Alexander H, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, Daly, Dan, Wang,
	Zhihong, Tan, Jianfeng, Wang, Xiao W



> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> Sent: Tuesday, April 10, 2018 3:52 PM
> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> Wang, Xiao W <xiao.w.wang@intel.com>
> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
> vhost backend
> 
> On 10/04/2018 06:57, Tiwei Bie wrote:
> >> So you just move the abstraction layer from qemu to kernel, and you
> >> still need different drivers in kernel for different device
> >> interfaces of accelerators. This looks even more complex than leaving
> >> it in qemu. As you said, another idea is to implement userspace vhost
> >> backend for accelerators which seems easier and could co-work with
> >> other parts of qemu without inventing new type of messages.
> >
> > I'm not quite sure. Do you think it's acceptable to add various vendor
> > specific hardware drivers in QEMU?
> 
> I think so.  We have vendor-specific quirks, and at some point there was an
> idea of using quirks to implement (vendor-specific) live migration support for
> assigned devices.

Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.

While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.

The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.

If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

Steve

> 
> Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  7:51       ` Paolo Bonzini
                         ` (2 preceding siblings ...)
  (?)
@ 2018-04-10  9:23       ` Liang, Cunming
  -1 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-10  9:23 UTC (permalink / raw)
  To: Paolo Bonzini, Bie, Tiwei, Jason Wang
  Cc: Duyck, Alexander H, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong



> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> Sent: Tuesday, April 10, 2018 3:52 PM
> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> Wang, Xiao W <xiao.w.wang@intel.com>
> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
> vhost backend
> 
> On 10/04/2018 06:57, Tiwei Bie wrote:
> >> So you just move the abstraction layer from qemu to kernel, and you
> >> still need different drivers in kernel for different device
> >> interfaces of accelerators. This looks even more complex than leaving
> >> it in qemu. As you said, another idea is to implement userspace vhost
> >> backend for accelerators which seems easier and could co-work with
> >> other parts of qemu without inventing new type of messages.
> >
> > I'm not quite sure. Do you think it's acceptable to add various vendor
> > specific hardware drivers in QEMU?
> 
> I think so.  We have vendor-specific quirks, and at some point there was an
> idea of using quirks to implement (vendor-specific) live migration support for
> assigned devices.

Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.

While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.

The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.

If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

Steve

> 
> Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
@ 2018-04-10 13:36           ` Michael S. Tsirkin
  2018-04-10 13:36           ` Michael S. Tsirkin
                             ` (4 subsequent siblings)
  5 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-10 13:36 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, alex.williamson, ddutile,
	Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > Sent: Tuesday, April 10, 2018 3:52 PM
> > To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
> > Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> > Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
> > open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> > virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
> > <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
> > Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > Wang, Xiao W <xiao.w.wang@intel.com>
> > Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
> > vhost backend
> > 
> > On 10/04/2018 06:57, Tiwei Bie wrote:
> > >> So you just move the abstraction layer from qemu to kernel, and you
> > >> still need different drivers in kernel for different device
> > >> interfaces of accelerators. This looks even more complex than leaving
> > >> it in qemu. As you said, another idea is to implement userspace vhost
> > >> backend for accelerators which seems easier and could co-work with
> > >> other parts of qemu without inventing new type of messages.
> > >
> > > I'm not quite sure. Do you think it's acceptable to add various vendor
> > > specific hardware drivers in QEMU?
> > 
> > I think so.  We have vendor-specific quirks, and at some point there was an
> > idea of using quirks to implement (vendor-specific) live migration support for
> > assigned devices.
> 
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
> 
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
> 
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.
> 
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.
> 
> Steve

Dependency on a kernel driver is fine IMHO. It's the dependency on a
DPDK backend that makes people unhappy, since the functionality
in question is setup-time only.

As others said, we do not need to go overeboard. A couple of small
vendor-specific quirks in qemu isn't a big deal.

> > 
> > Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
@ 2018-04-10 13:36         ` Michael S. Tsirkin
  2018-04-10 13:36           ` Michael S. Tsirkin
                           ` (4 subsequent siblings)
  5 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-10 13:36 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > Sent: Tuesday, April 10, 2018 3:52 PM
> > To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
> > Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> > Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
> > open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> > virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
> > <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
> > Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > Wang, Xiao W <xiao.w.wang@intel.com>
> > Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
> > vhost backend
> > 
> > On 10/04/2018 06:57, Tiwei Bie wrote:
> > >> So you just move the abstraction layer from qemu to kernel, and you
> > >> still need different drivers in kernel for different device
> > >> interfaces of accelerators. This looks even more complex than leaving
> > >> it in qemu. As you said, another idea is to implement userspace vhost
> > >> backend for accelerators which seems easier and could co-work with
> > >> other parts of qemu without inventing new type of messages.
> > >
> > > I'm not quite sure. Do you think it's acceptable to add various vendor
> > > specific hardware drivers in QEMU?
> > 
> > I think so.  We have vendor-specific quirks, and at some point there was an
> > idea of using quirks to implement (vendor-specific) live migration support for
> > assigned devices.
> 
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
> 
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
> 
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.
> 
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.
> 
> Steve

Dependency on a kernel driver is fine IMHO. It's the dependency on a
DPDK backend that makes people unhappy, since the functionality
in question is setup-time only.

As others said, we do not need to go overeboard. A couple of small
vendor-specific quirks in qemu isn't a big deal.

> > 
> > Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10 13:36           ` Michael S. Tsirkin
  0 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-10 13:36 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, alex.williamson, ddutile,
	Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > Sent: Tuesday, April 10, 2018 3:52 PM
> > To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
> > Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> > Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
> > open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> > virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
> > <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
> > Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > Wang, Xiao W <xiao.w.wang@intel.com>
> > Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
> > vhost backend
> > 
> > On 10/04/2018 06:57, Tiwei Bie wrote:
> > >> So you just move the abstraction layer from qemu to kernel, and you
> > >> still need different drivers in kernel for different device
> > >> interfaces of accelerators. This looks even more complex than leaving
> > >> it in qemu. As you said, another idea is to implement userspace vhost
> > >> backend for accelerators which seems easier and could co-work with
> > >> other parts of qemu without inventing new type of messages.
> > >
> > > I'm not quite sure. Do you think it's acceptable to add various vendor
> > > specific hardware drivers in QEMU?
> > 
> > I think so.  We have vendor-specific quirks, and at some point there was an
> > idea of using quirks to implement (vendor-specific) live migration support for
> > assigned devices.
> 
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
> 
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
> 
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.
> 
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.
> 
> Steve

Dependency on a kernel driver is fine IMHO. It's the dependency on a
DPDK backend that makes people unhappy, since the functionality
in question is setup-time only.

As others said, we do not need to go overeboard. A couple of small
vendor-specific quirks in qemu isn't a big deal.

> > 
> > Paolo

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10 13:36           ` Michael S. Tsirkin
@ 2018-04-10 14:23             ` Liang, Cunming
  -1 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-10 14:23 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, alex.williamson, ddutile,
	Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W



> -----Original Message-----
> From: Michael S. Tsirkin [mailto:mst@redhat.com]
> Sent: Tuesday, April 10, 2018 9:36 PM
> To: Liang, Cunming <cunming.liang@intel.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; Bie, Tiwei <tiwei.bie@intel.com>;
> Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Wang, Zhihong
> <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao
> W <xiao.w.wang@intel.com>
> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
> backend
> 
> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> >
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > > Sent: Tuesday, April 10, 2018 3:52 PM
> > > To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang
> > > <jasowang@redhat.com>
> > > Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> > > Duyck, Alexander H <alexander.h.duyck@intel.com>;
> > > virtio-dev@lists.oasis- open.org; linux-kernel@vger.kernel.org;
> > > kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > > netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang,
> > > Cunming <cunming.liang@intel.com>; Wang, Zhihong
> > > <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > > Wang, Xiao W <xiao.w.wang@intel.com>
> > > Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
> > > hardware vhost backend
> > >
> > > On 10/04/2018 06:57, Tiwei Bie wrote:
> > > >> So you just move the abstraction layer from qemu to kernel, and
> > > >> you still need different drivers in kernel for different device
> > > >> interfaces of accelerators. This looks even more complex than
> > > >> leaving it in qemu. As you said, another idea is to implement
> > > >> userspace vhost backend for accelerators which seems easier and
> > > >> could co-work with other parts of qemu without inventing new type of
> messages.
> > > >
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > >
> > > I think so.  We have vendor-specific quirks, and at some point there
> > > was an idea of using quirks to implement (vendor-specific) live
> > > migration support for assigned devices.
> >
> > Vendor-specific quirks of accessing VGA is a small portion. Other major portions
> are still handled by guest driver.
> >
> > While in this case, when saying various vendor specific drivers in QEMU, it says
> QEMU takes over and provides the entire user space device drivers. Some parts
> are even not relevant to vhost, they're basic device function enabling. Moreover,
> it could be different kinds of devices(network/block/...) under vhost. No matter
> # of vendors or # of types, total LOC is not small.
> >
> > The idea is to avoid introducing these extra complexity out of QEMU, keeping
> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
> device driver to provide the diversity. Changing once in QEMU, then it supports
> multi-vendor devices whose drivers naturally providing kernel driver there.
> >
> > If QEMU is going to build a user space driver framework there, we're open mind
> on that, even leveraging DPDK as the underlay library. Looking forward to more
> others' comments from community.
> >
> > Steve
> 
> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
> backend that makes people unhappy, since the functionality in question is setup-
> time only.

Agreed, we don't see dependency on kernel driver is a problem.

mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.

> 
> As others said, we do not need to go overeboard. A couple of small vendor-
> specific quirks in qemu isn't a big deal.

It's quite challenge to identify it's small or not, there's no uniform metric.

It's only dependent on QEMU itself, that's the obvious benefit. Tradeoff is to build the entire device driver. We don't object to do that in QEMU, but wanna make sure to understand the boundary size clearly.

> 
> > >
> > > Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-10 14:23             ` Liang, Cunming
  0 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-10 14:23 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, alex.williamson, ddutile,
	Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W



> -----Original Message-----
> From: Michael S. Tsirkin [mailto:mst@redhat.com]
> Sent: Tuesday, April 10, 2018 9:36 PM
> To: Liang, Cunming <cunming.liang@intel.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; Bie, Tiwei <tiwei.bie@intel.com>;
> Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Wang, Zhihong
> <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao
> W <xiao.w.wang@intel.com>
> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
> backend
> 
> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> >
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > > Sent: Tuesday, April 10, 2018 3:52 PM
> > > To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang
> > > <jasowang@redhat.com>
> > > Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
> > > Duyck, Alexander H <alexander.h.duyck@intel.com>;
> > > virtio-dev@lists.oasis- open.org; linux-kernel@vger.kernel.org;
> > > kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > > netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang,
> > > Cunming <cunming.liang@intel.com>; Wang, Zhihong
> > > <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
> > > Wang, Xiao W <xiao.w.wang@intel.com>
> > > Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
> > > hardware vhost backend
> > >
> > > On 10/04/2018 06:57, Tiwei Bie wrote:
> > > >> So you just move the abstraction layer from qemu to kernel, and
> > > >> you still need different drivers in kernel for different device
> > > >> interfaces of accelerators. This looks even more complex than
> > > >> leaving it in qemu. As you said, another idea is to implement
> > > >> userspace vhost backend for accelerators which seems easier and
> > > >> could co-work with other parts of qemu without inventing new type of
> messages.
> > > >
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > >
> > > I think so.  We have vendor-specific quirks, and at some point there
> > > was an idea of using quirks to implement (vendor-specific) live
> > > migration support for assigned devices.
> >
> > Vendor-specific quirks of accessing VGA is a small portion. Other major portions
> are still handled by guest driver.
> >
> > While in this case, when saying various vendor specific drivers in QEMU, it says
> QEMU takes over and provides the entire user space device drivers. Some parts
> are even not relevant to vhost, they're basic device function enabling. Moreover,
> it could be different kinds of devices(network/block/...) under vhost. No matter
> # of vendors or # of types, total LOC is not small.
> >
> > The idea is to avoid introducing these extra complexity out of QEMU, keeping
> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
> device driver to provide the diversity. Changing once in QEMU, then it supports
> multi-vendor devices whose drivers naturally providing kernel driver there.
> >
> > If QEMU is going to build a user space driver framework there, we're open mind
> on that, even leveraging DPDK as the underlay library. Looking forward to more
> others' comments from community.
> >
> > Steve
> 
> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
> backend that makes people unhappy, since the functionality in question is setup-
> time only.

Agreed, we don't see dependency on kernel driver is a problem.

mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.

> 
> As others said, we do not need to go overeboard. A couple of small vendor-
> specific quirks in qemu isn't a big deal.

It's quite challenge to identify it's small or not, there's no uniform metric.

It's only dependent on QEMU itself, that's the obvious benefit. Tradeoff is to build the entire device driver. We don't object to do that in QEMU, but wanna make sure to understand the boundary size clearly.

> 
> > >
> > > Paolo

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10 14:23             ` Liang, Cunming
@ 2018-04-11  1:38               ` Tian, Kevin
  -1 siblings, 0 replies; 55+ messages in thread
From: Tian, Kevin @ 2018-04-11  1:38 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini

> From: Liang, Cunming
> Sent: Tuesday, April 10, 2018 10:24 PM
> 
[...]
> >
> > As others said, we do not need to go overeboard. A couple of small
> vendor-
> > specific quirks in qemu isn't a big deal.
> 
> It's quite challenge to identify it's small or not, there's no uniform metric.
> 
> It's only dependent on QEMU itself, that's the obvious benefit. Tradeoff is
> to build the entire device driver. We don't object to do that in QEMU, but
> wanna make sure to understand the boundary size clearly.
> 

It might be helpful if you can post some sample code using proposed 
framework - then people have a clear feeling about what size is talked 
about here and whether it falls into the concept of 'small quirks'.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10 14:23             ` Liang, Cunming
  (?)
  (?)
@ 2018-04-11  1:38             ` Tian, Kevin
  -1 siblings, 0 replies; 55+ messages in thread
From: Tian, Kevin @ 2018-04-11  1:38 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini

> From: Liang, Cunming
> Sent: Tuesday, April 10, 2018 10:24 PM
> 
[...]
> >
> > As others said, we do not need to go overeboard. A couple of small
> vendor-
> > specific quirks in qemu isn't a big deal.
> 
> It's quite challenge to identify it's small or not, there's no uniform metric.
> 
> It's only dependent on QEMU itself, that's the obvious benefit. Tradeoff is
> to build the entire device driver. We don't object to do that in QEMU, but
> wanna make sure to understand the boundary size clearly.
> 

It might be helpful if you can post some sample code using proposed 
framework - then people have a clear feeling about what size is talked 
about here and whether it falls into the concept of 'small quirks'.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-11  1:38               ` Tian, Kevin
  0 siblings, 0 replies; 55+ messages in thread
From: Tian, Kevin @ 2018-04-11  1:38 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini

> From: Liang, Cunming
> Sent: Tuesday, April 10, 2018 10:24 PM
> 
[...]
> >
> > As others said, we do not need to go overeboard. A couple of small
> vendor-
> > specific quirks in qemu isn't a big deal.
> 
> It's quite challenge to identify it's small or not, there's no uniform metric.
> 
> It's only dependent on QEMU itself, that's the obvious benefit. Tradeoff is
> to build the entire device driver. We don't object to do that in QEMU, but
> wanna make sure to understand the boundary size clearly.
> 

It might be helpful if you can post some sample code using proposed 
framework - then people have a clear feeling about what size is talked 
about here and whether it falls into the concept of 'small quirks'.

Thanks
Kevin

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
@ 2018-04-11  2:01           ` Stefan Hajnoczi
  2018-04-10 13:36           ` Michael S. Tsirkin
                             ` (4 subsequent siblings)
  5 siblings, 0 replies; 55+ messages in thread
From: Stefan Hajnoczi @ 2018-04-11  2:01 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, mst, alex.williamson,
	ddutile, Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W

[-- Attachment #1: Type: text/plain, Size: 401 bytes --]

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

There is already an NVMe VFIO driver in QEMU (see block/nvme.c).  So in
principle there's no reason against userspace drivers in QEMU.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-11  2:01           ` Stefan Hajnoczi
  0 siblings, 0 replies; 55+ messages in thread
From: Stefan Hajnoczi @ 2018-04-11  2:01 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Paolo Bonzini, Bie, Tiwei, Jason Wang, mst, alex.williamson,
	ddutile, Duyck, Alexander H, virtio-dev, linux-kernel, kvm,
	virtualization, netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng,
	Wang, Xiao W

[-- Attachment #1: Type: text/plain, Size: 401 bytes --]

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

There is already an NVMe VFIO driver in QEMU (see block/nvme.c).  So in
principle there's no reason against userspace drivers in QEMU.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
  2018-04-10 13:36         ` Michael S. Tsirkin
  2018-04-10 13:36           ` Michael S. Tsirkin
@ 2018-04-11  2:01         ` Stefan Hajnoczi
  2018-04-11  2:01           ` Stefan Hajnoczi
                           ` (2 subsequent siblings)
  5 siblings, 0 replies; 55+ messages in thread
From: Stefan Hajnoczi @ 2018-04-11  2:01 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Duyck, Alexander H, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini


[-- Attachment #1.1: Type: text/plain, Size: 401 bytes --]

On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

There is already an NVMe VFIO driver in QEMU (see block/nvme.c).  So in
principle there's no reason against userspace drivers in QEMU.

Stefan

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]

[-- Attachment #2: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
@ 2018-04-11  2:08           ` Jason Wang
  2018-04-10 13:36           ` Michael S. Tsirkin
                             ` (4 subsequent siblings)
  5 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:08 UTC (permalink / raw)
  To: Liang, Cunming, Paolo Bonzini, Bie, Tiwei
  Cc: mst, alex.williamson, ddutile, Duyck, Alexander H, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, Daly, Dan, Wang,
	Zhihong, Tan, Jianfeng, Wang, Xiao W



On 2018年04月10日 17:23, Liang, Cunming wrote:
>
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>> Sent: Tuesday, April 10, 2018 3:52 PM
>> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
>> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
>> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
>> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
>> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
>> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
>> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
>> Wang, Xiao W <xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
>> vhost backend
>>
>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>> So you just move the abstraction layer from qemu to kernel, and you
>>>> still need different drivers in kernel for different device
>>>> interfaces of accelerators. This looks even more complex than leaving
>>>> it in qemu. As you said, another idea is to implement userspace vhost
>>>> backend for accelerators which seems easier and could co-work with
>>>> other parts of qemu without inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to add various vendor
>>> specific hardware drivers in QEMU?
>> I think so.  We have vendor-specific quirks, and at some point there was an
>> idea of using quirks to implement (vendor-specific) live migration support for
>> assigned devices.
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
>
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
>
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.

Let me clarify my question, it's not qemu vs kenrel but userspace vs 
kernel. It could be a library which could be linked to qemu. Doing it in 
userspace have the following obvious advantages:

- attack surface was limited to userspace
- easier to be maintained (compared to kernel driver)
- easier to be extended without introducing new userspace/kernel interfaces
- not tied to a specific operating system

If we want to do it in the kernel, need to consider to unify code 
between mdev device driver and generic driver. For net driver, maybe we 
can even consider to do it on top of exist drivers.

>
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

I'm doing this now by implementing vhost inside qemu IOThreads. Hope I 
can post RFC in few months.

Thanks

> Steve
>
>> Paolo

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  9:23       ` Liang, Cunming
                           ` (4 preceding siblings ...)
  2018-04-11  2:08           ` Jason Wang
@ 2018-04-11  2:08         ` Jason Wang
  5 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:08 UTC (permalink / raw)
  To: Liang, Cunming, Paolo Bonzini, Bie, Tiwei
  Cc: Duyck, Alexander H, virtio-dev, kvm, mst, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong



On 2018年04月10日 17:23, Liang, Cunming wrote:
>
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>> Sent: Tuesday, April 10, 2018 3:52 PM
>> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
>> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
>> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
>> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
>> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
>> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
>> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
>> Wang, Xiao W <xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
>> vhost backend
>>
>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>> So you just move the abstraction layer from qemu to kernel, and you
>>>> still need different drivers in kernel for different device
>>>> interfaces of accelerators. This looks even more complex than leaving
>>>> it in qemu. As you said, another idea is to implement userspace vhost
>>>> backend for accelerators which seems easier and could co-work with
>>>> other parts of qemu without inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to add various vendor
>>> specific hardware drivers in QEMU?
>> I think so.  We have vendor-specific quirks, and at some point there was an
>> idea of using quirks to implement (vendor-specific) live migration support for
>> assigned devices.
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
>
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
>
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.

Let me clarify my question, it's not qemu vs kenrel but userspace vs 
kernel. It could be a library which could be linked to qemu. Doing it in 
userspace have the following obvious advantages:

- attack surface was limited to userspace
- easier to be maintained (compared to kernel driver)
- easier to be extended without introducing new userspace/kernel interfaces
- not tied to a specific operating system

If we want to do it in the kernel, need to consider to unify code 
between mdev device driver and generic driver. For net driver, maybe we 
can even consider to do it on top of exist drivers.

>
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

I'm doing this now by implementing vhost inside qemu IOThreads. Hope I 
can post RFC in few months.

Thanks

> Steve
>
>> Paolo

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-11  2:08           ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:08 UTC (permalink / raw)
  To: Liang, Cunming, Paolo Bonzini, Bie, Tiwei
  Cc: mst, alex.williamson, ddutile, Duyck, Alexander H, virtio-dev,
	linux-kernel, kvm, virtualization, netdev, Daly, Dan, Wang,
	Zhihong, Tan, Jianfeng, Wang, Xiao W



On 2018年04月10日 17:23, Liang, Cunming wrote:
>
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>> Sent: Tuesday, April 10, 2018 3:52 PM
>> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
>> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
>> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
>> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
>> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
>> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
>> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
>> Wang, Xiao W <xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
>> vhost backend
>>
>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>> So you just move the abstraction layer from qemu to kernel, and you
>>>> still need different drivers in kernel for different device
>>>> interfaces of accelerators. This looks even more complex than leaving
>>>> it in qemu. As you said, another idea is to implement userspace vhost
>>>> backend for accelerators which seems easier and could co-work with
>>>> other parts of qemu without inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to add various vendor
>>> specific hardware drivers in QEMU?
>> I think so.  We have vendor-specific quirks, and at some point there was an
>> idea of using quirks to implement (vendor-specific) live migration support for
>> assigned devices.
> Vendor-specific quirks of accessing VGA is a small portion. Other major portions are still handled by guest driver.
>
> While in this case, when saying various vendor specific drivers in QEMU, it says QEMU takes over and provides the entire user space device drivers. Some parts are even not relevant to vhost, they're basic device function enabling. Moreover, it could be different kinds of devices(network/block/...) under vhost. No matter # of vendors or # of types, total LOC is not small.
>
> The idea is to avoid introducing these extra complexity out of QEMU, keeping vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel device driver to provide the diversity. Changing once in QEMU, then it supports multi-vendor devices whose drivers naturally providing kernel driver there.

Let me clarify my question, it's not qemu vs kenrel but userspace vs 
kernel. It could be a library which could be linked to qemu. Doing it in 
userspace have the following obvious advantages:

- attack surface was limited to userspace
- easier to be maintained (compared to kernel driver)
- easier to be extended without introducing new userspace/kernel interfaces
- not tied to a specific operating system

If we want to do it in the kernel, need to consider to unify code 
between mdev device driver and generic driver. For net driver, maybe we 
can even consider to do it on top of exist drivers.

>
> If QEMU is going to build a user space driver framework there, we're open mind on that, even leveraging DPDK as the underlay library. Looking forward to more others' comments from community.

I'm doing this now by implementing vhost inside qemu IOThreads. Hope I 
can post RFC in few months.

Thanks

> Steve
>
>> Paolo


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10 14:23             ` Liang, Cunming
  (?)
@ 2018-04-11  2:18               ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:18 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Paolo Bonzini, Bie, Tiwei, alex.williamson, ddutile, Duyck,
	Alexander H, virtio-dev, linux-kernel, kvm, virtualization,
	netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W



On 2018年04月10日 22:23, Liang, Cunming wrote:
>> -----Original Message-----
>> From: Michael S. Tsirkin [mailto:mst@redhat.com]
>> Sent: Tuesday, April 10, 2018 9:36 PM
>> To: Liang, Cunming<cunming.liang@intel.com>
>> Cc: Paolo Bonzini<pbonzini@redhat.com>; Bie, Tiwei<tiwei.bie@intel.com>;
>> Jason Wang<jasowang@redhat.com>;alex.williamson@redhat.com;
>> ddutile@redhat.com; Duyck, Alexander H<alexander.h.duyck@intel.com>;
>> virtio-dev@lists.oasis-open.org;linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Wang, Zhihong
>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>; Wang, Xiao
>> W<xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
>> backend
>>
>> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: Tuesday, April 10, 2018 3:52 PM
>>>> To: Bie, Tiwei<tiwei.bie@intel.com>; Jason Wang
>>>> <jasowang@redhat.com>
>>>> Cc:mst@redhat.com;alex.williamson@redhat.com;ddutile@redhat.com;
>>>> Duyck, Alexander H<alexander.h.duyck@intel.com>;
>>>> virtio-dev@lists.oasis- open.org;linux-kernel@vger.kernel.org;
>>>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>>>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Liang,
>>>> Cunming<cunming.liang@intel.com>; Wang, Zhihong
>>>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>;
>>>> Wang, Xiao W<xiao.w.wang@intel.com>
>>>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
>>>> hardware vhost backend
>>>>
>>>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>>>> So you just move the abstraction layer from qemu to kernel, and
>>>>>> you still need different drivers in kernel for different device
>>>>>> interfaces of accelerators. This looks even more complex than
>>>>>> leaving it in qemu. As you said, another idea is to implement
>>>>>> userspace vhost backend for accelerators which seems easier and
>>>>>> could co-work with other parts of qemu without inventing new type of
>> messages.
>>>>> I'm not quite sure. Do you think it's acceptable to add various
>>>>> vendor specific hardware drivers in QEMU?
>>>> I think so.  We have vendor-specific quirks, and at some point there
>>>> was an idea of using quirks to implement (vendor-specific) live
>>>> migration support for assigned devices.
>>> Vendor-specific quirks of accessing VGA is a small portion. Other major portions
>> are still handled by guest driver.
>>> While in this case, when saying various vendor specific drivers in QEMU, it says
>> QEMU takes over and provides the entire user space device drivers. Some parts
>> are even not relevant to vhost, they're basic device function enabling. Moreover,
>> it could be different kinds of devices(network/block/...) under vhost. No matter
>> # of vendors or # of types, total LOC is not small.
>>> The idea is to avoid introducing these extra complexity out of QEMU, keeping
>> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
>> device driver to provide the diversity. Changing once in QEMU, then it supports
>> multi-vendor devices whose drivers naturally providing kernel driver there.
>>> If QEMU is going to build a user space driver framework there, we're open mind
>> on that, even leveraging DPDK as the underlay library. Looking forward to more
>> others' comments from community.
>>> Steve
>> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
>> backend that makes people unhappy, since the functionality in question is setup-
>> time only.
> Agreed, we don't see dependency on kernel driver is a problem.

At engineering level, kernel driver is harder than userspace driver.

>
> mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.
>

Well we can treat mdev as another kind of transport of vhost-user. And 
technically we can even implement a relay mdev than forward vhost-user 
messages to dpdk.

Thanks

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-11  2:18               ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:18 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Paolo Bonzini, Bie, Tiwei, alex.williamson, ddutile, Duyck,
	Alexander H, virtio-dev, linux-kernel, kvm, virtualization,
	netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W



On 2018年04月10日 22:23, Liang, Cunming wrote:
>> -----Original Message-----
>> From: Michael S. Tsirkin [mailto:mst@redhat.com]
>> Sent: Tuesday, April 10, 2018 9:36 PM
>> To: Liang, Cunming<cunming.liang@intel.com>
>> Cc: Paolo Bonzini<pbonzini@redhat.com>; Bie, Tiwei<tiwei.bie@intel.com>;
>> Jason Wang<jasowang@redhat.com>;alex.williamson@redhat.com;
>> ddutile@redhat.com; Duyck, Alexander H<alexander.h.duyck@intel.com>;
>> virtio-dev@lists.oasis-open.org;linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Wang, Zhihong
>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>; Wang, Xiao
>> W<xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
>> backend
>>
>> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: Tuesday, April 10, 2018 3:52 PM
>>>> To: Bie, Tiwei<tiwei.bie@intel.com>; Jason Wang
>>>> <jasowang@redhat.com>
>>>> Cc:mst@redhat.com;alex.williamson@redhat.com;ddutile@redhat.com;
>>>> Duyck, Alexander H<alexander.h.duyck@intel.com>;
>>>> virtio-dev@lists.oasis- open.org;linux-kernel@vger.kernel.org;
>>>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>>>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Liang,
>>>> Cunming<cunming.liang@intel.com>; Wang, Zhihong
>>>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>;
>>>> Wang, Xiao W<xiao.w.wang@intel.com>
>>>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
>>>> hardware vhost backend
>>>>
>>>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>>>> So you just move the abstraction layer from qemu to kernel, and
>>>>>> you still need different drivers in kernel for different device
>>>>>> interfaces of accelerators. This looks even more complex than
>>>>>> leaving it in qemu. As you said, another idea is to implement
>>>>>> userspace vhost backend for accelerators which seems easier and
>>>>>> could co-work with other parts of qemu without inventing new type of
>> messages.
>>>>> I'm not quite sure. Do you think it's acceptable to add various
>>>>> vendor specific hardware drivers in QEMU?
>>>> I think so.  We have vendor-specific quirks, and at some point there
>>>> was an idea of using quirks to implement (vendor-specific) live
>>>> migration support for assigned devices.
>>> Vendor-specific quirks of accessing VGA is a small portion. Other major portions
>> are still handled by guest driver.
>>> While in this case, when saying various vendor specific drivers in QEMU, it says
>> QEMU takes over and provides the entire user space device drivers. Some parts
>> are even not relevant to vhost, they're basic device function enabling. Moreover,
>> it could be different kinds of devices(network/block/...) under vhost. No matter
>> # of vendors or # of types, total LOC is not small.
>>> The idea is to avoid introducing these extra complexity out of QEMU, keeping
>> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
>> device driver to provide the diversity. Changing once in QEMU, then it supports
>> multi-vendor devices whose drivers naturally providing kernel driver there.
>>> If QEMU is going to build a user space driver framework there, we're open mind
>> on that, even leveraging DPDK as the underlay library. Looking forward to more
>> others' comments from community.
>>> Steve
>> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
>> backend that makes people unhappy, since the functionality in question is setup-
>> time only.
> Agreed, we don't see dependency on kernel driver is a problem.

At engineering level, kernel driver is harder than userspace driver.

>
> mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.
>

Well we can treat mdev as another kind of transport of vhost-user. And 
technically we can even implement a relay mdev than forward vhost-user 
messages to dpdk.

Thanks

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10 14:23             ` Liang, Cunming
                               ` (2 preceding siblings ...)
  (?)
@ 2018-04-11  2:18             ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:18 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong, Paolo Bonzini



On 2018年04月10日 22:23, Liang, Cunming wrote:
>> -----Original Message-----
>> From: Michael S. Tsirkin [mailto:mst@redhat.com]
>> Sent: Tuesday, April 10, 2018 9:36 PM
>> To: Liang, Cunming<cunming.liang@intel.com>
>> Cc: Paolo Bonzini<pbonzini@redhat.com>; Bie, Tiwei<tiwei.bie@intel.com>;
>> Jason Wang<jasowang@redhat.com>;alex.williamson@redhat.com;
>> ddutile@redhat.com; Duyck, Alexander H<alexander.h.duyck@intel.com>;
>> virtio-dev@lists.oasis-open.org;linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Wang, Zhihong
>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>; Wang, Xiao
>> W<xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
>> backend
>>
>> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: Tuesday, April 10, 2018 3:52 PM
>>>> To: Bie, Tiwei<tiwei.bie@intel.com>; Jason Wang
>>>> <jasowang@redhat.com>
>>>> Cc:mst@redhat.com;alex.williamson@redhat.com;ddutile@redhat.com;
>>>> Duyck, Alexander H<alexander.h.duyck@intel.com>;
>>>> virtio-dev@lists.oasis- open.org;linux-kernel@vger.kernel.org;
>>>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>>>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Liang,
>>>> Cunming<cunming.liang@intel.com>; Wang, Zhihong
>>>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>;
>>>> Wang, Xiao W<xiao.w.wang@intel.com>
>>>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
>>>> hardware vhost backend
>>>>
>>>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>>>> So you just move the abstraction layer from qemu to kernel, and
>>>>>> you still need different drivers in kernel for different device
>>>>>> interfaces of accelerators. This looks even more complex than
>>>>>> leaving it in qemu. As you said, another idea is to implement
>>>>>> userspace vhost backend for accelerators which seems easier and
>>>>>> could co-work with other parts of qemu without inventing new type of
>> messages.
>>>>> I'm not quite sure. Do you think it's acceptable to add various
>>>>> vendor specific hardware drivers in QEMU?
>>>> I think so.  We have vendor-specific quirks, and at some point there
>>>> was an idea of using quirks to implement (vendor-specific) live
>>>> migration support for assigned devices.
>>> Vendor-specific quirks of accessing VGA is a small portion. Other major portions
>> are still handled by guest driver.
>>> While in this case, when saying various vendor specific drivers in QEMU, it says
>> QEMU takes over and provides the entire user space device drivers. Some parts
>> are even not relevant to vhost, they're basic device function enabling. Moreover,
>> it could be different kinds of devices(network/block/...) under vhost. No matter
>> # of vendors or # of types, total LOC is not small.
>>> The idea is to avoid introducing these extra complexity out of QEMU, keeping
>> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
>> device driver to provide the diversity. Changing once in QEMU, then it supports
>> multi-vendor devices whose drivers naturally providing kernel driver there.
>>> If QEMU is going to build a user space driver framework there, we're open mind
>> on that, even leveraging DPDK as the underlay library. Looking forward to more
>> others' comments from community.
>>> Steve
>> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
>> backend that makes people unhappy, since the functionality in question is setup-
>> time only.
> Agreed, we don't see dependency on kernel driver is a problem.

At engineering level, kernel driver is harder than userspace driver.

>
> mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.
>

Well we can treat mdev as another kind of transport of vhost-user. And 
technically we can even implement a relay mdev than forward vhost-user 
messages to dpdk.

Thanks
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-11  2:18               ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-11  2:18 UTC (permalink / raw)
  To: Liang, Cunming, Michael S. Tsirkin
  Cc: Paolo Bonzini, Bie, Tiwei, alex.williamson, ddutile, Duyck,
	Alexander H, virtio-dev, linux-kernel, kvm, virtualization,
	netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W



On 2018年04月10日 22:23, Liang, Cunming wrote:
>> -----Original Message-----
>> From: Michael S. Tsirkin [mailto:mst@redhat.com]
>> Sent: Tuesday, April 10, 2018 9:36 PM
>> To: Liang, Cunming<cunming.liang@intel.com>
>> Cc: Paolo Bonzini<pbonzini@redhat.com>; Bie, Tiwei<tiwei.bie@intel.com>;
>> Jason Wang<jasowang@redhat.com>;alex.williamson@redhat.com;
>> ddutile@redhat.com; Duyck, Alexander H<alexander.h.duyck@intel.com>;
>> virtio-dev@lists.oasis-open.org;linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Wang, Zhihong
>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>; Wang, Xiao
>> W<xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost
>> backend
>>
>> On Tue, Apr 10, 2018 at 09:23:53AM +0000, Liang, Cunming wrote:
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: Tuesday, April 10, 2018 3:52 PM
>>>> To: Bie, Tiwei<tiwei.bie@intel.com>; Jason Wang
>>>> <jasowang@redhat.com>
>>>> Cc:mst@redhat.com;alex.williamson@redhat.com;ddutile@redhat.com;
>>>> Duyck, Alexander H<alexander.h.duyck@intel.com>;
>>>> virtio-dev@lists.oasis- open.org;linux-kernel@vger.kernel.org;
>>>> kvm@vger.kernel.org;virtualization@lists.linux-foundation.org;
>>>> netdev@vger.kernel.org; Daly, Dan<dan.daly@intel.com>; Liang,
>>>> Cunming<cunming.liang@intel.com>; Wang, Zhihong
>>>> <zhihong.wang@intel.com>; Tan, Jianfeng<jianfeng.tan@intel.com>;
>>>> Wang, Xiao W<xiao.w.wang@intel.com>
>>>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based
>>>> hardware vhost backend
>>>>
>>>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>>>> So you just move the abstraction layer from qemu to kernel, and
>>>>>> you still need different drivers in kernel for different device
>>>>>> interfaces of accelerators. This looks even more complex than
>>>>>> leaving it in qemu. As you said, another idea is to implement
>>>>>> userspace vhost backend for accelerators which seems easier and
>>>>>> could co-work with other parts of qemu without inventing new type of
>> messages.
>>>>> I'm not quite sure. Do you think it's acceptable to add various
>>>>> vendor specific hardware drivers in QEMU?
>>>> I think so.  We have vendor-specific quirks, and at some point there
>>>> was an idea of using quirks to implement (vendor-specific) live
>>>> migration support for assigned devices.
>>> Vendor-specific quirks of accessing VGA is a small portion. Other major portions
>> are still handled by guest driver.
>>> While in this case, when saying various vendor specific drivers in QEMU, it says
>> QEMU takes over and provides the entire user space device drivers. Some parts
>> are even not relevant to vhost, they're basic device function enabling. Moreover,
>> it could be different kinds of devices(network/block/...) under vhost. No matter
>> # of vendors or # of types, total LOC is not small.
>>> The idea is to avoid introducing these extra complexity out of QEMU, keeping
>> vhost adapter simple. As vhost protocol is de factor standard, it leverages kernel
>> device driver to provide the diversity. Changing once in QEMU, then it supports
>> multi-vendor devices whose drivers naturally providing kernel driver there.
>>> If QEMU is going to build a user space driver framework there, we're open mind
>> on that, even leveraging DPDK as the underlay library. Looking forward to more
>> others' comments from community.
>>> Steve
>> Dependency on a kernel driver is fine IMHO. It's the dependency on a DPDK
>> backend that makes people unhappy, since the functionality in question is setup-
>> time only.
> Agreed, we don't see dependency on kernel driver is a problem.

At engineering level, kernel driver is harder than userspace driver.

>
> mdev based vhost backend (this patch set) is independent with vhost-user extension patch set. In fact, there're a few vhost-user providers, DPDK librte_vhost is one of them. FD.IO/VPP and snabbswitch have their own vhost-user providers. So I can't agree on vhost-user extension patch depends on DPDK backend. But anyway, that's the topic of another mail thread.
>

Well we can treat mdev as another kind of transport of vhost-user. And 
technically we can even implement a relay mdev than forward vhost-user 
messages to dpdk.

Thanks

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  7:25       ` [virtio-dev] " Jason Wang
@ 2018-04-19 18:40         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-19 18:40 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > One problem is that, different virtio ring compatible devices
> > > > may have different device interfaces. That is to say, we will
> > > > need different drivers in QEMU. It could be troublesome. And
> > > > that's what this patch trying to fix. The idea behind this
> > > > patch is very simple: mdev is a standard way to emulate device
> > > > in kernel.
> > > So you just move the abstraction layer from qemu to kernel, and you still
> > > need different drivers in kernel for different device interfaces of
> > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > said, another idea is to implement userspace vhost backend for accelerators
> > > which seems easier and could co-work with other parts of qemu without
> > > inventing new type of messages.
> > I'm not quite sure. Do you think it's acceptable to
> > add various vendor specific hardware drivers in QEMU?
> > 
> 
> I don't object but we need to figure out the advantages of doing it in qemu
> too.
> 
> Thanks

To be frank kernel is exactly where device drivers belong.  DPDK did
move them to userspace but that's merely a requirement for data path.
*If* you can have them in kernel that is best:
- update kernel and there's no need to rebuild userspace
- apps can be written in any language no need to maintain multiple
  libraries or add wrappers
- security concerns are much smaller (ok people are trying to
  raise the bar with IOMMUs and such, but it's already pretty
  good even without)

The biggest issue is that you let userspace poke at the
device which is also allowed by the IOMMU to poke at
kernel memory (needed for kernel driver to work).

Yes, maybe if device is not buggy it's all fine, but
it's better if we do not have to trust the device
otherwise the security picture becomes more murky.

I suggested attaching a PASID to (some) queues - see my old post "using
PASIDs to enable a safe variant of direct ring access".

Then using IOMMU with VFIO to limit access through queue to corrent
ranges of memory.


-- 
MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-10  7:25       ` [virtio-dev] " Jason Wang
  (?)
@ 2018-04-19 18:40       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-19 18:40 UTC (permalink / raw)
  To: Jason Wang
  Cc: alexander.h.duyck, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > One problem is that, different virtio ring compatible devices
> > > > may have different device interfaces. That is to say, we will
> > > > need different drivers in QEMU. It could be troublesome. And
> > > > that's what this patch trying to fix. The idea behind this
> > > > patch is very simple: mdev is a standard way to emulate device
> > > > in kernel.
> > > So you just move the abstraction layer from qemu to kernel, and you still
> > > need different drivers in kernel for different device interfaces of
> > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > said, another idea is to implement userspace vhost backend for accelerators
> > > which seems easier and could co-work with other parts of qemu without
> > > inventing new type of messages.
> > I'm not quite sure. Do you think it's acceptable to
> > add various vendor specific hardware drivers in QEMU?
> > 
> 
> I don't object but we need to figure out the advantages of doing it in qemu
> too.
> 
> Thanks

To be frank kernel is exactly where device drivers belong.  DPDK did
move them to userspace but that's merely a requirement for data path.
*If* you can have them in kernel that is best:
- update kernel and there's no need to rebuild userspace
- apps can be written in any language no need to maintain multiple
  libraries or add wrappers
- security concerns are much smaller (ok people are trying to
  raise the bar with IOMMUs and such, but it's already pretty
  good even without)

The biggest issue is that you let userspace poke at the
device which is also allowed by the IOMMU to poke at
kernel memory (needed for kernel driver to work).

Yes, maybe if device is not buggy it's all fine, but
it's better if we do not have to trust the device
otherwise the security picture becomes more murky.

I suggested attaching a PASID to (some) queues - see my old post "using
PASIDs to enable a safe variant of direct ring access".

Then using IOMMU with VFIO to limit access through queue to corrent
ranges of memory.


-- 
MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-19 18:40         ` Michael S. Tsirkin
  0 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-19 18:40 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > One problem is that, different virtio ring compatible devices
> > > > may have different device interfaces. That is to say, we will
> > > > need different drivers in QEMU. It could be troublesome. And
> > > > that's what this patch trying to fix. The idea behind this
> > > > patch is very simple: mdev is a standard way to emulate device
> > > > in kernel.
> > > So you just move the abstraction layer from qemu to kernel, and you still
> > > need different drivers in kernel for different device interfaces of
> > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > said, another idea is to implement userspace vhost backend for accelerators
> > > which seems easier and could co-work with other parts of qemu without
> > > inventing new type of messages.
> > I'm not quite sure. Do you think it's acceptable to
> > add various vendor specific hardware drivers in QEMU?
> > 
> 
> I don't object but we need to figure out the advantages of doing it in qemu
> too.
> 
> Thanks

To be frank kernel is exactly where device drivers belong.  DPDK did
move them to userspace but that's merely a requirement for data path.
*If* you can have them in kernel that is best:
- update kernel and there's no need to rebuild userspace
- apps can be written in any language no need to maintain multiple
  libraries or add wrappers
- security concerns are much smaller (ok people are trying to
  raise the bar with IOMMUs and such, but it's already pretty
  good even without)

The biggest issue is that you let userspace poke at the
device which is also allowed by the IOMMU to poke at
kernel memory (needed for kernel driver to work).

Yes, maybe if device is not buggy it's all fine, but
it's better if we do not have to trust the device
otherwise the security picture becomes more murky.

I suggested attaching a PASID to (some) queues - see my old post "using
PASIDs to enable a safe variant of direct ring access".

Then using IOMMU with VFIO to limit access through queue to corrent
ranges of memory.


-- 
MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-19 18:40         ` [virtio-dev] " Michael S. Tsirkin
@ 2018-04-20  3:28           ` Tiwei Bie
  -1 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-20  3:28 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	kevin.tian

On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > One problem is that, different virtio ring compatible devices
> > > > > may have different device interfaces. That is to say, we will
> > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > that's what this patch trying to fix. The idea behind this
> > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > in kernel.
> > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > need different drivers in kernel for different device interfaces of
> > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > which seems easier and could co-work with other parts of qemu without
> > > > inventing new type of messages.
> > > I'm not quite sure. Do you think it's acceptable to
> > > add various vendor specific hardware drivers in QEMU?
> > > 
> > 
> > I don't object but we need to figure out the advantages of doing it in qemu
> > too.
> > 
> > Thanks
> 
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace
> - apps can be written in any language no need to maintain multiple
>   libraries or add wrappers
> - security concerns are much smaller (ok people are trying to
>   raise the bar with IOMMUs and such, but it's already pretty
>   good even without)
> 
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I think the device won't and shouldn't be allowed to
poke at kernel memory. Its kernel driver needs some
kernel memory to work. But the device doesn't have
the access to them. Instead, the device only has the
access to:

(1) the entire memory of the VM (if vIOMMU isn't used)
or
(2) the memory belongs to the guest virtio device (if
    vIOMMU is being used).

Below is the reason:

For the first case, we should program the IOMMU for
the hardware device based on the info in the memory
table which is the entire memory of the VM.

For the second case, we should program the IOMMU for
the hardware device based on the info in the shadow
page table of the vIOMMU.

So the memory can be accessed by the device is limited,
it should be safe especially for the second case.

My concern is that, in this RFC, we don't program the
IOMMU for the mdev device in the userspace via the VFIO
API directly. Instead, we pass the memory table to the
kernel driver via the mdev device (BAR0) and ask the
driver to do the IOMMU programming. Someone may don't
like it. The main reason why we don't program IOMMU via
VFIO API in userspace directly is that, currently IOMMU
drivers don't support mdev bus.

> 
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
> 
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".

It's pretty cool. We also have some similar ideas.
Cunming will talk more about this.

Best regards,
Tiwei Bie

> 
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.
> 
> 
> -- 
> MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-19 18:40         ` [virtio-dev] " Michael S. Tsirkin
  (?)
@ 2018-04-20  3:28         ` Tiwei Bie
  -1 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-20  3:28 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: alexander.h.duyck, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > One problem is that, different virtio ring compatible devices
> > > > > may have different device interfaces. That is to say, we will
> > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > that's what this patch trying to fix. The idea behind this
> > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > in kernel.
> > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > need different drivers in kernel for different device interfaces of
> > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > which seems easier and could co-work with other parts of qemu without
> > > > inventing new type of messages.
> > > I'm not quite sure. Do you think it's acceptable to
> > > add various vendor specific hardware drivers in QEMU?
> > > 
> > 
> > I don't object but we need to figure out the advantages of doing it in qemu
> > too.
> > 
> > Thanks
> 
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace
> - apps can be written in any language no need to maintain multiple
>   libraries or add wrappers
> - security concerns are much smaller (ok people are trying to
>   raise the bar with IOMMUs and such, but it's already pretty
>   good even without)
> 
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I think the device won't and shouldn't be allowed to
poke at kernel memory. Its kernel driver needs some
kernel memory to work. But the device doesn't have
the access to them. Instead, the device only has the
access to:

(1) the entire memory of the VM (if vIOMMU isn't used)
or
(2) the memory belongs to the guest virtio device (if
    vIOMMU is being used).

Below is the reason:

For the first case, we should program the IOMMU for
the hardware device based on the info in the memory
table which is the entire memory of the VM.

For the second case, we should program the IOMMU for
the hardware device based on the info in the shadow
page table of the vIOMMU.

So the memory can be accessed by the device is limited,
it should be safe especially for the second case.

My concern is that, in this RFC, we don't program the
IOMMU for the mdev device in the userspace via the VFIO
API directly. Instead, we pass the memory table to the
kernel driver via the mdev device (BAR0) and ask the
driver to do the IOMMU programming. Someone may don't
like it. The main reason why we don't program IOMMU via
VFIO API in userspace directly is that, currently IOMMU
drivers don't support mdev bus.

> 
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
> 
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".

It's pretty cool. We also have some similar ideas.
Cunming will talk more about this.

Best regards,
Tiwei Bie

> 
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.
> 
> 
> -- 
> MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20  3:28           ` Tiwei Bie
  0 siblings, 0 replies; 55+ messages in thread
From: Tiwei Bie @ 2018-04-20  3:28 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	kevin.tian

On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > One problem is that, different virtio ring compatible devices
> > > > > may have different device interfaces. That is to say, we will
> > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > that's what this patch trying to fix. The idea behind this
> > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > in kernel.
> > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > need different drivers in kernel for different device interfaces of
> > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > which seems easier and could co-work with other parts of qemu without
> > > > inventing new type of messages.
> > > I'm not quite sure. Do you think it's acceptable to
> > > add various vendor specific hardware drivers in QEMU?
> > > 
> > 
> > I don't object but we need to figure out the advantages of doing it in qemu
> > too.
> > 
> > Thanks
> 
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace
> - apps can be written in any language no need to maintain multiple
>   libraries or add wrappers
> - security concerns are much smaller (ok people are trying to
>   raise the bar with IOMMUs and such, but it's already pretty
>   good even without)
> 
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I think the device won't and shouldn't be allowed to
poke at kernel memory. Its kernel driver needs some
kernel memory to work. But the device doesn't have
the access to them. Instead, the device only has the
access to:

(1) the entire memory of the VM (if vIOMMU isn't used)
or
(2) the memory belongs to the guest virtio device (if
    vIOMMU is being used).

Below is the reason:

For the first case, we should program the IOMMU for
the hardware device based on the info in the memory
table which is the entire memory of the VM.

For the second case, we should program the IOMMU for
the hardware device based on the info in the shadow
page table of the vIOMMU.

So the memory can be accessed by the device is limited,
it should be safe especially for the second case.

My concern is that, in this RFC, we don't program the
IOMMU for the mdev device in the userspace via the VFIO
API directly. Instead, we pass the memory table to the
kernel driver via the mdev device (BAR0) and ask the
driver to do the IOMMU programming. Someone may don't
like it. The main reason why we don't program IOMMU via
VFIO API in userspace directly is that, currently IOMMU
drivers don't support mdev bus.

> 
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
> 
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".

It's pretty cool. We also have some similar ideas.
Cunming will talk more about this.

Best regards,
Tiwei Bie

> 
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.
> 
> 
> -- 
> MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:28           ` [virtio-dev] " Tiwei Bie
@ 2018-04-20  3:50             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	kevin.tian

On Fri, Apr 20, 2018 at 11:28:07AM +0800, Tiwei Bie wrote:
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > > need different drivers in kernel for different device interfaces of
> > > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > > which seems easier and could co-work with other parts of qemu without
> > > > > inventing new type of messages.
> > > > I'm not quite sure. Do you think it's acceptable to
> > > > add various vendor specific hardware drivers in QEMU?
> > > > 
> > > 
> > > I don't object but we need to figure out the advantages of doing it in qemu
> > > too.
> > > 
> > > Thanks
> > 
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> > 
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to
> poke at kernel memory. Its kernel driver needs some
> kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the
> access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used)
> or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for
> the hardware device based on the info in the memory
> table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for
> the hardware device based on the info in the shadow
> page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited,
> it should be safe especially for the second case.
> 
> My concern is that, in this RFC, we don't program the
> IOMMU for the mdev device in the userspace via the VFIO
> API directly. Instead, we pass the memory table to the
> kernel driver via the mdev device (BAR0) and ask the
> driver to do the IOMMU programming. Someone may don't
> like it. The main reason why we don't program IOMMU via
> VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.

But it is a pci device after all, isn't it?
IOMMU drivers certainly support that ...

Another issue with this approach is that internal
kernel issues leak out to the interface.

> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> 
> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie

An extra benefit to this could be that requests with PASID
undergo an extra level of translation.
We could use it to avoid the need for shadowing on intel.



Something like this:
- expose to guest a standard virtio device (no pasid support)
- back it by virtio device with pasid support on the host
  by attaching same pasid to all queues

now - guest will build 1 level of page tables

we build first level page tables for requests with pasid
and point the IOMMU to use the guest supplied page tables
for the second level of translation.

Now we do need to forward invalidations but we no
longer need to set the CM bit and shadow valid entries.



> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> > 
> > 
> > -- 
> > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:28           ` [virtio-dev] " Tiwei Bie
  (?)
@ 2018-04-20  3:50           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: alexander.h.duyck, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On Fri, Apr 20, 2018 at 11:28:07AM +0800, Tiwei Bie wrote:
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > > need different drivers in kernel for different device interfaces of
> > > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > > which seems easier and could co-work with other parts of qemu without
> > > > > inventing new type of messages.
> > > > I'm not quite sure. Do you think it's acceptable to
> > > > add various vendor specific hardware drivers in QEMU?
> > > > 
> > > 
> > > I don't object but we need to figure out the advantages of doing it in qemu
> > > too.
> > > 
> > > Thanks
> > 
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> > 
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to
> poke at kernel memory. Its kernel driver needs some
> kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the
> access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used)
> or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for
> the hardware device based on the info in the memory
> table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for
> the hardware device based on the info in the shadow
> page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited,
> it should be safe especially for the second case.
> 
> My concern is that, in this RFC, we don't program the
> IOMMU for the mdev device in the userspace via the VFIO
> API directly. Instead, we pass the memory table to the
> kernel driver via the mdev device (BAR0) and ask the
> driver to do the IOMMU programming. Someone may don't
> like it. The main reason why we don't program IOMMU via
> VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.

But it is a pci device after all, isn't it?
IOMMU drivers certainly support that ...

Another issue with this approach is that internal
kernel issues leak out to the interface.

> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> 
> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie

An extra benefit to this could be that requests with PASID
undergo an extra level of translation.
We could use it to avoid the need for shadowing on intel.



Something like this:
- expose to guest a standard virtio device (no pasid support)
- back it by virtio device with pasid support on the host
  by attaching same pasid to all queues

now - guest will build 1 level of page tables

we build first level page tables for requests with pasid
and point the IOMMU to use the guest supplied page tables
for the second level of translation.

Now we do need to forward invalidations but we no
longer need to set the CM bit and shadow valid entries.



> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> > 
> > 
> > -- 
> > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20  3:50             ` Michael S. Tsirkin
  0 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang,
	kevin.tian

On Fri, Apr 20, 2018 at 11:28:07AM +0800, Tiwei Bie wrote:
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > > need different drivers in kernel for different device interfaces of
> > > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > > said, another idea is to implement userspace vhost backend for accelerators
> > > > > which seems easier and could co-work with other parts of qemu without
> > > > > inventing new type of messages.
> > > > I'm not quite sure. Do you think it's acceptable to
> > > > add various vendor specific hardware drivers in QEMU?
> > > > 
> > > 
> > > I don't object but we need to figure out the advantages of doing it in qemu
> > > too.
> > > 
> > > Thanks
> > 
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> > 
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to
> poke at kernel memory. Its kernel driver needs some
> kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the
> access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used)
> or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for
> the hardware device based on the info in the memory
> table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for
> the hardware device based on the info in the shadow
> page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited,
> it should be safe especially for the second case.
> 
> My concern is that, in this RFC, we don't program the
> IOMMU for the mdev device in the userspace via the VFIO
> API directly. Instead, we pass the memory table to the
> kernel driver via the mdev device (BAR0) and ask the
> driver to do the IOMMU programming. Someone may don't
> like it. The main reason why we don't program IOMMU via
> VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.

But it is a pci device after all, isn't it?
IOMMU drivers certainly support that ...

Another issue with this approach is that internal
kernel issues leak out to the interface.

> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> 
> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie

An extra benefit to this could be that requests with PASID
undergo an extra level of translation.
We could use it to avoid the need for shadowing on intel.



Something like this:
- expose to guest a standard virtio device (no pasid support)
- back it by virtio device with pasid support on the host
  by attaching same pasid to all queues

now - guest will build 1 level of page tables

we build first level page tables for requests with pasid
and point the IOMMU to use the guest supplied page tables
for the second level of translation.

Now we do need to forward invalidations but we no
longer need to set the CM bit and shadow valid entries.



> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> > 
> > 
> > -- 
> > MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:28           ` [virtio-dev] " Tiwei Bie
@ 2018-04-20  3:50             ` Liang, Cunming
  -1 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Bie, Tiwei, Michael S. Tsirkin
  Cc: Jason Wang, alex.williamson, ddutile, Duyck, Alexander H,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, Daly, Dan,
	Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W, Tian, Kevin



> -----Original Message-----
> From: Bie, Tiwei
> Sent: Friday, April 20, 2018 11:28 AM
> To: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>
> Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> 
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > you still need different drivers in kernel for different device
> > > > > interfaces of accelerators. This looks even more complex than
> > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > userspace vhost backend for accelerators which seems easier and
> > > > > could co-work with other parts of qemu without inventing new type of
> messages.
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > > >
> > >
> > > I don't object but we need to figure out the advantages of doing it
> > > in qemu too.
> > >
> > > Thanks
> >
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> >
> > The biggest issue is that you let userspace poke at the device which
> > is also allowed by the IOMMU to poke at kernel memory (needed for
> > kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> kernel driver needs some kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used) or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for the hardware device based
> on the info in the memory table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for the hardware device
> based on the info in the shadow page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited, it should be safe
> especially for the second case.
> 
> My concern is that, in this RFC, we don't program the IOMMU for the mdev
> device in the userspace via the VFIO API directly. Instead, we pass the memory
> table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> IOMMU programming. Someone may don't like it. The main reason why we don't
> program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.
> 
> >
> > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > do not have to trust the device otherwise the security picture becomes
> > more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post
> > "using PASIDs to enable a safe variant of direct ring access".
> 
Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie
> 
> >
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> >
> >
> > --
> > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:28           ` [virtio-dev] " Tiwei Bie
                             ` (3 preceding siblings ...)
  (?)
@ 2018-04-20  3:50           ` Liang, Cunming
  -1 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Bie, Tiwei, Michael S. Tsirkin
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong



> -----Original Message-----
> From: Bie, Tiwei
> Sent: Friday, April 20, 2018 11:28 AM
> To: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>
> Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> 
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > you still need different drivers in kernel for different device
> > > > > interfaces of accelerators. This looks even more complex than
> > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > userspace vhost backend for accelerators which seems easier and
> > > > > could co-work with other parts of qemu without inventing new type of
> messages.
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > > >
> > >
> > > I don't object but we need to figure out the advantages of doing it
> > > in qemu too.
> > >
> > > Thanks
> >
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> >
> > The biggest issue is that you let userspace poke at the device which
> > is also allowed by the IOMMU to poke at kernel memory (needed for
> > kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> kernel driver needs some kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used) or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for the hardware device based
> on the info in the memory table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for the hardware device
> based on the info in the shadow page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited, it should be safe
> especially for the second case.
> 
> My concern is that, in this RFC, we don't program the IOMMU for the mdev
> device in the userspace via the VFIO API directly. Instead, we pass the memory
> table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> IOMMU programming. Someone may don't like it. The main reason why we don't
> program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.
> 
> >
> > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > do not have to trust the device otherwise the security picture becomes
> > more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post
> > "using PASIDs to enable a safe variant of direct ring access".
> 
Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie
> 
> >
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> >
> >
> > --
> > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] RE: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20  3:50             ` Liang, Cunming
  0 siblings, 0 replies; 55+ messages in thread
From: Liang, Cunming @ 2018-04-20  3:50 UTC (permalink / raw)
  To: Bie, Tiwei, Michael S. Tsirkin
  Cc: Jason Wang, alex.williamson, ddutile, Duyck, Alexander H,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, Daly, Dan,
	Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W, Tian, Kevin



> -----Original Message-----
> From: Bie, Tiwei
> Sent: Friday, April 20, 2018 11:28 AM
> To: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>
> Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> 
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > in kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > you still need different drivers in kernel for different device
> > > > > interfaces of accelerators. This looks even more complex than
> > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > userspace vhost backend for accelerators which seems easier and
> > > > > could co-work with other parts of qemu without inventing new type of
> messages.
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > > >
> > >
> > > I don't object but we need to figure out the advantages of doing it
> > > in qemu too.
> > >
> > > Thanks
> >
> > To be frank kernel is exactly where device drivers belong.  DPDK did
> > move them to userspace but that's merely a requirement for data path.
> > *If* you can have them in kernel that is best:
> > - update kernel and there's no need to rebuild userspace
> > - apps can be written in any language no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> >
> > The biggest issue is that you let userspace poke at the device which
> > is also allowed by the IOMMU to poke at kernel memory (needed for
> > kernel driver to work).
> 
> I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> kernel driver needs some kernel memory to work. But the device doesn't have
> the access to them. Instead, the device only has the access to:
> 
> (1) the entire memory of the VM (if vIOMMU isn't used) or
> (2) the memory belongs to the guest virtio device (if
>     vIOMMU is being used).
> 
> Below is the reason:
> 
> For the first case, we should program the IOMMU for the hardware device based
> on the info in the memory table which is the entire memory of the VM.
> 
> For the second case, we should program the IOMMU for the hardware device
> based on the info in the shadow page table of the vIOMMU.
> 
> So the memory can be accessed by the device is limited, it should be safe
> especially for the second case.
> 
> My concern is that, in this RFC, we don't program the IOMMU for the mdev
> device in the userspace via the VFIO API directly. Instead, we pass the memory
> table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> IOMMU programming. Someone may don't like it. The main reason why we don't
> program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> drivers don't support mdev bus.
> 
> >
> > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > do not have to trust the device otherwise the security picture becomes
> > more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post
> > "using PASIDs to enable a safe variant of direct ring access".
> 
Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
> 
> Best regards,
> Tiwei Bie
> 
> >
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> >
> >
> > --
> > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-19 18:40         ` [virtio-dev] " Michael S. Tsirkin
  (?)
@ 2018-04-20  3:52           ` Jason Wang
  -1 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-20  3:52 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月20日 02:40, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
>>>>> One problem is that, different virtio ring compatible devices
>>>>> may have different device interfaces. That is to say, we will
>>>>> need different drivers in QEMU. It could be troublesome. And
>>>>> that's what this patch trying to fix. The idea behind this
>>>>> patch is very simple: mdev is a standard way to emulate device
>>>>> in kernel.
>>>> So you just move the abstraction layer from qemu to kernel, and you still
>>>> need different drivers in kernel for different device interfaces of
>>>> accelerators. This looks even more complex than leaving it in qemu. As you
>>>> said, another idea is to implement userspace vhost backend for accelerators
>>>> which seems easier and could co-work with other parts of qemu without
>>>> inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to
>>> add various vendor specific hardware drivers in QEMU?
>>>
>> I don't object but we need to figure out the advantages of doing it in qemu
>> too.
>>
>> Thanks
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace

Well, you still need to rebuild userspace since a new vhost backend is 
required which relies vhost protocol through mdev API. And I believe 
upgrading userspace package is considered to be more lightweight than 
upgrading kernel. With mdev, we're likely to repeat the story of vhost 
API, dealing with features/versions and inventing new API endless for 
new features. And you will still need to rebuild the userspace.

> - apps can be written in any language no need to maintain multiple
>    libraries or add wrappers

This is not a big issue consider It's not a generic network driver but a 
mdev driver, the only possible user is VM.

> - security concerns are much smaller (ok people are trying to
>    raise the bar with IOMMUs and such, but it's already pretty
>    good even without)

Well, I think not, kernel bugs are much more serious than userspace 
ones. And I beg the kernel driver itself won't be small.

>
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I don't quite get. The userspace driver could be built on top of VFIO 
for sure. So kernel memory were perfectly isolated in this case.

>
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".
>
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.

Well userspace driver could benefit from this too. And we can even go 
further by using nested IO page tables to share IOVA address space 
between devices and a VM.

Thanks

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20  3:52           ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-20  3:52 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: alexander.h.duyck, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang



On 2018年04月20日 02:40, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
>>>>> One problem is that, different virtio ring compatible devices
>>>>> may have different device interfaces. That is to say, we will
>>>>> need different drivers in QEMU. It could be troublesome. And
>>>>> that's what this patch trying to fix. The idea behind this
>>>>> patch is very simple: mdev is a standard way to emulate device
>>>>> in kernel.
>>>> So you just move the abstraction layer from qemu to kernel, and you still
>>>> need different drivers in kernel for different device interfaces of
>>>> accelerators. This looks even more complex than leaving it in qemu. As you
>>>> said, another idea is to implement userspace vhost backend for accelerators
>>>> which seems easier and could co-work with other parts of qemu without
>>>> inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to
>>> add various vendor specific hardware drivers in QEMU?
>>>
>> I don't object but we need to figure out the advantages of doing it in qemu
>> too.
>>
>> Thanks
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace

Well, you still need to rebuild userspace since a new vhost backend is 
required which relies vhost protocol through mdev API. And I believe 
upgrading userspace package is considered to be more lightweight than 
upgrading kernel. With mdev, we're likely to repeat the story of vhost 
API, dealing with features/versions and inventing new API endless for 
new features. And you will still need to rebuild the userspace.

> - apps can be written in any language no need to maintain multiple
>    libraries or add wrappers

This is not a big issue consider It's not a generic network driver but a 
mdev driver, the only possible user is VM.

> - security concerns are much smaller (ok people are trying to
>    raise the bar with IOMMUs and such, but it's already pretty
>    good even without)

Well, I think not, kernel bugs are much more serious than userspace 
ones. And I beg the kernel driver itself won't be small.

>
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I don't quite get. The userspace driver could be built on top of VFIO 
for sure. So kernel memory were perfectly isolated in this case.

>
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".
>
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.

Well userspace driver could benefit from this too. And we can even go 
further by using nested IO page tables to share IOVA address space 
between devices and a VM.

Thanks

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20  3:52           ` Jason Wang
  0 siblings, 0 replies; 55+ messages in thread
From: Jason Wang @ 2018-04-20  3:52 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang



On 2018年04月20日 02:40, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
>>>>> One problem is that, different virtio ring compatible devices
>>>>> may have different device interfaces. That is to say, we will
>>>>> need different drivers in QEMU. It could be troublesome. And
>>>>> that's what this patch trying to fix. The idea behind this
>>>>> patch is very simple: mdev is a standard way to emulate device
>>>>> in kernel.
>>>> So you just move the abstraction layer from qemu to kernel, and you still
>>>> need different drivers in kernel for different device interfaces of
>>>> accelerators. This looks even more complex than leaving it in qemu. As you
>>>> said, another idea is to implement userspace vhost backend for accelerators
>>>> which seems easier and could co-work with other parts of qemu without
>>>> inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to
>>> add various vendor specific hardware drivers in QEMU?
>>>
>> I don't object but we need to figure out the advantages of doing it in qemu
>> too.
>>
>> Thanks
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace

Well, you still need to rebuild userspace since a new vhost backend is 
required which relies vhost protocol through mdev API. And I believe 
upgrading userspace package is considered to be more lightweight than 
upgrading kernel. With mdev, we're likely to repeat the story of vhost 
API, dealing with features/versions and inventing new API endless for 
new features. And you will still need to rebuild the userspace.

> - apps can be written in any language no need to maintain multiple
>    libraries or add wrappers

This is not a big issue consider It's not a generic network driver but a 
mdev driver, the only possible user is VM.

> - security concerns are much smaller (ok people are trying to
>    raise the bar with IOMMUs and such, but it's already pretty
>    good even without)

Well, I think not, kernel bugs are much more serious than userspace 
ones. And I beg the kernel driver itself won't be small.

>
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I don't quite get. The userspace driver could be built on top of VFIO 
for sure. So kernel memory were perfectly isolated in this case.

>
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".
>
> Then using IOMMU with VFIO to limit access through queue to corrent
> ranges of memory.

Well userspace driver could benefit from this too. And we can even go 
further by using nested IO page tables to share IOVA address space 
between devices and a VM.

Thanks


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:50             ` [virtio-dev] " Liang, Cunming
@ 2018-04-20 13:52               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 13:52 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Bie, Tiwei, Jason Wang, alex.williamson, ddutile, Duyck,
	Alexander H, virtio-dev, linux-kernel, kvm, virtualization,
	netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W,
	Tian, Kevin

On Fri, Apr 20, 2018 at 03:50:41AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Bie, Tiwei
> > Sent: Friday, April 20, 2018 11:28 AM
> > To: Michael S. Tsirkin <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> > ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> > virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> > kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> > <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> > Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> > Tian, Kevin <kevin.tian@intel.com>
> > Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> > 
> > On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > > One problem is that, different virtio ring compatible devices
> > > > > > > may have different device interfaces. That is to say, we will
> > > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > > in kernel.
> > > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > > you still need different drivers in kernel for different device
> > > > > > interfaces of accelerators. This looks even more complex than
> > > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > > userspace vhost backend for accelerators which seems easier and
> > > > > > could co-work with other parts of qemu without inventing new type of
> > messages.
> > > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > > vendor specific hardware drivers in QEMU?
> > > > >
> > > >
> > > > I don't object but we need to figure out the advantages of doing it
> > > > in qemu too.
> > > >
> > > > Thanks
> > >
> > > To be frank kernel is exactly where device drivers belong.  DPDK did
> > > move them to userspace but that's merely a requirement for data path.
> > > *If* you can have them in kernel that is best:
> > > - update kernel and there's no need to rebuild userspace
> > > - apps can be written in any language no need to maintain multiple
> > >   libraries or add wrappers
> > > - security concerns are much smaller (ok people are trying to
> > >   raise the bar with IOMMUs and such, but it's already pretty
> > >   good even without)
> > >
> > > The biggest issue is that you let userspace poke at the device which
> > > is also allowed by the IOMMU to poke at kernel memory (needed for
> > > kernel driver to work).
> > 
> > I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> > kernel driver needs some kernel memory to work. But the device doesn't have
> > the access to them. Instead, the device only has the access to:
> > 
> > (1) the entire memory of the VM (if vIOMMU isn't used) or
> > (2) the memory belongs to the guest virtio device (if
> >     vIOMMU is being used).
> > 
> > Below is the reason:
> > 
> > For the first case, we should program the IOMMU for the hardware device based
> > on the info in the memory table which is the entire memory of the VM.
> > 
> > For the second case, we should program the IOMMU for the hardware device
> > based on the info in the shadow page table of the vIOMMU.
> > 
> > So the memory can be accessed by the device is limited, it should be safe
> > especially for the second case.
> > 
> > My concern is that, in this RFC, we don't program the IOMMU for the mdev
> > device in the userspace via the VFIO API directly. Instead, we pass the memory
> > table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> > IOMMU programming. Someone may don't like it. The main reason why we don't
> > program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> > drivers don't support mdev bus.
> > 
> > >
> > > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > > do not have to trust the device otherwise the security picture becomes
> > > more murky.
> > >
> > > I suggested attaching a PASID to (some) queues - see my old post
> > > "using PASIDs to enable a safe variant of direct ring access".
> > 
> Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

Exactly my point.

> > It's pretty cool. We also have some similar ideas.
> > Cunming will talk more about this.
> > 
> > Best regards,
> > Tiwei Bie
> > 
> > >
> > > Then using IOMMU with VFIO to limit access through queue to corrent
> > > ranges of memory.
> > >
> > >
> > > --
> > > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:50             ` [virtio-dev] " Liang, Cunming
  (?)
@ 2018-04-20 13:52             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 13:52 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Duyck, Alexander H, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, Wang, Xiao W, ddutile, Tan, Jianfeng, Wang,
	Zhihong

On Fri, Apr 20, 2018 at 03:50:41AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Bie, Tiwei
> > Sent: Friday, April 20, 2018 11:28 AM
> > To: Michael S. Tsirkin <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> > ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> > virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> > kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> > <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> > Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> > Tian, Kevin <kevin.tian@intel.com>
> > Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> > 
> > On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > > One problem is that, different virtio ring compatible devices
> > > > > > > may have different device interfaces. That is to say, we will
> > > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > > in kernel.
> > > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > > you still need different drivers in kernel for different device
> > > > > > interfaces of accelerators. This looks even more complex than
> > > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > > userspace vhost backend for accelerators which seems easier and
> > > > > > could co-work with other parts of qemu without inventing new type of
> > messages.
> > > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > > vendor specific hardware drivers in QEMU?
> > > > >
> > > >
> > > > I don't object but we need to figure out the advantages of doing it
> > > > in qemu too.
> > > >
> > > > Thanks
> > >
> > > To be frank kernel is exactly where device drivers belong.  DPDK did
> > > move them to userspace but that's merely a requirement for data path.
> > > *If* you can have them in kernel that is best:
> > > - update kernel and there's no need to rebuild userspace
> > > - apps can be written in any language no need to maintain multiple
> > >   libraries or add wrappers
> > > - security concerns are much smaller (ok people are trying to
> > >   raise the bar with IOMMUs and such, but it's already pretty
> > >   good even without)
> > >
> > > The biggest issue is that you let userspace poke at the device which
> > > is also allowed by the IOMMU to poke at kernel memory (needed for
> > > kernel driver to work).
> > 
> > I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> > kernel driver needs some kernel memory to work. But the device doesn't have
> > the access to them. Instead, the device only has the access to:
> > 
> > (1) the entire memory of the VM (if vIOMMU isn't used) or
> > (2) the memory belongs to the guest virtio device (if
> >     vIOMMU is being used).
> > 
> > Below is the reason:
> > 
> > For the first case, we should program the IOMMU for the hardware device based
> > on the info in the memory table which is the entire memory of the VM.
> > 
> > For the second case, we should program the IOMMU for the hardware device
> > based on the info in the shadow page table of the vIOMMU.
> > 
> > So the memory can be accessed by the device is limited, it should be safe
> > especially for the second case.
> > 
> > My concern is that, in this RFC, we don't program the IOMMU for the mdev
> > device in the userspace via the VFIO API directly. Instead, we pass the memory
> > table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> > IOMMU programming. Someone may don't like it. The main reason why we don't
> > program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> > drivers don't support mdev bus.
> > 
> > >
> > > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > > do not have to trust the device otherwise the security picture becomes
> > > more murky.
> > >
> > > I suggested attaching a PASID to (some) queues - see my old post
> > > "using PASIDs to enable a safe variant of direct ring access".
> > 
> Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

Exactly my point.

> > It's pretty cool. We also have some similar ideas.
> > Cunming will talk more about this.
> > 
> > Best regards,
> > Tiwei Bie
> > 
> > >
> > > Then using IOMMU with VFIO to limit access through queue to corrent
> > > ranges of memory.
> > >
> > >
> > > --
> > > MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20 13:52               ` Michael S. Tsirkin
  0 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 13:52 UTC (permalink / raw)
  To: Liang, Cunming
  Cc: Bie, Tiwei, Jason Wang, alex.williamson, ddutile, Duyck,
	Alexander H, virtio-dev, linux-kernel, kvm, virtualization,
	netdev, Daly, Dan, Wang, Zhihong, Tan, Jianfeng, Wang, Xiao W,
	Tian, Kevin

On Fri, Apr 20, 2018 at 03:50:41AM +0000, Liang, Cunming wrote:
> 
> 
> > -----Original Message-----
> > From: Bie, Tiwei
> > Sent: Friday, April 20, 2018 11:28 AM
> > To: Michael S. Tsirkin <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>; alex.williamson@redhat.com;
> > ddutile@redhat.com; Duyck, Alexander H <alexander.h.duyck@intel.com>;
> > virtio-dev@lists.oasis-open.org; linux-kernel@vger.kernel.org;
> > kvm@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > netdev@vger.kernel.org; Daly, Dan <dan.daly@intel.com>; Liang, Cunming
> > <cunming.liang@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Tan,
> > Jianfeng <jianfeng.tan@intel.com>; Wang, Xiao W <xiao.w.wang@intel.com>;
> > Tian, Kevin <kevin.tian@intel.com>
> > Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
> > 
> > On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > > One problem is that, different virtio ring compatible devices
> > > > > > > may have different device interfaces. That is to say, we will
> > > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > > that's what this patch trying to fix. The idea behind this
> > > > > > > patch is very simple: mdev is a standard way to emulate device
> > > > > > > in kernel.
> > > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > > you still need different drivers in kernel for different device
> > > > > > interfaces of accelerators. This looks even more complex than
> > > > > > leaving it in qemu. As you said, another idea is to implement
> > > > > > userspace vhost backend for accelerators which seems easier and
> > > > > > could co-work with other parts of qemu without inventing new type of
> > messages.
> > > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > > vendor specific hardware drivers in QEMU?
> > > > >
> > > >
> > > > I don't object but we need to figure out the advantages of doing it
> > > > in qemu too.
> > > >
> > > > Thanks
> > >
> > > To be frank kernel is exactly where device drivers belong.  DPDK did
> > > move them to userspace but that's merely a requirement for data path.
> > > *If* you can have them in kernel that is best:
> > > - update kernel and there's no need to rebuild userspace
> > > - apps can be written in any language no need to maintain multiple
> > >   libraries or add wrappers
> > > - security concerns are much smaller (ok people are trying to
> > >   raise the bar with IOMMUs and such, but it's already pretty
> > >   good even without)
> > >
> > > The biggest issue is that you let userspace poke at the device which
> > > is also allowed by the IOMMU to poke at kernel memory (needed for
> > > kernel driver to work).
> > 
> > I think the device won't and shouldn't be allowed to poke at kernel memory. Its
> > kernel driver needs some kernel memory to work. But the device doesn't have
> > the access to them. Instead, the device only has the access to:
> > 
> > (1) the entire memory of the VM (if vIOMMU isn't used) or
> > (2) the memory belongs to the guest virtio device (if
> >     vIOMMU is being used).
> > 
> > Below is the reason:
> > 
> > For the first case, we should program the IOMMU for the hardware device based
> > on the info in the memory table which is the entire memory of the VM.
> > 
> > For the second case, we should program the IOMMU for the hardware device
> > based on the info in the shadow page table of the vIOMMU.
> > 
> > So the memory can be accessed by the device is limited, it should be safe
> > especially for the second case.
> > 
> > My concern is that, in this RFC, we don't program the IOMMU for the mdev
> > device in the userspace via the VFIO API directly. Instead, we pass the memory
> > table to the kernel driver via the mdev device (BAR0) and ask the driver to do the
> > IOMMU programming. Someone may don't like it. The main reason why we don't
> > program IOMMU via VFIO API in userspace directly is that, currently IOMMU
> > drivers don't support mdev bus.
> > 
> > >
> > > Yes, maybe if device is not buggy it's all fine, but it's better if we
> > > do not have to trust the device otherwise the security picture becomes
> > > more murky.
> > >
> > > I suggested attaching a PASID to (some) queues - see my old post
> > > "using PASIDs to enable a safe variant of direct ring access".
> > 
> Ideally we can have a device binding with normal driver in host, meanwhile support to allocate a few queues attaching with PASID on-demand. By vhost mdev transport channel, the data path ability of queues(as a device) can expose to qemu vhost adaptor as a vDPA instance. Then we can avoid VF number limitation, providing vhost data path acceleration in a small granularity.

Exactly my point.

> > It's pretty cool. We also have some similar ideas.
> > Cunming will talk more about this.
> > 
> > Best regards,
> > Tiwei Bie
> > 
> > >
> > > Then using IOMMU with VFIO to limit access through queue to corrent
> > > ranges of memory.
> > >
> > >
> > > --
> > > MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:52           ` Jason Wang
@ 2018-04-20 14:12             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 14:12 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Fri, Apr 20, 2018 at 11:52:47AM +0800, Jason Wang wrote:
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I don't quite get. The userspace driver could be built on top of VFIO for
> sure. So kernel memory were perfectly isolated in this case.

VFIO does what it can but it mostly just has the IOMMU to play with.
So don't overestimate what it can do - it assumes a high level
of spec compliance for protections to work. For example,
ATS is enabled by default if device has it, and that
treats translated requests are trusted. FLS is assumed to reset
the device for when VFIO is unbound from the device. etc.


> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> 
> Well userspace driver could benefit from this too. And we can even go
> further by using nested IO page tables to share IOVA address space between
> devices and a VM.
> 
> Thanks

Yes I suggested this separately.

-- 
MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [RFC] vhost: introduce mdev based hardware vhost backend
  2018-04-20  3:52           ` Jason Wang
                             ` (2 preceding siblings ...)
  (?)
@ 2018-04-20 14:12           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 14:12 UTC (permalink / raw)
  To: Jason Wang
  Cc: alexander.h.duyck, virtio-dev, kvm, netdev, linux-kernel,
	virtualization, xiao.w.wang, ddutile, jianfeng.tan, zhihong.wang

On Fri, Apr 20, 2018 at 11:52:47AM +0800, Jason Wang wrote:
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I don't quite get. The userspace driver could be built on top of VFIO for
> sure. So kernel memory were perfectly isolated in this case.

VFIO does what it can but it mostly just has the IOMMU to play with.
So don't overestimate what it can do - it assumes a high level
of spec compliance for protections to work. For example,
ATS is enabled by default if device has it, and that
treats translated requests are trusted. FLS is assumed to reset
the device for when VFIO is unbound from the device. etc.


> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> 
> Well userspace driver could benefit from this too. And we can even go
> further by using nested IO page tables to share IOVA address space between
> devices and a VM.
> 
> Thanks

Yes I suggested this separately.

-- 
MST

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
@ 2018-04-20 14:12             ` Michael S. Tsirkin
  0 siblings, 0 replies; 55+ messages in thread
From: Michael S. Tsirkin @ 2018-04-20 14:12 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, alex.williamson, ddutile, alexander.h.duyck,
	virtio-dev, linux-kernel, kvm, virtualization, netdev, dan.daly,
	cunming.liang, zhihong.wang, jianfeng.tan, xiao.w.wang

On Fri, Apr 20, 2018 at 11:52:47AM +0800, Jason Wang wrote:
> > The biggest issue is that you let userspace poke at the
> > device which is also allowed by the IOMMU to poke at
> > kernel memory (needed for kernel driver to work).
> 
> I don't quite get. The userspace driver could be built on top of VFIO for
> sure. So kernel memory were perfectly isolated in this case.

VFIO does what it can but it mostly just has the IOMMU to play with.
So don't overestimate what it can do - it assumes a high level
of spec compliance for protections to work. For example,
ATS is enabled by default if device has it, and that
treats translated requests are trusted. FLS is assumed to reset
the device for when VFIO is unbound from the device. etc.


> > 
> > Yes, maybe if device is not buggy it's all fine, but
> > it's better if we do not have to trust the device
> > otherwise the security picture becomes more murky.
> > 
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> > 
> > Then using IOMMU with VFIO to limit access through queue to corrent
> > ranges of memory.
> 
> Well userspace driver could benefit from this too. And we can even go
> further by using nested IO page tables to share IOVA address space between
> devices and a VM.
> 
> Thanks

Yes I suggested this separately.

-- 
MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 55+ messages in thread

end of thread, other threads:[~2018-04-20 14:12 UTC | newest]

Thread overview: 55+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-02 15:23 [RFC] vhost: introduce mdev based hardware vhost backend Tiwei Bie
2018-04-02 15:23 ` [virtio-dev] " Tiwei Bie
2018-04-10  2:52 ` Jason Wang
2018-04-10  2:52   ` [virtio-dev] " Jason Wang
2018-04-10  4:57   ` Tiwei Bie
2018-04-10  4:57     ` [virtio-dev] " Tiwei Bie
2018-04-10  7:25     ` Jason Wang
2018-04-10  7:25     ` Jason Wang
2018-04-10  7:25       ` [virtio-dev] " Jason Wang
2018-04-19 18:40       ` Michael S. Tsirkin
2018-04-19 18:40       ` Michael S. Tsirkin
2018-04-19 18:40         ` [virtio-dev] " Michael S. Tsirkin
2018-04-20  3:28         ` Tiwei Bie
2018-04-20  3:28         ` Tiwei Bie
2018-04-20  3:28           ` [virtio-dev] " Tiwei Bie
2018-04-20  3:50           ` Michael S. Tsirkin
2018-04-20  3:50           ` Michael S. Tsirkin
2018-04-20  3:50             ` [virtio-dev] " Michael S. Tsirkin
2018-04-20  3:50           ` Liang, Cunming
2018-04-20  3:50             ` [virtio-dev] " Liang, Cunming
2018-04-20 13:52             ` Michael S. Tsirkin
2018-04-20 13:52             ` Michael S. Tsirkin
2018-04-20 13:52               ` [virtio-dev] " Michael S. Tsirkin
2018-04-20  3:50           ` Liang, Cunming
2018-04-20  3:52         ` Jason Wang
2018-04-20  3:52           ` [virtio-dev] " Jason Wang
2018-04-20  3:52           ` Jason Wang
2018-04-20 14:12           ` Michael S. Tsirkin
2018-04-20 14:12             ` [virtio-dev] " Michael S. Tsirkin
2018-04-20 14:12           ` Michael S. Tsirkin
2018-04-10  7:51     ` [virtio-dev] " Paolo Bonzini
2018-04-10  7:51       ` Paolo Bonzini
2018-04-10  7:51       ` Paolo Bonzini
2018-04-10  9:23       ` Liang, Cunming
2018-04-10 13:36         ` Michael S. Tsirkin
2018-04-10 13:36         ` Michael S. Tsirkin
2018-04-10 13:36           ` Michael S. Tsirkin
2018-04-10 14:23           ` Liang, Cunming
2018-04-10 14:23             ` Liang, Cunming
2018-04-11  1:38             ` Tian, Kevin
2018-04-11  1:38               ` Tian, Kevin
2018-04-11  1:38             ` Tian, Kevin
2018-04-11  2:18             ` Jason Wang
2018-04-11  2:18             ` Jason Wang
2018-04-11  2:18               ` Jason Wang
2018-04-11  2:18               ` Jason Wang
2018-04-11  2:01         ` [virtio-dev] " Stefan Hajnoczi
2018-04-11  2:01         ` Stefan Hajnoczi
2018-04-11  2:01           ` Stefan Hajnoczi
2018-04-11  2:08         ` Jason Wang
2018-04-11  2:08           ` Jason Wang
2018-04-11  2:08         ` Jason Wang
2018-04-10  9:23       ` Liang, Cunming
2018-04-10  4:57   ` Tiwei Bie
2018-04-10  2:52 ` Jason Wang

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.