From: Jason Wang <jasowang@redhat.com>
To: Liu Xiaodong <xiaodong.liu@intel.com>,
Xie Yongji <xieyongji@bytedance.com>,
mst@redhat.com, stefanha@redhat.com, sgarzare@redhat.com,
parav@nvidia.com, hch@infradead.org,
christian.brauner@canonical.com, rdunlap@infradead.org,
willy@infradead.org, viro@zeniv.linux.org.uk, axboe@kernel.dk,
bcrl@kvack.org, corbet@lwn.net, mika.penttila@nextfour.com,
dan.carpenter@oracle.com, joro@8bytes.org,
gregkh@linuxfoundation.org
Cc: songmuchun@bytedance.com,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, kvm@vger.kernel.org,
linux-fsdevel@vger.kernel.org, iommu@lists.linux-foundation.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace
Date: Mon, 28 Jun 2021 12:35:06 +0800 [thread overview]
Message-ID: <bdbe3a79-e5ce-c3a5-4c68-c11c65857377@redhat.com> (raw)
In-Reply-To: <20210628103309.GA205554@storage2.sh.intel.com>
On 2021/6/28 6:33 PM, Liu Xiaodong wrote:
> On Tue, Jun 15, 2021 at 10:13:21PM +0800, Xie Yongji wrote:
>> This series introduces a framework that makes it possible to implement
>> software-emulated vDPA devices in userspace. To keep it simple, the
>> emulated vDPA device's control path is handled in the kernel and only the
>> data path is implemented in userspace.
>>
>> Since the emulated vDPA device's control path is handled in the kernel,
>> a message mechanism is introduced to make userspace aware of data-path
>> related changes. Userspace can use read()/write() to receive and reply
>> to the control messages.
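
For illustration, a minimal sketch of that control-message loop on the
daemon side could look like the following. The request/response struct
names are assumptions taken from this series' uAPI header
(include/uapi/linux/vduse.h); the exact layout and result codes may differ:

#include <string.h>
#include <unistd.h>
#include <linux/vduse.h>

/* Process a single control message from the VDUSE device fd. */
static int vduse_handle_one_message(int dev_fd)
{
	struct vduse_dev_request req;
	struct vduse_dev_response resp;

	/* Each read() returns exactly one control message from the kernel. */
	if (read(dev_fd, &req, sizeof(req)) != sizeof(req))
		return -1;

	/*
	 * Handle req.type here, e.g. device status changes or virtqueue
	 * state queries, then fill in the response for that request.
	 */
	memset(&resp, 0, sizeof(resp));
	resp.request_id = req.request_id;
	resp.result = 0;	/* assumed "success" result code */

	/* The reply is sent back to the kernel with a single write(). */
	if (write(dev_fd, &resp, sizeof(resp)) != sizeof(resp))
		return -1;

	return 0;
}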
>>
>> In the data path, the core is mapping the DMA buffer into the VDUSE
>> daemon's address space, which can be implemented in different ways
>> depending on the vDPA bus to which the vDPA device is attached.
>>
>> In the virtio-vdpa case, we implement an MMU-based on-chip IOMMU driver
>> with a bounce-buffering mechanism to achieve that. In the vhost-vdpa case,
>> the DMA buffer resides in a userspace memory region which can be shared
>> with the VDUSE userspace process by transferring the shmfd.
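
In both cases the daemon ends up mmap()-ing the memory behind a given IOVA
range before touching any descriptors or data buffers. A rough sketch,
assuming the VDUSE_IOTLB_GET_FD ioctl and struct vduse_iotlb_entry from this
series' uAPI header (the names and field layout are assumptions, not a
guaranteed interface):

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/vduse.h>

/*
 * Map the region backing @iova into the daemon and return a pointer to
 * the requested address. Protection flags are simplified to read/write;
 * a real daemon would honour the permission reported for the region.
 */
static void *vduse_map_iova(int dev_fd, unsigned long long iova)
{
	struct vduse_iotlb_entry entry = { .start = iova, .last = iova };
	void *base;
	int fd;

	/* Ask the kernel for an fd backing the IOVA range (bounce buffer
	 * in the virtio-vdpa case, shared usermem in the vhost-vdpa case). */
	fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
	if (fd < 0)
		return NULL;

	/* The kernel fills in the real region boundaries and file offset. */
	base = mmap(NULL, entry.last - entry.start + 1,
		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, entry.offset);
	close(fd);
	if (base == MAP_FAILED)
		return NULL;

	return (char *)base + (iova - entry.start);
}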
>>
>> The details and our use case are shown below:
>>
>> ------------------------ ------------------------- ----------------------------------------------
>> | Container | | QEMU(VM) | | VDUSE daemon |
>> | --------- | | ------------------- | | ------------------------- ---------------- |
>> | |dev/vdx| | | |/dev/vhost-vdpa-x| | | | vDPA device emulation | | block driver | |
>> ------------+----------- -----------+------------ -------------+----------------------+---------
>> | | | |
>> | | | |
>> ------------+---------------------------+----------------------------+----------------------+---------
>> | | block device | | vhost device | | vduse driver | | TCP/IP | |
>> | -------+-------- --------+-------- -------+-------- -----+---- |
>> | | | | | |
>> | ----------+---------- ----------+----------- -------+------- | |
>> | | virtio-blk driver | | vhost-vdpa driver | | vdpa device | | |
>> | ----------+---------- ----------+----------- -------+------- | |
>> | | virtio bus | | | |
>> | --------+----+----------- | | | |
>> | | | | | |
>> | ----------+---------- | | | |
>> | | virtio-blk device | | | | |
>> | ----------+---------- | | | |
>> | | | | | |
>> | -----------+----------- | | | |
>> | | virtio-vdpa driver | | | | |
>> | -----------+----------- | | | |
>> | | | | vdpa bus | |
>> | -----------+----------------------+---------------------------+------------ | |
>> | ---+--- |
>> -----------------------------------------------------------------------------------------| NIC |------
>> ---+---
>> |
>> ---------+---------
>> | Remote Storages |
>> -------------------
>>
>> We make use of it to implement a block device connecting to
>> our distributed storage, which can be used both in containers and
>> VMs. Thus, we can have a unified technology stack in these two cases.
>>
>> To test it with null-blk:
>>
>> $ qemu-storage-daemon \
>> --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
>> --monitor chardev=charmonitor \
>> --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
>> --export type=vduse-blk,id=test,node-name=disk0,writable=on,name=vduse-null,num-queues=16,queue-size=128
>>
>> The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
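
For completeness, once the daemon has created the VDUSE device it still has
to be attached to a vDPA bus with the vdpa management tool before /dev/vdx
(or /dev/vhost-vdpa-x) appears; the management-device name "vduse" below is
an assumption for this series:

$ vdpa dev add name vduse-null mgmtdev vduse
$ modprobe virtio-vdpa        # or vhost-vdpa, depending on the consumer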
>>
>> To make userspace VDUSE processes such as qemu-storage-daemon runnable
>> by an unprivileged user, we did some work on the virtio drivers to avoid
>> trusting the device, including:
>>
>> - validating the used length:
>>
>> * https://lore.kernel.org/lkml/20210531135852.113-1-xieyongji@bytedance.com/
>> * https://lore.kernel.org/lkml/20210525125622.1203-1-xieyongji@bytedance.com/
>>
>> - validating the device config:
>>
>> * https://lore.kernel.org/lkml/20210615104810.151-1-xieyongji@bytedance.com/
>>
>> - validating the device response:
>>
>> * https://lore.kernel.org/lkml/20210615105218.214-1-xieyongji@bytedance.com/
>>
>> Since I'm not sure if I'm missing something during auditing, especially in
>> some virtio device drivers that I'm not familiar with, we currently limit
>> the supported device type to the virtio block device. Support for other
>> device types can be added after the security issues of the corresponding
>> device drivers are clarified or fixed in the future.
>>
>> Future work:
>> - Improve performance
>> - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
>> - Support more device types
>>
>> V7 to V8:
>> - Rebased to newest kernel tree
>> - Rework VDUSE driver to handle the device's control path in kernel
>> - Limit the supported device type to virtio block device
>> - Export free_iova_fast()
>> - Remove the virtio-blk and virtio-scsi patches (will send them alone)
>> - Remove all module parameters
>> - Use the same MAJOR for both control device and VDUSE devices
>> - Avoid eventfd cleanup in vduse_dev_release()
>>
>> V6 to V7:
>> - Export alloc_iova_fast()
>> - Add get_config_size() callback
>> - Add some patches to avoid trusting virtio devices
>> - Add limited device emulation
>> - Add some documents
>> - Use workqueue to inject config irq
>> - Add parameter on vq irq injecting
>> - Rename vduse_domain_get_mapping_page() to vduse_domain_get_coherent_page()
>> - Add WARN_ON() to catch message failure
>> - Add some padding/reserved fields to uAPI structure
>> - Fix some bugs
>> - Rebase to vhost.git
>>
>> V5 to V6:
>> - Export receive_fd() instead of __receive_fd()
>> - Factor out the unmapping logic of pa and va separately
>> - Remove the logic of bounce page allocation in page fault handler
>> - Use PAGE_SIZE as IOVA allocation granule
>> - Add EPOLLOUT support
>> - Enable setting API version in userspace
>> - Fix some bugs
>>
>> V4 to V5:
>> - Remove the patch for irq binding
>> - Use a single IOTLB for all types of mapping
>> - Factor out vhost_vdpa_pa_map()
>> - Add some sample codes in document
>> - Use receive_fd_user() to pass file descriptor
>> - Fix some bugs
>>
>> V3 to V4:
>> - Rebase to vhost.git
>> - Split some patches
>> - Add some documents
>> - Use ioctl to inject interrupt rather than eventfd
>> - Enable config interrupt support
>> - Support binding irq to the specified cpu
>> - Add two module parameters to limit bounce/iova size
>> - Create char device rather than anon inode per vduse
>> - Reuse vhost IOTLB for iova domain
>> - Rework the message mechanism in control path
>>
>> V2 to V3:
>> - Rework the MMU-based IOMMU driver
>> - Use the iova domain as iova allocator instead of genpool
>> - Support transferring vma->vm_file in vhost-vdpa
>> - Add SVA support in vhost-vdpa
>> - Remove the patches on bounce pages reclaim
>>
>> V1 to V2:
>> - Add vhost-vdpa support
>> - Add some documents
>> - Based on the vdpa management tool
>> - Introduce a workqueue for irq injection
>> - Replace interval tree with array map to store the iova_map
>>
>> Xie Yongji (10):
>> iova: Export alloc_iova_fast() and free_iova_fast();
>> file: Export receive_fd() to modules
>> eventfd: Increase the recursion depth of eventfd_signal()
>> vhost-iotlb: Add an opaque pointer for vhost IOTLB
>> vdpa: Add an opaque pointer for vdpa_config_ops.dma_map()
>> vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap()
>> vdpa: Support transferring virtual addressing during DMA mapping
>> vduse: Implement an MMU-based IOMMU driver
>> vduse: Introduce VDUSE - vDPA Device in Userspace
>> Documentation: Add documentation for VDUSE
>>
>> Documentation/userspace-api/index.rst | 1 +
>> Documentation/userspace-api/ioctl/ioctl-number.rst | 1 +
>> Documentation/userspace-api/vduse.rst | 222 +++
>> drivers/iommu/iova.c | 2 +
>> drivers/vdpa/Kconfig | 10 +
>> drivers/vdpa/Makefile | 1 +
>> drivers/vdpa/ifcvf/ifcvf_main.c | 2 +-
>> drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
>> drivers/vdpa/vdpa.c | 9 +-
>> drivers/vdpa/vdpa_sim/vdpa_sim.c | 8 +-
>> drivers/vdpa/vdpa_user/Makefile | 5 +
>> drivers/vdpa/vdpa_user/iova_domain.c | 545 ++++++++
>> drivers/vdpa/vdpa_user/iova_domain.h | 73 +
>> drivers/vdpa/vdpa_user/vduse_dev.c | 1453 ++++++++++++++++++++
>> drivers/vdpa/virtio_pci/vp_vdpa.c | 2 +-
>> drivers/vhost/iotlb.c | 20 +-
>> drivers/vhost/vdpa.c | 148 +-
>> fs/eventfd.c | 2 +-
>> fs/file.c | 6 +
>> include/linux/eventfd.h | 5 +-
>> include/linux/file.h | 7 +-
>> include/linux/vdpa.h | 21 +-
>> include/linux/vhost_iotlb.h | 3 +
>> include/uapi/linux/vduse.h | 143 ++
>> 24 files changed, 2641 insertions(+), 50 deletions(-)
>> create mode 100644 Documentation/userspace-api/vduse.rst
>> create mode 100644 drivers/vdpa/vdpa_user/Makefile
>> create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
>> create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
>> create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
>> create mode 100644 include/uapi/linux/vduse.h
>>
>> --
>> 2.11.0
> Hi, Yongji
>
> Great work! Your approach of implementing a software IOMMU so that the
> data path can be processed efficiently by a userspace application is
> really wise. Sorry, I've only just become aware of your work and patches.
>
>
> I was working on a similar thing, aiming to get a vhost-user-blk device
> from the SPDK vhost-target exported as a local host kernel block device.
> Its diagram is like this:
>
>
> -----------------------------
> ------------------------ | ----------------- | ---------------------------------------
> | <RunC Container> | <<<<<<<<| Shared-Memory |>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> |
> | --------- | v | ----------------- | | v |
> | |dev/vdx| | v | <virtio-local-agent> | | <Vhost-user Target> v |
> ------------+----------- v | ------------------------ | | --------------------------v------ |
> | v | |/dev/virtio-local-ctrl| | | | unix socket | |block driver | |
> | v ------------+---------------- --------+--------------------v---------
> | v | | v
> ------------+----------------v--------------+----------------------------+--------------------v--------|
> | | block device | v | Misc device | | v |
> | -------+-------- v --------+------- | v |
> | | v | | v |
> | ----------+---------- v | | v |
> | | virtio-blk driver | v | | v |
> | ----------+---------- v | | v |
> | | virtio bus v | | v |
> | --------+---+------- v | | v |
> | | v | | v |
> | | v | | v |
> | ----------+---------- v ---------+----------- | v |
> | | virtio-blk device |--<----| virtio-local driver |----------------< v |
> | ----------+---------- ----------+----------- v |
> | ---------+--------|
> -------------------------------------------------------------------------------------| RNIC |--| PCIe |-
> ----+--- | NVMe |
> | --------
> ---------+---------
> | Remote Storages |
> -------------------
>
>
> I just drafted an initial proof-of-concept version. When I saw your RFC
> mail, I thought the SPDK target might depend on your work, so I could
> directly drop mine.
> But after a glance at the RFC patches, it seems it is not so easy or
> efficient to get VDUSE leveraged by SPDK.
> (Please correct me if I have a wrong understanding of VDUSE. :) )
>
> The large barrier is the bounce-buffer mapping: SPDK requires hugepages
> for NVMe over PCIe and RDMA, so mapping some preallocated hugepages as
> the bounce buffer is necessary; otherwise it's hard to avoid an extra
> memcpy from the bounce buffer to a hugepage.
> If you can add an option to map hugepages as the bounce buffer,
> then SPDK could also be a potential user of VDUSE.
Several issues:

- VDUSE needs to limit the total size of the bounce buffers (64M if I'm
not wrong). Does that work for SPDK?

- VDUSE can use hugepages, but I'm not sure we can mandate hugepages (or
we need to introduce new flags to support this).

Thanks
>
> It would be better if the SPDK vhost-target could leverage the VDUSE
> datapath directly and efficiently. Even though the control path is vDPA
> based, we may work out a daemon acting as an agent to bridge the SPDK
> vhost-target with VDUSE.
> Then users who have already deployed the SPDK vhost-target could smoothly
> run such an agent daemon without code modifications to the SPDK
> vhost-target itself.
> (This is only nice-to-have for the SPDK vhost-target app, not mandatory
> for SPDK.) :)
> At least one small barrier is there that blocks a vhost-target from using
> the VDUSE datapath efficiently:
> - The current IO completion irq of VDUSE is ioctl based. If an option is
>   added to make it eventfd based, then the vhost-target could directly
>   notify IO completion via a negotiated eventfd.
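
For reference, the two notification models being compared look roughly like
this on the daemon side; the ioctl name and argument form are assumptions
based on this series' uAPI, and the eventfd variant is the hypothetical
option being requested here:

#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vduse.h>

/* Current model: each IO completion kicks the guest through an ioctl
 * on the VDUSE device fd (ioctl name/argument form assumed). */
static int notify_used_ioctl(int dev_fd, uint32_t vq_index)
{
	return ioctl(dev_fd, VDUSE_INJECT_VQ_IRQ, &vq_index);
}

/* Requested model: signal a pre-negotiated eventfd instead, so an
 * external vhost-target process can report completions directly through
 * the eventfd it already owns. */
static int notify_used_eventfd(int irq_efd)
{
	uint64_t one = 1;

	return write(irq_efd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}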
>
>
> Thanks
> From Xiaodong
Thread overview: 76+ messages
2021-06-15 14:13 [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2021-06-15 14:13 ` [PATCH v8 01/10] iova: Export alloc_iova_fast() and free_iova_fast(); Xie Yongji
2021-06-15 14:13 ` [PATCH v8 02/10] file: Export receive_fd() to modules Xie Yongji
2021-06-15 14:13 ` [PATCH v8 03/10] eventfd: Increase the recursion depth of eventfd_signal() Xie Yongji
2021-06-17 8:33 ` He Zhe
2021-06-18 3:29 ` Yongji Xie
2021-06-18 8:41 ` He Zhe
2021-06-18 8:44 ` [PATCH] eventfd: Enlarge recursion limit to allow vhost to work He Zhe
2021-07-03 8:31 ` Michael S. Tsirkin
2021-08-25 7:57 ` Yongji Xie
2021-06-15 14:13 ` [PATCH v8 04/10] vhost-iotlb: Add an opaque pointer for vhost IOTLB Xie Yongji
2021-06-15 14:13 ` [PATCH v8 05/10] vdpa: Add an opaque pointer for vdpa_config_ops.dma_map() Xie Yongji
2021-06-15 14:13 ` [PATCH v8 06/10] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() Xie Yongji
2021-06-15 14:13 ` [PATCH v8 07/10] vdpa: Support transferring virtual addressing during DMA mapping Xie Yongji
2021-06-15 14:13 ` [PATCH v8 08/10] vduse: Implement an MMU-based IOMMU driver Xie Yongji
2021-06-15 14:13 ` [PATCH v8 09/10] vduse: Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2021-06-21 9:13 ` Jason Wang
2021-06-21 10:41 ` Yongji Xie
2021-06-22 5:06 ` Jason Wang
2021-06-22 7:22 ` Yongji Xie
2021-06-22 7:49 ` Jason Wang
2021-06-22 8:14 ` Yongji Xie
2021-06-23 3:30 ` Jason Wang
2021-06-23 5:50 ` Yongji Xie
2021-06-24 3:34 ` Jason Wang
2021-06-24 4:46 ` Yongji Xie
2021-06-24 8:13 ` Jason Wang
2021-06-24 9:16 ` Yongji Xie
2021-06-25 3:08 ` Jason Wang
2021-06-25 4:19 ` Yongji Xie
2021-06-28 4:40 ` Jason Wang
2021-06-29 2:26 ` Yongji Xie
2021-06-29 3:29 ` Jason Wang
2021-06-29 3:56 ` Yongji Xie
2021-06-29 4:03 ` Jason Wang
2021-06-24 14:46 ` Stefan Hajnoczi
2021-06-29 2:59 ` Yongji Xie
2021-06-30 9:51 ` Stefan Hajnoczi
2021-07-01 6:50 ` Yongji Xie
2021-07-01 7:55 ` Jason Wang
2021-07-01 10:26 ` Yongji Xie
2021-07-02 3:25 ` Jason Wang
2021-07-07 8:52 ` Stefan Hajnoczi
2021-07-07 9:19 ` Yongji Xie
2021-06-15 14:13 ` [PATCH v8 10/10] Documentation: Add documentation for VDUSE Xie Yongji
2021-06-24 13:01 ` Stefan Hajnoczi
2021-06-29 5:43 ` Yongji Xie
2021-06-30 10:06 ` Stefan Hajnoczi
2021-07-01 10:00 ` Yongji Xie
2021-07-01 13:15 ` Stefan Hajnoczi
2021-07-04 9:49 ` Yongji Xie
2021-07-05 3:36 ` Jason Wang
2021-07-05 12:49 ` Stefan Hajnoczi
2021-07-06 2:34 ` Jason Wang
2021-07-06 10:14 ` Stefan Hajnoczi
[not found] ` <CACGkMEs2HHbUfarum8uQ6wuXoDwLQUSXTsa-huJFiqr__4cwRg@mail.gmail.com>
[not found] ` <YOSOsrQWySr0andk@stefanha-x1.localdomain>
[not found] ` <100e6788-7fdf-1505-d69c-bc28a8bc7a78@redhat.com>
[not found] ` <YOVr801d01YOPzLL@stefanha-x1.localdomain>
2021-07-07 9:24 ` Jason Wang
2021-07-07 15:54 ` Stefan Hajnoczi
2021-07-08 4:17 ` Jason Wang
2021-07-08 9:06 ` Stefan Hajnoczi
2021-07-08 12:35 ` Yongji Xie
2021-07-06 3:04 ` Yongji Xie
2021-07-06 10:22 ` Stefan Hajnoczi
2021-07-07 9:09 ` Yongji Xie
2021-07-08 9:07 ` Stefan Hajnoczi
2021-06-24 15:12 ` [PATCH v8 00/10] Introduce VDUSE - vDPA Device in Userspace Stefan Hajnoczi
2021-06-29 3:15 ` Yongji Xie
2021-06-28 10:33 ` Liu Xiaodong
2021-06-28 4:35 ` Jason Wang [this message]
2021-06-28 5:54 ` Liu, Xiaodong
2021-06-29 4:10 ` Jason Wang
2021-06-29 7:56 ` Liu, Xiaodong
2021-06-29 8:14 ` Yongji Xie
2021-06-28 10:32 ` Yongji Xie
2021-06-29 4:12 ` Jason Wang
2021-06-29 6:40 ` Yongji Xie
2021-06-29 7:33 ` Jason Wang