linux-mm.kvack.org archive mirror
From: Jason Wang <jasowang@redhat.com>
To: Xie Yongji <xieyongji@bytedance.com>,
	mst@redhat.com, stefanha@redhat.com, sgarzare@redhat.com,
	parav@nvidia.com, akpm@linux-foundation.org,
	rdunlap@infradead.org, willy@infradead.org,
	viro@zeniv.linux.org.uk, axboe@kernel.dk, bcrl@kvack.org,
	corbet@lwn.net
Cc: virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, kvm@vger.kernel.org, linux-aio@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC v2 00/13] Introduce VDUSE - vDPA Device in Userspace
Date: Wed, 23 Dec 2020 14:38:35 +0800	[thread overview]
Message-ID: <c892652a-3f57-c337-8c67-084ba6d10834@redhat.com> (raw)
In-Reply-To: <20201222145221.711-1-xieyongji@bytedance.com>


On 2020/12/22 10:52 PM, Xie Yongji wrote:
> This series introduces a framework that can be used to implement
> vDPA devices in a userspace program. The work consists of two parts:
> control path forwarding and data path offloading.
>
> In the control path, the VDUSE driver will make use of a message
> mechanism to forward config operations from the vdpa bus driver
> to userspace. Userspace can use read()/write() to receive/reply to
> those control messages.
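
Just to make that flow concrete, a rough userspace sketch of one request/reply
round trip is below. The struct layout, field names, and the opcode are made up
for illustration (they are not the series' uapi), and a socketpair stands in
for the VDUSE char device fd so the sketch is self-contained:

```c
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical message layout; the real uapi in the series may differ. */
struct vduse_dev_request {
	uint32_t type;       /* e.g. a "get config" opcode */
	uint32_t request_id;
	uint8_t  payload[8];
};

struct vduse_dev_response {
	uint32_t request_id; /* echoes the request it answers */
	uint32_t result;
};

#define VDUSE_REQ_GET_CONFIG 1 /* hypothetical opcode */

/* One round trip: the "kernel" end sends a request, the daemon
 * read()s it and write()s back a matching reply, as the cover
 * letter describes for the control path. */
int vduse_control_roundtrip(void)
{
	int sv[2]; /* sv[0] plays the kernel side, sv[1] the daemon side */
	struct vduse_dev_request req = { .type = VDUSE_REQ_GET_CONFIG,
					 .request_id = 42 };
	struct vduse_dev_request seen;
	struct vduse_dev_response resp, ack;

	if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv))
		return -1;

	/* Kernel side forwards a config operation to userspace. */
	if (write(sv[0], &req, sizeof(req)) != sizeof(req))
		return -1;

	/* Daemon receives it via read() and replies via write(). */
	if (read(sv[1], &seen, sizeof(seen)) != sizeof(seen))
		return -1;
	resp.request_id = seen.request_id;
	resp.result = 0;
	if (write(sv[1], &resp, sizeof(resp)) != sizeof(resp))
		return -1;

	/* Kernel side matches the reply to the pending request. */
	if (read(sv[0], &ack, sizeof(ack)) != sizeof(ack))
		return -1;
	close(sv[0]);
	close(sv[1]);
	return ack.request_id == req.request_id ? (int)ack.result : -1;
}
```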
>
> In the data path, the core idea is mapping the DMA buffer into the
> VDUSE daemon's address space, which can be implemented in different
> ways depending on the vdpa bus to which the vDPA device is attached.
>
> In the virtio-vdpa case, we implement an MMU-based on-chip IOMMU driver
> with a bounce-buffering mechanism to achieve that.
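
As a rough model of the bounce-buffering idea (the helpers and names below are
invented for illustration, not the series' API): on map-for-device the data is
copied from the origin buffer into a bounce page, which is what the daemon
actually sees, and on unmap device-written data is copied back out:

```c
#include <stdint.h>
#include <string.h>

/* Direction follows the usual DMA API convention. */
enum dma_dir { DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* On map: if the device (the VDUSE daemon) will read the buffer,
 * fill the bounce page first; the daemon only ever sees bounce
 * pages, never the origin pages. */
void bounce_map(uint8_t *bounce, const uint8_t *orig, size_t len,
		enum dma_dir dir)
{
	if (dir == DMA_TO_DEVICE)
		memcpy(bounce, orig, len);
}

/* On unmap: if the device wrote into the bounce page, copy the
 * result back to the origin buffer. */
void bounce_unmap(const uint8_t *bounce, uint8_t *orig, size_t len,
		  enum dma_dir dir)
{
	if (dir == DMA_FROM_DEVICE)
		memcpy(orig, bounce, len);
}

/* Round trip: the "device" reads a request through one bounce page
 * and its reply is copied back out of another on unmap. */
int bounce_roundtrip(void)
{
	uint8_t req[4] = { 1, 2, 3, 4 }, reply[4] = { 0 };
	uint8_t bounce_out[4], bounce_in[4] = { 9, 8, 7, 6 };

	bounce_map(bounce_out, req, sizeof(req), DMA_TO_DEVICE);
	/* ... daemon consumes bounce_out, produces bounce_in ... */
	bounce_unmap(bounce_in, reply, sizeof(reply), DMA_FROM_DEVICE);
	return memcmp(bounce_out, req, 4) == 0 && reply[0] == 9 ? 0 : -1;
}
```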


Rethinking the bounce buffer stuff: instead of using kernel pages 
with mmap(), how about just using userspace pages like vhost does?

It would mean we need a worker to do the bouncing, but we wouldn't 
have to care about annoying issues like page reclaiming.
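
A minimal userspace sketch of that worker idea (entirely hypothetical, just to
make the trade-off concrete, with a pthread standing in for a kernel worker and
plain buffers standing in for pinned userspace pages): map/unmap only queue a
copy job, and the worker drains it, so no kernel pages are involved at all:

```c
#include <pthread.h>
#include <string.h>

/* One pending bounce: copy len bytes from src to dst. In a real
 * driver, dst/src would be userspace pages pinned the way vhost
 * pins them; here they are plain buffers. Single-shot for brevity. */
struct bounce_job {
	void *dst;
	const void *src;
	size_t len;
};

static struct bounce_job pending;
static int have_job, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* The worker: waits for a job, performs the copy, signals completion. */
static void *bounce_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!have_job)
		pthread_cond_wait(&cond, &lock);
	memcpy(pending.dst, pending.src, pending.len);
	done = 1;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* "Map" path: queue the copy and wait for the worker to finish it. */
int bounce_via_worker(void *dst, const void *src, size_t len)
{
	pthread_t t;

	if (pthread_create(&t, NULL, bounce_worker, NULL))
		return -1;
	pthread_mutex_lock(&lock);
	pending = (struct bounce_job){ dst, src, len };
	have_job = 1;
	pthread_cond_broadcast(&cond);
	while (!done)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	return pthread_join(t, NULL);
}
```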


> And in the vhost-vdpa case, the DMA
> buffer resides in a userspace memory region that can be shared with the
> VDUSE userspace process by transferring the shmfd.
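
The shmfd handoff can be sketched with memfd_create(): one side creates and
maps the region, the fd is passed to the daemon (over an SCM_RIGHTS message in
practice; elided here), and the daemon mmap()s the same fd to see the same
pages. The demo below maps the fd twice in one process as a stand-in for the
two processes; the name string is arbitrary:

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create an anonymous shared region, map it "twice" (standing in for
 * the vhost-vdpa side and the VDUSE daemon), and check that a write
 * through one mapping is visible through the other. */
int shmfd_share_demo(void)
{
	const size_t len = 4096;
	int fd = memfd_create("vduse-dma", 0); /* hypothetical name */
	char *a, *b;
	int ok;

	if (fd < 0 || ftruncate(fd, len))
		return -1;
	a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (a == MAP_FAILED || b == MAP_FAILED)
		return -1;

	strcpy(a, "dma buffer");               /* "guest" side writes   */
	ok = strcmp(b, "dma buffer") == 0;     /* "daemon" side sees it */

	munmap(a, len);
	munmap(b, len);
	close(fd);
	return ok ? 0 : -1;
}
```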
>
> The details and our use case are shown below:
>
> ------------------------    -------------------------   ----------------------------------------------
> |            Container |    |              QEMU(VM) |   |                               VDUSE daemon |
> |       ---------      |    |  -------------------  |   | ------------------------- ---------------- |
> |       |dev/vdx|      |    |  |/dev/vhost-vdpa-x|  |   | | vDPA device emulation | | block driver | |
> ------------+-----------     -----------+------------   -------------+----------------------+---------
>              |                           |                            |                      |
>              |                           |                            |                      |
> ------------+---------------------------+----------------------------+----------------------+---------
> |    | block device |           |  vhost device |            | vduse driver |          | TCP/IP |    |
> |    -------+--------           --------+--------            -------+--------          -----+----    |
> |           |                           |                           |                       |        |
> | ----------+----------       ----------+-----------         -------+-------                |        |
> | | virtio-blk driver |       |  vhost-vdpa driver |         | vdpa device |                |        |
> | ----------+----------       ----------+-----------         -------+-------                |        |
> |           |      virtio bus           |                           |                       |        |
> |   --------+----+-----------           |                           |                       |        |
> |                |                      |                           |                       |        |
> |      ----------+----------            |                           |                       |        |
> |      | virtio-blk device |            |                           |                       |        |
> |      ----------+----------            |                           |                       |        |
> |                |                      |                           |                       |        |
> |     -----------+-----------           |                           |                       |        |
> |     |  virtio-vdpa driver |           |                           |                       |        |
> |     -----------+-----------           |                           |                       |        |
> |                |                      |                           |    vdpa bus           |        |
> |     -----------+----------------------+---------------------------+------------           |        |
> |                                                                                        ---+---     |
> -----------------------------------------------------------------------------------------| NIC |------
>                                                                                           ---+---
>                                                                                              |
>                                                                                     ---------+---------
>                                                                                     | Remote Storages |
>                                                                                     -------------------
>
> We make use of it to implement a block device connecting to
> our distributed storage, which can be used both in containers and
> VMs. Thus, we can have a unified technology stack in both cases.
>
> To test it with null-blk:
>
>    $ qemu-storage-daemon \
>        --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
>        --monitor chardev=charmonitor \
>        --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
>        --export vduse-blk,id=test,node-name=disk0,writable=on,vduse-id=1,num-queues=16,queue-size=128
>
> The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
>
> Future work:
>    - Improve performance (e.g. zero copy implementation in datapath)
>    - Config interrupt support
>    - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
>
> This is now based on below series:
> https://lore.kernel.org/netdev/20201112064005.349268-1-parav@nvidia.com/
>
> V1 to V2:
> - Add vhost-vdpa support


I may have missed something, but I don't see any code to support that. 
E.g. neither set_map nor dma_map/unmap is implemented in the config ops.

Thanks


> - Add some documents
> - Based on the vdpa management tool
> - Introduce a workqueue for irq injection
> - Replace interval tree with array map to store the iova_map
>
> Xie Yongji (13):
>    mm: export zap_page_range() for driver use
>    eventfd: track eventfd_signal() recursion depth separately in different cases
>    eventfd: Increase the recursion depth of eventfd_signal()
>    vdpa: Remove the restriction that only supports virtio-net devices
>    vdpa: Pass the netlink attributes to ops.dev_add()
>    vduse: Introduce VDUSE - vDPA Device in Userspace
>    vduse: support get/set virtqueue state
>    vdpa: Introduce process_iotlb_msg() in vdpa_config_ops
>    vduse: Add support for processing vhost iotlb message
>    vduse: grab the module's references until there is no vduse device
>    vduse/iova_domain: Support reclaiming bounce pages
>    vduse: Add memory shrinker to reclaim bounce pages
>    vduse: Introduce a workqueue for irq injection
>
>   Documentation/driver-api/vduse.rst                 |   91 ++
>   Documentation/userspace-api/ioctl/ioctl-number.rst |    1 +
>   drivers/vdpa/Kconfig                               |    8 +
>   drivers/vdpa/Makefile                              |    1 +
>   drivers/vdpa/vdpa.c                                |    2 +-
>   drivers/vdpa/vdpa_sim/vdpa_sim.c                   |    3 +-
>   drivers/vdpa/vdpa_user/Makefile                    |    5 +
>   drivers/vdpa/vdpa_user/eventfd.c                   |  229 ++++
>   drivers/vdpa/vdpa_user/eventfd.h                   |   48 +
>   drivers/vdpa/vdpa_user/iova_domain.c               |  517 ++++++++
>   drivers/vdpa/vdpa_user/iova_domain.h               |  103 ++
>   drivers/vdpa/vdpa_user/vduse.h                     |   59 +
>   drivers/vdpa/vdpa_user/vduse_dev.c                 | 1373 ++++++++++++++++++++
>   drivers/vhost/vdpa.c                               |   34 +-
>   fs/aio.c                                           |    3 +-
>   fs/eventfd.c                                       |   20 +-
>   include/linux/eventfd.h                            |    5 +-
>   include/linux/vdpa.h                               |   11 +-
>   include/uapi/linux/vdpa.h                          |    1 +
>   include/uapi/linux/vduse.h                         |  119 ++
>   mm/memory.c                                        |    1 +
>   21 files changed, 2598 insertions(+), 36 deletions(-)
>   create mode 100644 Documentation/driver-api/vduse.rst
>   create mode 100644 drivers/vdpa/vdpa_user/Makefile
>   create mode 100644 drivers/vdpa/vdpa_user/eventfd.c
>   create mode 100644 drivers/vdpa/vdpa_user/eventfd.h
>   create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
>   create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
>   create mode 100644 drivers/vdpa/vdpa_user/vduse.h
>   create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
>   create mode 100644 include/uapi/linux/vduse.h
>



  parent reply	other threads:[~2020-12-23  6:38 UTC|newest]

Thread overview: 55+ messages
2020-12-22 14:52 [RFC v2 00/13] Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2020-12-22 14:52 ` [RFC v2 01/13] mm: export zap_page_range() for driver use Xie Yongji
2020-12-22 15:44   ` Christoph Hellwig
2020-12-22 14:52 ` [RFC v2 02/13] eventfd: track eventfd_signal() recursion depth separately in different cases Xie Yongji
2020-12-22 14:52 ` [RFC v2 03/13] eventfd: Increase the recursion depth of eventfd_signal() Xie Yongji
2020-12-22 14:52 ` [RFC v2 04/13] vdpa: Remove the restriction that only supports virtio-net devices Xie Yongji
2020-12-22 14:52 ` [RFC v2 05/13] vdpa: Pass the netlink attributes to ops.dev_add() Xie Yongji
2020-12-22 14:52 ` [RFC v2 06/13] vduse: Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2020-12-23  8:08   ` Jason Wang
2020-12-23 14:17     ` Yongji Xie
2020-12-24  3:01       ` Jason Wang
2020-12-24  8:34         ` Yongji Xie
2020-12-25  6:59           ` Jason Wang
2021-01-08 13:32   ` Bob Liu
2021-01-10 10:03     ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 07/13] vduse: support get/set virtqueue state Xie Yongji
2020-12-22 14:52 ` [RFC v2 08/13] vdpa: Introduce process_iotlb_msg() in vdpa_config_ops Xie Yongji
2020-12-23  8:36   ` Jason Wang
2020-12-23 11:06     ` Yongji Xie
2020-12-24  2:36       ` Jason Wang
2020-12-24  7:24         ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 09/13] vduse: Add support for processing vhost iotlb message Xie Yongji
2020-12-23  9:05   ` Jason Wang
2020-12-23 12:14     ` [External] " Yongji Xie
2020-12-24  2:41       ` Jason Wang
2020-12-24  7:37         ` Yongji Xie
2020-12-25  2:37           ` Yongji Xie
2020-12-25  7:02             ` Jason Wang
2020-12-25 11:36               ` Yongji Xie
2020-12-25  6:57           ` Jason Wang
2020-12-25 10:31             ` Yongji Xie
2020-12-28  7:43               ` Jason Wang
2020-12-28  8:14                 ` Yongji Xie
2020-12-28  8:43                   ` Jason Wang
2020-12-28  9:12                     ` Yongji Xie
2020-12-29  9:11                       ` Jason Wang
2020-12-29 10:26                         ` Yongji Xie
2020-12-30  6:10                           ` Jason Wang
2020-12-30  7:09                             ` Yongji Xie
2020-12-30  8:41                               ` Jason Wang
2020-12-30 10:12                                 ` Yongji Xie
2020-12-31  2:49                                   ` Jason Wang
2020-12-31  5:15                                     ` Yongji Xie
2020-12-31  5:49                                       ` Jason Wang
2020-12-31  6:52                                         ` Yongji Xie
2020-12-31  7:11                                           ` Jason Wang
2020-12-31  8:00                                             ` Yongji Xie
2020-12-22 14:52 ` [RFC v2 10/13] vduse: grab the module's references until there is no vduse device Xie Yongji
2020-12-22 14:52 ` [RFC v2 11/13] vduse/iova_domain: Support reclaiming bounce pages Xie Yongji
2020-12-22 14:52 ` [RFC v2 12/13] vduse: Add memory shrinker to reclaim " Xie Yongji
2020-12-22 14:52 ` [RFC v2 13/13] vduse: Introduce a workqueue for irq injection Xie Yongji
2020-12-23  6:38 ` Jason Wang [this message]
2020-12-23  8:14   ` [RFC v2 00/13] Introduce VDUSE - vDPA Device in Userspace Jason Wang
2020-12-23 10:59   ` Yongji Xie
2020-12-24  2:24     ` Jason Wang
